Prediction of a service demand using combined forecasting approach
NASA Astrophysics Data System (ADS)
Zhou, Ling
2017-08-01
Forecasting helps a logistics service provider cut operational and management costs while maintaining service levels. Our case study investigates how to forecast short-term logistics demand for an LTL (less-than-truckload) carrier. A combined approach draws on several forecasting methods simultaneously instead of relying on a single method; it can offset the weaknesses of one method with the strengths of another, improving prediction accuracy. The main issues in combined forecast modeling are how to select the methods to combine and how to determine the weight coefficients among them. The selection principles are that each method should suit the forecasting problem itself and that the methods should differ from one another in character as much as possible. Based on these principles, exponential smoothing, ARIMA, and a neural network are chosen to form the combined approach. A least-squares technique is then employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over each of the three single methods. The work helps managers select prediction methods in practice.
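As a hedged illustration of the weighting step described above: combination weights can be estimated by least squares on historical forecasts. The sketch below uses synthetic data and invented variable names; it is not the paper's implementation.

```python
# Minimal sketch: least-squares estimation of combination weights.
# F holds historical forecasts from three methods (e.g., exponential
# smoothing, ARIMA, neural network) as columns; y is observed demand.
# All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(100, 10, size=50)                 # observed demand
F = np.column_stack([y + rng.normal(0, e, 50)    # one column per method
                     for e in (3.0, 4.0, 5.0)])

w, *_ = np.linalg.lstsq(F, y, rcond=None)        # min ||F w - y||^2
w = w / w.sum()                                  # normalize to sum to 1
combined = F @ w
print("weights:", np.round(w, 3))
print("RMSE:", np.sqrt(np.mean((combined - y) ** 2)))
```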
Effects of Two Combined Methods on the Teaching of Basic Astronomy Concepts
ERIC Educational Resources Information Center
Korur, Fikret; Enil, Gizem; Göçer, Gizem
2016-01-01
The authors mainly aimed to investigate the following question: Are there any significant effects of the first combined method of a conceptual change approach with refutation text, worksheets, and activities with respect to the second combined method of a conceptual change approach with conceptual texts, presentations, and activities on students'…
Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method
NASA Astrophysics Data System (ADS)
Lee, G.; Jun, K. S.; Chung, E.-S.
2015-04-01
This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with data fuzzification to quantify spatial flood vulnerability across multiple criteria. In general, the GDM method is an effective tool for formulating a compromise solution involving various decision makers, since different stakeholders may have different perspectives on flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise-based method can be used to obtain a nearly ideal solution according to all established criteria. By combining the GDM method and the fuzzy VIKOR method, the approach can effectively propose compromise decisions. The spatial flood vulnerability of the southern Han River obtained using the GDM approach combined with the fuzzy VIKOR method was compared with that obtained using general MCDM methods, such as fuzzy TOPSIS, and classical GDM methods (i.e., Borda, Condorcet, and Copeland). As a result, the proposed fuzzy GDM approach can reduce the uncertainty in data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method can provide robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.
Calculation of evapotranspiration: Recursive and explicit methods
USDA-ARS?s Scientific Manuscript database
Crop yield is proportional to crop evapotranspiration (ETc) and it is important to calculate ETc correctly. Methods to calculate ETc have combined empirical and theoretical approaches. The combination method was used to calculate potential ETp. It is a combination method because it combined the ener...
Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method
NASA Astrophysics Data System (ADS)
Lee, G.; Jun, K. S.; Chung, E.-S.
2014-09-01
This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with fuzzified data to quantify spatial flood vulnerability across multiple evaluation criteria. In general, the GDM method is an effective tool for formulating a compromise solution involving various decision makers, since different stakeholders may have different perspectives on flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise-based method can be used to obtain a nearly ideal solution according to all established criteria. Triangular fuzzy numbers are used to consider the uncertainty of the weights and the crisp data of the proxy variables. By combining the GDM method and the fuzzy VIKOR method, the approach can effectively propose compromise decisions. The spatial flood vulnerability of the southern Han River obtained using the GDM approach combined with the fuzzy VIKOR method was compared with the results from general MCDM methods, such as fuzzy TOPSIS, and classical GDM methods, such as those developed by Borda, Condorcet, and Copeland. The evaluated priorities were significantly dependent on the employed decision-making method. The proposed fuzzy GDM approach can reduce the uncertainty in data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method can provide robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.
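For readers unfamiliar with the ranking core of VIKOR, the sketch below shows the crisp S, R, Q computation on an invented decision matrix. The fuzzy extension with triangular fuzzy numbers used in the papers above is not reproduced; all data, weights, and the benefit-type-criteria assumption are illustrative.

```python
# Minimal sketch of crisp VIKOR (the studies above use a fuzzy extension).
# X: alternatives x criteria (benefit-type assumed); w: criteria weights.
import numpy as np

X = np.array([[0.7, 0.4, 0.6],
              [0.5, 0.8, 0.3],
              [0.9, 0.6, 0.5]])
w = np.array([0.5, 0.3, 0.2])     # assumed weights
v = 0.5                           # "majority rule" strategy weight

f_best, f_worst = X.max(axis=0), X.min(axis=0)
d = w * (f_best - X) / (f_best - f_worst)   # weighted normalized distances
S, R = d.sum(axis=1), d.max(axis=1)         # group utility, individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("ranking (best first):", np.argsort(Q))
```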
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, Manohar D.; Cockrell, C. R.; Beck, F. B.
1995-01-01
A combined finite element method/method of moments (FEM/MoM) approach is used to analyze the electromagnetic scattering properties of a three-dimensional-cavity-backed aperture in an infinite ground plane. The FEM is used to formulate the fields inside the cavity, and the MoM (with subdomain bases) in both spectral and spatial domains is used to formulate the fields above the ground plane. Fields in the aperture and the cavity are solved using a system of equations resulting from the combination of the FEM and the MoM. By virtue of the FEM, this combined approach is applicable to all arbitrarily shaped cavities with inhomogeneous material fillings, and because of the subdomain bases used in the MoM, the apertures can be of any arbitrary shape. This approach leads to a partly sparse and partly full symmetric matrix, which is efficiently solved using a biconjugate gradient algorithm. Numerical results are presented to validate the analysis.
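The biconjugate gradient step mentioned above can be sketched with SciPy's iterative solver; the small random symmetric system below merely stands in for the partly sparse, partly full FEM/MoM matrix.

```python
# Minimal sketch: iterative solution with the biconjugate gradient method.
# The random sparse symmetric system is a stand-in for the FEM/MoM matrix.
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import bicg

n = 200
A = sprandom(n, n, density=0.05, random_state=0)
A = A + A.T + 10 * identity(n)      # symmetric, diagonally dominant
b = np.ones(n)

x, info = bicg(A.tocsr(), b)        # info == 0 signals convergence
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```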
NASA Astrophysics Data System (ADS)
Guo, Yang; Becker, Ute; Neese, Frank
2018-03-01
Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations, and (2) fragment-based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches, using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster-in-molecule (CIM) approach as the fragment-based method. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for the subsystem calculations. Our cluster-in-molecule approach is closely related to, but deviates slightly from, approaches in the literature, since we have avoided real-space cutoffs. Moreover, the distant pair correlations neglected in previous CIM approaches are treated approximately here. Six very large molecules (503-2380 atoms) were studied. At both the MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for three-dimensional systems. While we have found little incentive for combining CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) CIM offers better parallelization opportunities; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and hence allows large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently encountered cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.
Promoting Critical, Elaborative Discussions through a Collaboration Script and Argument Diagrams
ERIC Educational Resources Information Center
Scheuer, Oliver; McLaren, Bruce M.; Weinberger, Armin; Niebuhr, Sabine
2014-01-01
During the past two decades a variety of approaches to support argumentation learning in computer-based learning environments have been investigated. We present an approach that combines argumentation diagramming and collaboration scripts, two methods successfully used in the past individually. The rationale for combining the methods is to…
Discovering Synergistic Drug Combination from a Computational Perspective.
Ding, Pingjian; Luo, Jiawei; Liang, Cheng; Xiao, Qiu; Cao, Buwen; Li, Guanghui
2018-03-30
Synergistic drug combinations play an important role in the treatment of complex diseases. Identifying effective drug combinations is vital to further reduce side effects and improve therapeutic efficiency. In previous years, in vitro experimentation has been the main route to discovering synergistic drug combinations, but it is costly in both time and resources. Therefore, with the rapid development of computational models and the explosive growth of large-scale phenotypic data, computational methods for discovering synergistic drug combinations are an efficient and promising tool and contribute to precision medicine. How the computational model is constructed is the key question, since different computational strategies yield different performance. In this review, recent advances in computational methods for predicting effective drug combinations are summarized from multiple aspects. First, the various datasets used to discover synergistic drug combinations are summarized. Second, we discuss feature-based approaches, partitioning them into two classes: those based on similarity measures and those based on machine learning. Third, we discuss network-based approaches for uncovering synergistic drug combinations. Finally, we analyze computational methods for predicting effective drug combinations and discuss their prospects. Copyright © Bentham Science Publishers.
A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium
Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.
2014-01-01
Background Interview- and chart-based methods for identifying delirium have been validated. However, the relative strengths and limitations of each method have not been described, nor has a combined approach (using both interview and chart) been systematically examined. Objectives To compare chart- and interview-based methods for identification of delirium. Design, Setting and Participants Participants were 300 patients aged 70+ undergoing major elective surgery (the majority orthopedic) who were interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and the characteristics of patients identified by each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results The cumulative incidence of delirium was 23% (n = 68) by the interview-based method, 12% (n = 35) by the chart-based method, and 27% (n = 82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions Interview- and chart-based methods have specific strengths for identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042
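To make the agreement statistics concrete, the sketch below computes overall agreement and Cohen's kappa from a 2x2 cross-classification; the counts are invented for illustration, not the study's data.

```python
# Minimal sketch: agreement and Cohen's kappa for two binary ratings
# (interview vs chart). The 2x2 counts are illustrative only.
import numpy as np

table = np.array([[210, 22],        # rows: interview no/yes
                  [ 38, 30]])       # cols: chart no/yes
n = table.sum()
po = np.trace(table) / n                      # observed agreement
pe = table.sum(1) @ table.sum(0) / n**2       # expected chance agreement
kappa = (po - pe) / (1 - pe)
print(f"agreement = {po:.2f}, kappa = {kappa:.2f}")
```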
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1978-01-01
The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that it gives improved predictions of the local pressures and loadings over either linear theory or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high-speed configurations.
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing-data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete-case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
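As a hedged sketch of the sequential conditional strategy (one of the two approaches compared), scikit-learn's IterativeImputer implements chained-equations imputation. The joint-modeling comparator and the longitudinal covariance structures discussed in the paper are not shown, and the data are synthetic.

```python
# Minimal sketch: sequential conditional (chained-equations) imputation of
# a systematically missing subject-level variable. Synthetic data only.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 100)
X[rng.random(100) < 0.3, 2] = np.nan    # variable missing for some subjects

imp = IterativeImputer(sample_posterior=True, random_state=0)
X_completed = imp.fit_transform(X)      # one draw; repeat for multiple imputations
```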
Mixed Methods Approaches in Family Science Research
ERIC Educational Resources Information Center
Plano Clark, Vicki L.; Huddleston-Casas, Catherine A.; Churchill, Susan L.; Green, Denise O'Neil; Garrett, Amanda L.
2008-01-01
The complex phenomena of interest to family scientists require the use of quantitative and qualitative approaches. Researchers across the social sciences are now turning to mixed methods designs that combine these two approaches. Mixed methods research has great promise for addressing family science topics, but only if researchers understand the…
An approach to achieve progress in spacecraft shielding
NASA Astrophysics Data System (ADS)
Thoma, K.; Schäfer, F.; Hiermaier, S.; Schneider, E.
2004-01-01
Progress in shield design against space debris can be achieved only when a combined approach based on several tools is used. This approach depends on the combined application of advanced numerical methods, specific material models, and experimental determination of input parameters for these models. Examples of experimental methods for material characterization are given, covering the range from quasi-static to very high strain rates for materials like Nextel and carbon fiber-reinforced materials. Mesh-free numerical methods have extraordinary capabilities in the simulation of extreme material behaviour, including complete failure with phase changes, combined with shock wave phenomena and the interaction with structural components. In this paper the benefits of combining numerical methods, material modelling, and detailed experimental studies for shield design are demonstrated. The following examples are given: (1) development of a material model for Nextel and Kevlar-Epoxy to enable numerical simulation of hypervelocity impacts on complex heavy protection shields for the International Space Station; (2) the influence of projectile shape on the protection performance of Whipple shields, and how experimental problems in accelerating such shapes can be overcome by systematic numerical simulation; (3) the benefits of using metallic foams in "sandwich bumper shields" for spacecraft, and how to approach systematic characterization of such materials.
Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method
NASA Astrophysics Data System (ADS)
Zhang, Xiangnan
2018-03-01
Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, and boundary conditions. Reliability methods measure the structural safety condition and determine the optimal design-parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory with optimization, is the most commonly used approach to minimize structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
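The reliability evaluation at the heart of RBDO can be illustrated by a Monte Carlo estimate of the failure probability for a simple limit-state function g = R - S. The distributions below are assumptions for illustration; a real RBDO loop would wrap this evaluation inside an optimizer.

```python
# Minimal sketch: Monte Carlo failure-probability estimate for g(X) = R - S.
# Distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
R = rng.normal(250.0, 20.0, N)    # resistance
S = rng.normal(180.0, 30.0, N)    # load effect
pf = np.mean(R - S < 0)           # P[g(X) < 0]
print(f"estimated failure probability: {pf:.4f}")
```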
NASA Astrophysics Data System (ADS)
Yang, Jinping; Li, Peizhen; Yang, Youfa; Xu, Dian
2018-04-01
Empirical mode decomposition (EMD) is a highly adaptable signal processing method. However, the EMD approach has certain drawbacks, including distortions from end effects and mode mixing. In the present study, these two problems are addressed using an end extension method based on the support vector regression machine (SVRM) and a modal decomposition method based on the characteristics of the Hilbert transform. The algorithm includes two steps: using the SVRM, the time series data are extended at both endpoints to reduce the end effects, and then, a modified EMD method using the characteristics of the Hilbert transform is performed on the resulting signal to reduce mode mixing. A new combined static-dynamic method for identifying structural damage is presented. This method combines the static and dynamic information in an equilibrium equation that can be solved using the Moore-Penrose generalized matrix inverse. The combination method uses the differences in displacements of the structure with and without damage and variations in the modal force vector. Tests on a four-story, steel-frame structure were conducted to obtain static and dynamic responses of the structure. The modal parameters are identified using data from the dynamic tests and improved EMD method. The new method is shown to be more accurate and effective than the traditional EMD method. Through tests with a shear-type test frame, the higher performance of the proposed static-dynamic damage detection approach, which can detect both single and multiple damage locations and the degree of the damage, is demonstrated. For structures with multiple damage, the combined approach is more effective than either the static or dynamic method. The proposed EMD method and static-dynamic damage detection method offer improved modal identification and damage detection, respectively, in structures.
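The Moore-Penrose step in the combined static-dynamic method can be sketched as a pseudoinverse solve of an overdetermined equilibrium system; the sensitivity matrix and residual vector below are random stand-ins, not the paper's formulation.

```python
# Minimal sketch: damage parameters from A @ delta = b via the
# Moore-Penrose pseudoinverse. A and b are random stand-ins for the
# stacked static/modal sensitivities and measured residuals.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 8))
delta_true = np.zeros(8)
delta_true[3] = 0.2                              # one damaged element
b = A @ delta_true + rng.normal(0, 1e-3, 12)     # noisy measurements

delta = np.linalg.pinv(A) @ b                    # least-squares solution
print("identified damage parameters:", np.round(delta, 3))
```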
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691
Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J
2017-01-01
A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and the availability of data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but it was limited because it required specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations; the WLSFEM is then used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
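A scalar toy version of the weighting idea: combine a model prediction with noisy pointwise data by weighted least squares, trusting accurate data more. This is only a stand-in for the WLSFEM Navier-Stokes formulation; all numbers are invented.

```python
# Minimal sketch: pointwise weighted combination of a model solution with
# noisy data, down-weighting less accurate measurements. Toy example only.
import numpy as np

rng = np.random.default_rng(2)
u_model = np.linspace(0.0, 1.0, 20)              # numerical solution
sigma = np.where(np.arange(20) < 10, 0.01, 0.2)  # per-point data accuracy
u_data = u_model + rng.normal(0, sigma)          # noisy measurements

w_data = 1.0 / sigma**2          # accurate data get large weights
w_model = 100.0                  # assumed confidence in the model
u_final = (w_model * u_model + w_data * u_data) / (w_model + w_data)
```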
Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs
ERIC Educational Resources Information Center
Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.
2005-01-01
The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…
ERIC Educational Resources Information Center
Eckes, Thomas
2017-01-01
This paper presents an approach to standard setting that combines the prototype group method (PGM; Eckes, 2012) with a receiver operating characteristic (ROC) analysis. The combined PGM-ROC approach is applied to setting cut scores on a placement test of English as a foreign language (EFL). To implement the PGM, experts first named learners whom…
Issues in evaluation: evaluating assessments of elderly people using a combination of methods.
McEwan, R T
1989-02-01
In evaluating a health service, individuals will give differing accounts of its performance, according to their experiences of the service, and the evaluative perspective they adopt. The value of a service may also change through time, and according to the particular part of the service studied. Traditional health care evaluations have generally not accounted for this variability because of the approaches used. Studies evaluating screening or assessment programmes for the elderly have focused on programme effectiveness and efficiency, using relatively inflexible quantitative methods. Evaluative approaches must reflect the complexity of health service provision, and methods must vary to suit the particular research objective. Under these circumstances, this paper presents the case for the use of multiple triangulation in evaluative research, where differing methods and perspectives are combined in one study. Emphasis is placed on the applications and benefits of subjectivist approaches in evaluation. An example of combined methods is provided in the form of an evaluation of the Newcastle Care Plan for the Elderly.
ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES
LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.
2008-01-01
Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
NASA Astrophysics Data System (ADS)
Kieseler, Jan
2017-11-01
A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is publicly available only in rare cases. In the absence of this information, the most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases, as shown in this article. The method discussed here provides a solution to this problem. It relies only on the published result and its covariance or Hessian, and it is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
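For intuition, a covariance-aware combination of two measurements of the same quantity (the classic best-linear-unbiased-estimate formula) looks as follows. The numbers are invented, and the paper's actual contribution, reconstructing the needed correlations from a published fit's covariance or Hessian, is not reproduced.

```python
# Minimal sketch: best linear unbiased combination of two correlated
# measurements x with covariance V. Numbers are illustrative.
import numpy as np

x = np.array([172.1, 173.0])
V = np.array([[0.49, 0.30],
              [0.30, 0.81]])     # includes correlated systematics

ones = np.ones(2)
Vinv = np.linalg.inv(V)
w = Vinv @ ones / (ones @ Vinv @ ones)       # combination weights
xbar, var = w @ x, 1.0 / (ones @ Vinv @ ones)
print(f"combined: {xbar:.2f} +/- {np.sqrt(var):.2f}")
```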
ERIC Educational Resources Information Center
Hampden-Thompson, Gillian; Lubben, Fred; Bennett, Judith
2011-01-01
Quantitative secondary analysis of large-scale data can be combined with in-depth qualitative methods. In this paper, we discuss the role of this combined methods approach in examining the uptake of physics and chemistry in post compulsory schooling for students in England. The secondary data analysis of the National Pupil Database (NPD) served…
Approaches, tools and methods used for setting priorities in health research in the 21st century
Yoshida, Sachiyo
2016-01-01
Background Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. Results A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods. PMID:26401271
Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Brandon, Jay M.
2017-01-01
Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study, efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications, are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.
Combining the Cutting and Mulliken methods for primary repair of the bilateral cleft lip nose.
Morovic, Carmen Gloria; Cutting, Court
2005-11-01
Since 1990, primary bilateral cleft nasal reconstruction has been focused on placing the lower lateral cartilages into normal anatomical position. Of the four major techniques in this class, the Cutting (i.e., retrograde) method and the Mulliken method have been most successful. The retrograde method makes no external nasal incisions, but requires either preoperative or postoperative nasal molding to achieve maximum benefit. Mulliken's technique does not require molding, but leaves the footplates of the medial crura in the depression above the projecting premaxilla associated with the diminutive anterior nasal spine. Leaving the footplates in place also prevents adequate approximation of the alar bases. In this article, the two methods are combined to achieve the benefits of both. We report our experience with the retrograde nasal approach associated with marginal rim incisions (Mulliken method) in a series of 25 consecutive bilateral cleft lip cases simultaneous with lip repair. We performed a retrograde approach through membranous septum incisions elevating a prolabial-columellar flap. To facilitate alar cartilage manipulation we added bilateral marginal rim incisions. Nasal width, columella length and width, tip projection, and nasolabial angle were analyzed after a minimum of 2 years after surgery. These were compared with a normal, age-matched, control group. We also examined nostril symmetry and marginal nostril scars. Columellar length was not statistically significantly different from that of the control group (p = 0.122442). Nasal width, columellar width, tip projection, and nasolabial angle were all significantly greater in the cleft group than normal (p < 0.001). No hypertrophied scars were found associated with the marginal rim scar. Adding the Mulliken approach allows alar cartilage manipulation to be performed more easily than when using the retrograde approach alone. Tip projection and alar base narrowing are facilitated using the combined technique rather than the Mulliken approach alone. Prolabial flap manipulation is safe using this combined approach, even in cases with a severely projected premaxilla. We believe that the combined approach is safe and yields better long-term results than either technique alone.
Approaches, tools and methods used for setting priorities in health research in the 21st century.
Yoshida, Sachiyo
2016-06-01
Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods.
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Leung, Martin S. K.
1995-01-01
The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model-order and model-complexity reductions were investigated. Singular perturbation methods were attempted first and found to be unsatisfactory. A second approach based on regular perturbation analysis was subsequently investigated. It also fails, because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. The study therefore demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. In a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation with the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law and results in a guidance solution for the entire flight regime, including both atmospheric and exoatmospheric flight phases.
Combining accounting approaches to practice valuation.
Schwartzben, D; Finkler, S A
1998-06-01
Healthcare organizations that wish to acquire physician or ambulatory care practices can choose from a variety of practice valuation approaches. Basic accounting methods assess the value of a physician practice on the basis of a historical, balance-sheet description of tangible assets. Yet these methods alone are inadequate to determine the true financial value of a practice. By using a combination of accounting approaches to practice valuation that consider factors such as fair market value, opportunity cost, and discounted cash flow over a defined time period, organizations can more accurately assess a practice's actual value.
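Of the factors listed, discounted cash flow is the most formula-driven; a minimal sketch follows, with an invented cash-flow projection and discount rate.

```python
# Minimal sketch: present value of projected practice cash flows.
# Horizon, cash flows, and discount rate are illustrative assumptions.
def discounted_cash_flow(cash_flows, rate):
    """Sum of yearly cash flows discounted back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

projected = [120_000, 125_000, 130_000, 135_000, 140_000]   # 5-year horizon
print(f"DCF value: ${discounted_cash_flow(projected, 0.12):,.0f}")
```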
Recognizing of stereotypic patterns in epileptic EEG using empirical modes and wavelets
NASA Astrophysics Data System (ADS)
Grubov, V. V.; Sitnikova, E.; Pavlov, A. N.; Koronovskii, A. A.; Hramov, A. E.
2017-11-01
Epileptic activity in the form of spike-wave discharges (SWD) appears in the electroencephalogram (EEG) during absence seizures. This paper evaluates two approaches for detecting stereotypic rhythmic activities in EEG: the continuous wavelet transform (CWT) and empirical mode decomposition (EMD). The CWT is a well-known method for time-frequency analysis of EEG, whereas EMD is a relatively novel approach for extracting a signal's waveforms. A new method for pattern recognition based on the combination of CWT and EMD is proposed. This combined approach achieved a sensitivity of 86.5% and a specificity of 92.9% for sleep spindles, and 97.6% and 93.2%, respectively, for SWD. Considering the strong within- and between-subject variability of sleep spindles, the obtained detection efficiency was high in comparison with other CWT-based methods. It is concluded that combining a wavelet-based approach with empirical modes increases the quality of automatic detection of stereotypic patterns in rat EEG.
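The two ingredients can be sketched with the third-party PyWavelets and PyEMD packages; the detection logic the paper builds on top of them is not reproduced, and the test signal is synthetic.

```python
# Minimal sketch: CWT via PyWavelets and EMD via PyEMD on a synthetic
# signal. Both are third-party packages; parameters are illustrative.
import numpy as np
import pywt
from PyEMD import EMD

fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(eeg, scales, 'morl', sampling_period=1 / fs)

imfs = EMD().emd(eeg)     # intrinsic mode functions, highest frequency first
print(coefs.shape, len(imfs))
```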
Combining the Best of Two Standard Setting Methods: The Ordered Item Booklet Angoff
ERIC Educational Resources Information Center
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S.
2014-01-01
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
ERIC Educational Resources Information Center
Lal, Shalini; Suto, Melinda; Ungar, Michael
2012-01-01
Increasingly, qualitative researchers are combining methods, processes, and principles from two or more methodologies over the course of a research study. Critics charge that researchers adopting combined approaches place too little attention on the historical, epistemological, and theoretical aspects of the research design. Rather than…
Linear combination methods to improve diagnostic/prognostic accuracy on future observations
Kang, Le; Liu, Aiyi; Tian, Lili
2014-01-01
Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combination of biomarkers to maximise the area under the receiver operating characteristic (ROC) curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; and (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the ROC curve. A dataset on Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
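A minimal sketch of evaluating a linear biomarker combination by AUC; a crude grid over the mixing weight stands in for the article's nonparametric stepwise approach, and the data are synthetic.

```python
# Minimal sketch: AUC of a linear combination a*x1 + (1-a)*x2 over a grid
# of weights. Synthetic data; not the article's stepwise procedure.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 200
y = rng.integers(0, 2, n)              # disease status
x1 = y + rng.normal(0, 1.0, n)         # biomarker 1
x2 = 0.5 * y + rng.normal(0, 1.0, n)   # biomarker 2

best_auc, best_a = max((roc_auc_score(y, a * x1 + (1 - a) * x2), a)
                       for a in np.linspace(0, 1, 101))
print(f"best AUC = {best_auc:.3f} at a = {best_a:.2f}")
```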
Combined proportional and additive residual error models in population pharmacokinetic modelling.
Proost, Johannes H
2017-11-15
In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a purely proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The different codings of method VAR yield identical results. Using method SD, the values of the parameters describing the residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches to combined proportional and additive residual error modelling, and selection may be based on the objective function value (OFV). When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as was used during the analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
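In common notation (assumed here, not quoted from the paper), with model prediction f and residual error ε, the two parameterizations read:

```latex
% Method VAR: variances of the independent components add
\operatorname{Var}(\varepsilon) = (\sigma_{\text{prop}}\, f)^2 + \sigma_{\text{add}}^2
% Method SD: standard deviations add
\operatorname{SD}(\varepsilon) = \sigma_{\text{prop}}\, f + \sigma_{\text{add}}
```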
An Adaptive Cross-Architecture Combination Method for Graph Traversal
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
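A minimal sketch of the combination idea: run top-down BFS while the frontier is small and switch to bottom-up once it grows. The fixed threshold alpha stands in for the paper's regression-based predictor of the optimal switching point, which is not reproduced.

```python
# Minimal sketch of direction-optimizing BFS with a fixed switch threshold
# (the paper replaces this heuristic with a regression-based predictor).
def combined_bfs(adj, source, alpha=0.05):
    """adj: list of neighbor lists for an undirected graph."""
    n = len(adj)
    depth = {source: 0}
    frontier, level = {source}, 0
    while frontier:
        level += 1
        if len(frontier) < alpha * n:      # top-down: expand the frontier
            nxt = {v for u in frontier for v in adj[u] if v not in depth}
        else:                              # bottom-up: unvisited seek parents
            nxt = {v for v in range(n) if v not in depth
                   and any(u in frontier for u in adj[v])}
        for v in nxt:
            depth[v] = level
        frontier = nxt
    return depth

adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
print(combined_bfs(adj, 0))   # expected mapping: {0: 0, 1: 1, 2: 1, 3: 2}
```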
NASA Astrophysics Data System (ADS)
Shi, Min; Niu, Zhong-Ming; Liang, Haozhao
2018-06-01
We have combined the complex momentum representation (CMR) method with the Green's function (GF) method in the relativistic mean-field (RMF) framework to establish the RMF-CMR-GF approach. This new approach is applied to study the halo structure of 74Ca. The continuum level densities of all the resonant states of interest are calculated accurately without introducing any unphysical parameters, and they are independent of the choice of the integration contour. The single-particle wave functions and densities important for the halo phenomenon in 74Ca are discussed in detail.
Hybrid method for moving interface problems with application to the Hele-Shaw flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, T.Y.; Li, Zhilin; Osher, S.
In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations whose solutions and their derivatives may be discontinuous across the interfaces due to the discontinuity of the coefficients and/or singular sources along the interfaces. The moving interfaces are then updated using the newly developed fast level set formulation, which involves computation only inside small tubes containing the interfaces. This method combines the advantages of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. The new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids with very different viscosities. 40 refs., 10 figs., 3 tabs.
Applying Meta-Analysis to Structural Equation Modeling
ERIC Educational Resources Information Center
Hedges, Larry V.
2016-01-01
Structural equation models play an important role in the social sciences. Consequently, there is an increasing use of meta-analytic methods to combine evidence from studies that estimate the parameters of structural equation models. Two approaches are used to combine evidence from structural equation models: A direct approach that combines…
[Combined fat products: methodological possibilities for their identification].
Viktorova, E V; Kulakova, S N; Mikhaĭlov, N A
2006-01-01
Falsification of milk fat is currently a highly topical problem. A number of methods for verifying the authenticity of milk fat, and for distinguishing it from combined fat products, were considered. Analysis of modern approaches to evaluating milk fat authenticity showed that the main method for determining the nature of a fat is gas chromatography. A computer-based method for rapid identification of fat products is proposed, allowing quick determination of whether an examined fat is natural milk fat or a combined fat product.
NASA Astrophysics Data System (ADS)
Foster, Hyacinth Carmen
Science educators and administrators support the idea that inquiry-based and didactic instructional strategies have varying effects on students' acquisition of science concepts. The research problem addressed whether incorporating the two approaches covered the learning requirements of all students in science classes, enabling them to meet state and national standards. The purpose of this quasi-experimental, posttest-design research study was to determine whether student learning and achievement in high school biology classes differed for each type of instructional method. Constructivism theory suggests that each learner creates knowledge over time through interactions with the environment. Identifying the optimal teaching method, didactic (teacher-directed), inquiry-based, or a combination of the two, becomes essential if students are to discover ways to learn information. The research question examined which form of instruction had a significant effect on student achievement in biology. The data analysis consisted of single-factor, independent-measures analysis of variance (ANOVA) to test the hypotheses of the research study. Locally, the results indicated greater, statistically significant differences in standardized laboratory scores for students who were taught using the combination of the two approaches. Based on these results, biology instructors may gain new insights into ways of improving the instructional process. Social change may occur as science curriculum leadership applies the combination of the two instructional approaches to improve acquisition of science concepts by biology students.
Halcomb, Elizabeth; Hickman, Louise
2015-04-08
Mixed methods research involves the use of qualitative and quantitative data in a single research project. It represents an alternative methodological approach, combining qualitative and quantitative research approaches, which enables nurse researchers to explore complex phenomena in detail. This article provides a practical overview of mixed methods research and its application in nursing, to guide the novice researcher considering a mixed methods research project.
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered an effective approach to improving product quality at relatively low cost, and the Taguchi method is one of the conventional techniques for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design then minimizes the total quality loss over the multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of them concerned with finding the parameter combination that maximizes signal-to-noise (SN) ratios. The results reveal two advantages of this approach: for a single quality characteristic, the optimal parameter design is the same as in the traditional Taguchi method; for multiple quality characteristics, the optimal design maximizes the reduction of total quality loss. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
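The SN ratios at the center of the approach are simple to compute; a minimal sketch with the three standard formulations and invented replicate data:

```python
# Minimal sketch: standard Taguchi signal-to-noise ratios for one run's
# replicated responses y. Data are illustrative.
import numpy as np

def sn_larger_the_better(y):
    return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

def sn_smaller_the_better(y):
    return -10 * np.log10(np.mean(np.asarray(y) ** 2))

def sn_nominal_the_best(y):
    y = np.asarray(y)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

replicates = [9.8, 10.1, 10.0, 9.9]     # responses for one parameter setting
print(round(sn_nominal_the_best(replicates), 2))
```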
Paradigms, pragmatism and possibilities: mixed-methods research in speech and language therapy.
Glogowska, Margaret
2011-01-01
After decades of the so-called 'paradigm wars' in social science research methodology and the controversy about the relative place and value of quantitative and qualitative research methodologies, 'paradigm peace' appears to have now been declared. This has come about as many researchers have begun to take a 'pragmatic' approach to the selection of research methodology, choosing the methodology best suited to answering the research question rather than conforming to a methodological orthodoxy. With the differences in the philosophical underpinnings of the two traditions set to one side, an increasing awareness, and valuing, of the 'mixed-methods' approach to research is now present in the fields of social, educational and health research. To explore what is meant by mixed-methods research and the ways in which quantitative and qualitative methodologies and methods can be combined and integrated, particularly in the broad field of health services research and the narrower one of speech and language therapy. The paper discusses the ways in which methodological approaches have already been combined and integrated in health services research and speech and language therapy, highlighting the suitability of mixed-methods research for answering the typically multifaceted questions arising from the provision of complex interventions. The challenges of combining and integrating quantitative and qualitative methods and the barriers to the adoption of mixed-methods approaches are also considered. The questions about healthcare as it is provided in the 21st century call for a range of methodological approaches. This is particularly the case for human communication and its disorders, where mixed-methods research offers a wealth of possibilities. In turn, speech and language therapy research should be able to contribute substantively to the future development of mixed-methods research. © 2010 Royal College of Speech & Language Therapists.
Brusky, John P.; Tran, Viet Q.; Rieder, Jocelyn M.; Aboseif, Sherif R.
2008-01-01
Purpose. This paper aims at describing the combined penoscrotal and perineal approach for placement of penile prosthesis in cases of severe corporal fibrosis and scarring. Materials and methods. Three patients with extensive corporal fibrosis underwent penile prosthesis placement via combined penoscrotal and perineal approach from 1997 to 2006. Follow-up ranged from 15 to 129 months. Results. All patients underwent successful implantation of semirigid penile prosthesis. There were no short- or long-term complications. Conclusions. Results on combined penoscrotal and perineal approach to penile prosthetic surgery in this preliminary series of patients suggest that it is a safe technique and increases the chance of successful outcome in the surgical management of severe corporal fibrosis. PMID:19043562
Khabirov, F A; Khaĭbullin, T I; Grigor'eva, O V
2011-01-01
We studied 110 patients, aged 34-71 years, in the early rehabilitation period after stroke, admitted to a neurological rehabilitation department in Kazan. The rehabilitation approach was based on the combination of several methods: kinesitherapy, transcranial magnetic stimulation and cerebrolysin treatment. This combined rehabilitation achieved marked functional restoration of motor abilities in many cases, which correlated with normalization of brain bioelectric activity (an increase in alpha-rhythm spectral power and a decrease in slow-wave EEG components). The combined use of the three methods was more effective than a combination of any two of them.
Chen, Jinsong; Zhang, Dake; Choi, Jaehwa
2015-12-01
It is common to encounter latent variables with ordinal data in social or behavioral research. Although a mediated effect of latent variables (latent mediated effect, or LME) with ordinal data may appear to be a straightforward combination of LME with continuous data and latent variables with ordinal data, the methodological challenges of combining the two are not trivial. This research covers model structures as complex as LME and formulates both point and interval estimates of LME for ordinal data using the Bayesian full-information approach. We also combine weighted least squares (WLS) estimation with the bias-corrected bootstrapping (BCB; Efron, Journal of the American Statistical Association, 82, 171-185, 1987) method or the traditional delta method as the limited-information approach. We evaluated the viability of these different approaches across various conditions through simulation studies, and provide an empirical example to illustrate the approaches. We found that the Bayesian approach with reasonably informative priors is preferred when both point and interval estimates are of interest and the sample size is 200 or above.
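To make the limited-information idea concrete, the sketch below computes a bias-corrected bootstrap interval (Efron, 1987, without the acceleration term) for a mediated effect a·b. It uses simulated continuous data and ordinary least squares as a stand-in for the ordinal-data WLS machinery of the paper; all names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # X -> M slope
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # M -> Y slope given X
    return a * b

# Hypothetical continuous stand-in data for the X -> M -> Y chain
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

est = indirect_effect(x, m, y)
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                       # resample cases
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

# Bias correction: shift the percentile endpoints by z0
z0 = norm.ppf(np.mean(boot < est))
lo, hi = norm.cdf(2 * z0 + norm.ppf([0.025, 0.975]))
print(est, np.quantile(boot, [lo, hi]))
```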
Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun
2014-01-01
To systematically review research methods for mass casualty incidents (MCI) and introduce the concept and characteristics of complexity science and the artificial systems, computational experiments and parallel execution (ACP) method. We searched PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Only articles addressing MCI research methods were included. Research methods for MCI have increased markedly over the past few decades. At present, the dominant methods are theory-based approaches, empirical approaches, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, scenario approaches and complexity science. This article provides an overview of the development of research methodology for MCI and briefly presents the progress of routine research approaches and complexity science. Furthermore, the authors conclude that the reductionism underlying the exact sciences is not suitable for complex MCI systems, and that complexity science is the only feasible alternative. Finally, the ACP method, which combines artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCI.
NASA Astrophysics Data System (ADS)
Anikushina, V.; Taratukhin, V.; Stutterheim, C. v.; Gushin, V.
2018-02-01
A new psycholinguistic view of crew communication, combined with biochemical and psychological data, contributes to noninvasive methods of stress appraisal and suggests alternative approaches to improving in-group communication and cohesion.
Applying Mixed Methods Research at the Synthesis Level: An Overview
ERIC Educational Resources Information Center
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
A fast combination method in DSmT and its application to recommender system
Liu, Yihai
2018-01-01
In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) by subjective probabilities (called Bayesian BBAs). This necessity arises if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in a decision-making problem. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. In this method, focal elements with probabilities are coarsened efficiently to reduce the computational complexity of combination by using a disagreement vector and a simple dichotomous approach. To demonstrate its practicality, the new approach is applied to combine users' soft preferences in recommender systems (RSs). Additionally, for a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is used as a baseline in a range of experiments. According to the results, MRC yields more accurate recommendations than the original rigid coarsening (RC) method at comparable computational cost. PMID:29351297
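The MRC algorithm itself is not reproduced here, but the underlying task of approximating a general BBA by a Bayesian one can be illustrated with the classical pignistic transformation, a simpler standard alternative:

```python
def pignistic(bba):
    """Approximate a general BBA by a Bayesian one via the pignistic
    transformation: BetP(x) = sum over focal sets A containing x of
    m(A) / |A| (assuming a normalized BBA with no mass on the empty set)."""
    betp = {}
    for focal, mass in bba.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

# BBA on the frame {'a', 'b', 'c'}; focal elements are frozensets
bba = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
print(pignistic(bba))   # {'a': 0.717, 'b': 0.217, 'c': 0.067} (rounded)
```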
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
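A minimal sketch of the general idea, with hypothetical per-trial coefficients: pool paths a and b across trials (a fixed-effect inverse-variance simplification of the paper's random-effects model) and form a Monte Carlo confidence interval for the product.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-trial path coefficients and standard errors (hypothetical input)
a, se_a = np.array([0.40, 0.55, 0.35]), np.array([0.10, 0.12, 0.09])
b, se_b = np.array([0.30, 0.25, 0.45]), np.array([0.08, 0.11, 0.10])

def pool(est, se):
    # Inverse-variance weighted mean and its standard error
    w = 1.0 / se**2
    return np.sum(w * est) / np.sum(w), np.sqrt(1.0 / np.sum(w))

a_bar, se_abar = pool(a, se_a)
b_bar, se_bbar = pool(b, se_b)

# Monte Carlo confidence interval for the mediated effect a*b
draws = rng.normal(a_bar, se_abar, 100_000) * rng.normal(b_bar, se_bbar, 100_000)
print(a_bar * b_bar, np.quantile(draws, [0.025, 0.975]))
```

The paper's random-effects model would additionally estimate the between-trial variance-covariance matrix of (a, b); the fixed-effect pooling above is only the simplest special case.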
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.
2016-01-01
Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
Bayesian-information-gap decision theory with an application to CO2 sequestration
O'Malley, D.; Vesselinov, V. V.
2015-09-04
Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
Yew, Yik Weng; Pan, Jiun Yit
2014-01-01
Genital warts in immunocompromised patients can be extensive and recalcitrant to treatment. We report a case of recalcitrant genital warts in a female patient with systemic lupus erythematosus (SLE), who achieved complete remission with a combined approach of surgical debulking and oral isotretinoin at an initial dose of 20 mg/day with a gradual taper over 8 months. She had previously been treated with a combination of topical imiquimod cream and regular fortnightly liquid nitrogen. Although there was a partial response, there was no complete clearance. Her condition worsened after topical imiquimod cream was stopped because of her pregnancy. She underwent the combined approach of surgical debulking and oral isotretinoin after her delivery and achieved full clearance for more than 2 years. Oral isotretinoin, especially in the treatment of recalcitrant genital warts, is a valuable and feasible option when other more conventional treatment methods have failed or are not possible. It can be used alone or in combination with other local or physical treatment methods. © 2013 Wiley Periodicals, Inc.
Evaluation of Sub Query Performance in SQL Server
NASA Astrophysics Data System (ADS)
Oktavia, Tanty; Sujarwo, Surya
2014-03-01
The paper explores several sub-query methods and their impact on query performance. The study uses an experimental approach to evaluate the performance of each sub-query method combined with an indexing strategy. The sub-query methods consist of IN, EXISTS, relational operators, and relational operators combined with the TOP operator. The experiments show that using a relational operator combined with an indexing strategy in a sub-query yields better performance than the same method without an indexing strategy, as well as the other methods. In summary, for applications where the performance of retrieving data from the database matters, it is better to use a relational operator combined with an indexing strategy. This study was done on Microsoft SQL Server 2012.
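The comparison can be reproduced in miniature with any relational database. The sketch below uses Python's built-in sqlite3 (not the SQL Server 2012 of the study, so absolute numbers will differ) to time an IN sub-query against an EXISTS sub-query before and after an index is added:

```python
import sqlite3, time

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers(id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER);
""")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, "north" if i % 4 else "south") for i in range(20_000)])
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % 20_000) for i in range(100_000)])

queries = {
    "IN":     "SELECT COUNT(*) FROM orders WHERE customer_id IN "
              "(SELECT id FROM customers WHERE region = 'south')",
    "EXISTS": "SELECT COUNT(*) FROM orders o WHERE EXISTS "
              "(SELECT 1 FROM customers c "
              " WHERE c.id = o.customer_id AND c.region = 'south')",
}

for label in ("without index", "with index"):
    for name, sql in queries.items():
        t0 = time.perf_counter()
        (count,) = con.execute(sql).fetchone()
        print(f"{name:6s} {label}: {count} rows, "
              f"{time.perf_counter() - t0:.4f}s")
    con.execute("CREATE INDEX IF NOT EXISTS idx_region ON customers(region)")
```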
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
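The separable structure that GA-MLR exploits can be sketched compactly: for fixed nonlinear lifetimes, the amplitudes follow from a linear least-squares solve, so only the lifetimes are searched evolutionarily. The sketch below uses SciPy's differential evolution as a stand-in for a genetic algorithm, on a synthetic biexponential decay:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# Synthetic biexponential decay: y(t) = c1*exp(-t/tau1) + c2*exp(-t/tau2)
t = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 3.5) \
    + 0.02 * rng.normal(size=t.size)

def residual_norm(taus):
    # Inner MLR step: for fixed nonlinear lifetimes, the amplitudes are
    # linear parameters obtained from an ordinary least-squares solve.
    basis = np.exp(-t[:, None] / taus[None, :])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.sum((y - basis @ coef) ** 2)

# Outer evolutionary step searches only the nonlinear parameters;
# the bounds keep the two lifetimes separated to avoid permutation ambiguity.
result = differential_evolution(residual_norm,
                                bounds=[(0.1, 2.0), (2.0, 8.0)], seed=3)
print("recovered lifetimes:", result.x)
```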
Boost OCR accuracy using iVector based system combination approach
NASA Astrophysics Data System (ADS)
Peng, Xujun; Cao, Huaigu; Natarajan, Prem
2015-01-01
Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noises and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combine diverse recognition systems by using iVector based features, which is a newly developed method in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system, where iVector is transformed from a high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using word error rate (WER) metric.
Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A
2007-06-01
Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appear to have a detrimental effect on precision.
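A toy sketch of the two steps, with invented mini-dictionaries and a couple of illustrative variation rules (the study applied 23 curated rules):

```python
def merge_dictionaries(*dicts):
    """Union the synonym lists per gene identifier across source databases."""
    combined = {}
    for d in dicts:
        for gene_id, names in d.items():
            combined.setdefault(gene_id, set()).update(names)
    return combined

def spelling_variants(name):
    # Illustrative rules only: case variants and hyphen/space variants
    variants = {name, name.lower(), name.upper()}
    variants |= {name.replace("-", " "), name.replace("-", "")}
    return variants

db_a = {"EGFR": {"EGFR", "ErbB-1"}}
db_b = {"EGFR": {"HER1", "epidermal growth factor receptor"}}

combined = merge_dictionaries(db_a, db_b)
combined = {gene: set().union(*(spelling_variants(n) for n in names))
            for gene, names in combined.items()}
print(sorted(combined["EGFR"]))
```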
NASA Astrophysics Data System (ADS)
Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle
2018-05-01
Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
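As an illustration, one common PSD-based transmissibility estimate between two response channels is the ratio of cross- to auto-spectral density; whether this matches the paper's exact definition for BSS-recovered sources is not assumed here.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Hypothetical data: one broadband source measured through two different
# structural paths, plus measurement noise on each channel.
fs, n = 1024, 2**16
source = rng.normal(size=n)
x_i = signal.lfilter([1.0, 0.5], [1.0, -0.6], source) + 0.05 * rng.normal(size=n)
x_j = signal.lfilter([0.8, -0.2], [1.0, -0.3], source) + 0.05 * rng.normal(size=n)

# Transmissibility estimate T_ij(f) = S_ij(f) / S_jj(f)
f, s_ij = signal.csd(x_i, x_j, fs=fs, nperseg=2048)
_, s_jj = signal.welch(x_j, fs=fs, nperseg=2048)
T = s_ij / s_jj
print(f[:3], np.abs(T[:3]))
```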
Pavement crack detection combining non-negative feature with fast LoG in complex scene
NASA Astrophysics Data System (ADS)
Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu
2015-12-01
Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Because of these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection that combines a non-negative feature with a fast LoG filter is proposed. The two key novelties and benefits of this new approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) the non-negative feature is combined with the fast LoG filter to extract crack information. The image preprocessing results demonstrate that the method homogenizes the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach detects crack regions more correctly than traditional methods.
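A hypothetical pipeline in this spirit (gray-value compensation, then a LoG response, then small-component removal) might look like the following; the parameters, threshold rule and synthetic image are illustrative, and the paper's non-negative feature is not reproduced.

```python
import numpy as np
from scipy import ndimage

def detect_cracks(image, sigma=2.0, illumination_sigma=25.0, k=1.5,
                  min_size=50):
    img = image.astype(float)

    # Step 1: compensate uneven illumination by removing a smooth background
    flat = img - ndimage.gaussian_filter(img, illumination_sigma)

    # Step 2: LoG response; dark crack lines give positive responses
    log_resp = ndimage.gaussian_laplace(flat, sigma)
    mask = log_resp > log_resp.mean() + k * log_resp.std()

    # Step 3: drop small connected components (salt-and-pepper noise)
    labels, num = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, num + 1))
    return np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)

demo = np.full((128, 128), 180.0)
demo[60:62, 10:120] = 60.0            # a synthetic dark crack
print(detect_cracks(demo).sum())      # number of pixels flagged as crack
```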
EMSE at TREC 2015 Clinical Decision Support Track
2015-11-20
…pseudo-relevant documents, semantic resources of UMLS, and a hybrid approach called SMERA that combines the LSI- and UMLS-based approaches. Only three of… …the approach to query expansion uses ontologies (UMLS) and a local approach based on pseudo-relevance feedback documents using LSI. A brief description of… …pseudo-relevance feedback documents, and a semantic method based on UMLS concepts. The LSI-based method was used only to expand summary terms that can't…
Combined Teaching Method: An Experimental Study
ERIC Educational Resources Information Center
Kolesnikova, Iryna V.
2016-01-01
The search for the best approach to business education has led educators and researchers to seek many different teaching strategies, ranging from the traditional teaching methods to various experimental approaches such as active learning techniques. The aim of this experimental study was to compare the effects of the traditional and combined…
A Mixed Learning Approach in Mechatronics Education
ERIC Educational Resources Information Center
Yilmaz, O.; Tuncalp, K.
2011-01-01
This study aims to investigate the effect of a Web-based mixed learning approach model on mechatronics education. The model combines different perception methods such as reading, listening, and speaking and practice methods developed in accordance with the vocational background of students enrolled in the course Electromechanical Systems in…
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing and its basic ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on studying and improving that algorithm: it reviews current segmentation algorithms and selects the watershed algorithm as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining it with a heterogeneity parameter. Several experiments show that the modified FNEA algorithm achieves better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
A new fictitious domain approach for Stokes equation
NASA Astrophysics Data System (ADS)
Yang, Min
2017-10-01
The purpose of this paper is to present a new fictitious domain approach for the Stokes equation, based on Nitsche's method combined with a penalty method. This approach allows for easy and flexible handling of the geometrical aspects. Stability and an a priori error estimate are proved. Finally, a numerical experiment is provided to verify the theoretical findings.
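For orientation, Nitsche's method imposes a Dirichlet condition u = g weakly on a boundary Γ that need not align with the mesh. In the simpler Poisson analogue (shown here only to fix ideas; the Stokes case adds pressure and incompressibility terms), the symmetric discrete forms read

$$a_h(u_h,v_h)=\int_{\Omega}\nabla u_h\cdot\nabla v_h\,dx-\int_{\Gamma}(\partial_n u_h)\,v_h\,ds-\int_{\Gamma}(\partial_n v_h)\,u_h\,ds+\frac{\gamma}{h}\int_{\Gamma}u_h\,v_h\,ds,$$

$$\ell_h(v_h)=\int_{\Omega}f\,v_h\,dx-\int_{\Gamma}(\partial_n v_h)\,g\,ds+\frac{\gamma}{h}\int_{\Gamma}g\,v_h\,ds,$$

where the penalty parameter γ > 0 must be chosen large enough for stability; the fictitious domain setting additionally extends the problem to a larger, geometrically simple domain in which Γ may cut through elements.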
Combining Qualitative and Quantitative Approaches: Some Arguments for Mixed Methods Research
ERIC Educational Resources Information Center
Lund, Thorleif
2012-01-01
One purpose of the present paper is to elaborate 4 general advantages of the mixed methods approach. Another purpose is to propose a 5-phase evaluation design, and to demonstrate its usefulness for mixed methods research. The account is limited to research on groups in need of treatment, i.e., vulnerable groups, and the advantages of mixed methods…
ERIC Educational Resources Information Center
Kelly, Nick; Montenegro, Maximiliano; Gonzalez, Carlos; Clasing, Paula; Sandoval, Augusto; Jara, Magdalena; Saurina, Elvira; Alarcón, Rosa
2017-01-01
Purpose: The purpose of this paper is to demonstrate the utility of combining event-centred and variable-centred approaches when analysing big data for higher education institutions. It uses a large, university-wide data set to demonstrate the methodology for this analysis by using the case study method. It presents empirical findings about…
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences, often with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%, an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. By using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
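One geometry-inspired mean that goes beyond convex linear combinations is the log-Euclidean mean of the base kernel matrices; it is shown here as an illustration, without assuming it is the exact matrix mean chosen in the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(kernels, jitter=1e-8):
    """Combine symmetric positive-definite kernel matrices via
    expm(mean_i logm(K_i)), the log-Euclidean mean."""
    logs = []
    for K in kernels:
        K = 0.5 * (K + K.T) + jitter * np.eye(K.shape[0])  # enforce SPD
        logs.append(logm(K))
    return expm(np.mean(logs, axis=0)).real

rng = np.random.default_rng(5)
def random_kernel(n=6):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)     # a well-conditioned SPD matrix

K = log_euclidean_mean([random_kernel() for _ in range(3)])
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > 0)
```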
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides sufficient statistical accuracy that meaningful estimates of the density of states and the partition sum can be obtained. From these estimates, several thermodynamic observables, such as the heat capacity or reaction free energies, can be calculated. We demonstrate that the computation times for calculating titration curves with high statistical accuracy can be significantly decreased compared to the original reaction ensemble method. The applicability of our approach is validated by a study of weak polyelectrolytes and their thermodynamic properties.
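Wang-Landau sampling itself can be illustrated on a toy system: N independent titratable sites, where the density of states over the number n of deprotonated sites is the binomial coefficient and therefore known exactly. The sketch below (which omits the reaction-ensemble coupling and site interactions of the actual study) recovers the log-degeneracies by flat-histogram iteration.

```python
import numpy as np
from math import comb, log, exp

rng = np.random.default_rng(6)

N = 12
ln_g = np.zeros(N + 1)              # running estimate of ln(density of states)
hist = np.zeros(N + 1)
f = 1.0                             # Wang-Landau modification factor
state = np.zeros(N, dtype=bool)     # False = protonated, True = deprotonated
n = 0                               # current number of deprotonated sites

while f > 1e-6:
    site = rng.integers(N)          # propose flipping one random site
    n_new = n - 1 if state[site] else n + 1
    if rng.random() < exp(min(0.0, ln_g[n] - ln_g[n_new])):
        state[site] = not state[site]   # accept with prob min(1, g_old/g_new)
        n = n_new
    ln_g[n] += f
    hist[n] += 1
    if hist.min() > 0.8 * hist.mean():  # flat-histogram check
        f *= 0.5                        # refine and start the next stage
        hist[:] = 0

est = ln_g - ln_g[0]
exact = np.array([log(comb(N, k)) for k in range(N + 1)])
print(np.max(np.abs(est - exact)))      # deviation from ln binom(N, k)
```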
Vázquez-Rowe, Ian; Iribarren, Diego
2015-01-01
Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting. PMID:25654136
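The DEA step can be sketched with an input-oriented CCR model: each entity's carbon footprint is the single input, and its efficiency is the largest input contraction that a convex combination of peers could achieve while matching its outputs. All numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one input per entity (carbon footprint, t CO2e) and
# two outputs (e.g. energy delivered, service level) for five entities.
X = np.array([[10.0], [8.0], [12.0], [6.0], [9.0]])                  # inputs
Y = np.array([[100, 20], [90, 25], [95, 15], [80, 30], [85, 22.0]])  # outputs

def ccr_efficiency(X, Y, k):
    """min theta s.t. sum_j lam_j x_j <= theta * x_k,
    sum_j lam_j y_j >= y_k, lam >= 0 (variables: [theta, lam])."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[k].reshape(m, 1), X.T]    # lam.X - theta*x_k <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]     # -lam.Y <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(len(X)):
    print(f"entity {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```

An efficiency of 1 marks a benchmark entity; for the others, theta times the observed footprint gives the kind of target operating point referred to above.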
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315
Anderson, Annette Carola; Hellwig, Elmar; Vespermann, Robin; Wittmer, Annette; Schmid, Michael; Karygianni, Lamprini; Al-Ahmad, Ali
2012-01-01
Persistence of microorganisms or reinfections are the main reasons for failure of root canal therapy. Very few studies to date have included culture-independent methods to assess the microbiota, including non-cultivable microorganisms. The aim of this study was to combine culture methods with culture-independent cloning methods to analyze the microbial flora of root-filled teeth with periradicular lesions. Twenty-one samples from previously root-filled teeth were collected from patients with periradicular lesions. Microorganisms were cultivated, isolated and biochemically identified. In addition, ribosomal DNA of bacteria, fungi and archaea derived from the same samples was amplified and the PCR products were used to construct clone libraries. DNA of selected clones was sequenced and microbial species were identified by comparing the sequences with public databases. Microorganisms were found in 12 samples with culture-dependent and -independent methods combined. The number of bacterial species ranged from 1 to 12 per sample. The majority of the 26 taxa belonged to the phylum Firmicutes (14 taxa), followed by Actinobacteria, Proteobacteria and Bacteroidetes. One sample was positive for fungi, and archaea could not be detected. The results obtained with the two methods differed. The cloning technique detected several as-yet-uncultivated taxa. Using a combination of both methods, 13 taxa were detected that had not been found in root-filled teeth so far. Enterococcus faecalis was only detected in two samples, using culture methods. Combining the culture-dependent and -independent approaches revealed new candidate endodontic pathogens and a high diversity of the microbial flora in root-filled teeth with periradicular lesions. Both methods yielded differing results, emphasizing the benefit of combined methods for the detection of the actual microbial diversity in apical periodontitis. PMID:23152922
Nekhay, Olexandr; Arriaza, Manuel; Boerboom, Luc
2009-07-01
The study presents an approach that combines objective information, such as sampling or experimental data, with subjective information, such as expert opinions. This combined approach is based on the Analytic Network Process (ANP) method. It was applied to evaluate soil erosion risk, overcoming one of the drawbacks of USLE/RUSLE soil erosion models, namely that they do not consider interactions among soil erosion factors. Another advantage of the method is that it can be used when experimental data are insufficient, since the lack of experimental data can be compensated for by expert evaluations. As an example of the proposed approach, the risk of soil erosion was evaluated in olive groves in southern Spain, showing the potential of the ANP method for modelling a complex physical process like soil erosion.
A New Computational Method to Fit the Weighted Euclidean Distance Model.
ERIC Educational Resources Information Center
De Leeuw, Jan; Pruzansky, Sandra
1978-01-01
A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)
[Mixed methods research in public health: issues and illustration].
Guével, Marie-Renée; Pommier, Jeanine
2012-01-01
For many years, researchers in a range of fields have combined quantitative and qualitative methods. However, the combined use of quantitative and qualitative methods has only recently been conceptualized and defined as mixed methods research. Some authors have described the emerging field as a third methodological tradition (in addition to the qualitative and quantitative traditions). Mixed methods research combines different perspectives and facilitates the study of complex interventions or programs, particularly in public health, an area where interdisciplinarity is critical. However, the existing literature is primarily in English. By contrast, the literature in French remains limited. The purpose of this paper is to present the emergence of mixed methods research for francophone public health specialists. A literature review was conducted to identify the main characteristics of mixed methods research. The results provide an overall picture of the mixed methods approach through its history, definitions, and applications, and highlight the tools developed to clarify the approach (typologies) and to implement it (integration of results and quality standards). The tools highlighted in the literature review are illustrated by a study conducted in France. Mixed methods research opens new possibilities for examining complex research questions and provides relevant and promising opportunities for addressing current public health issues in France.
Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach
Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel
2014-01-01
Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over four taxa; hence, quartet-based supertree methods combine many four-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets. PMID:25117474
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. To speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
Monitoring of microbial communities in anaerobic digestion sludge for biogas optimisation.
Lim, Jun Wei; Ge, Tianshu; Tong, Yen Wah
2018-01-01
This study characterised and compared the microbial communities of anaerobic digestion (AD) sludge using three different methods: (1) clone library; (2) pyrosequencing; and (3) terminal restriction fragment length polymorphism (T-RFLP). Although high-throughput sequencing techniques are becoming increasingly popular and affordable, reliance on such techniques for frequent monitoring of microbial communities may be a financial burden for some. Furthermore, the depth of microbial analysis revealed by high-throughput sequencing may not be required for monitoring purposes. This study aims to develop a rapid, reliable and economical approach for the monitoring of microbial communities in AD sludge. A combined approach was developed in which genetic information from clone library sequences is used to assign phylogeny to T-RFs determined experimentally. To assess the effectiveness of the combined approach, the microbial communities it determined were compared to those characterised by pyrosequencing. Results showed that both pyrosequencing and clone library methods determined the dominant bacterial phyla to be Proteobacteria, Firmicutes, Bacteroidetes, and Thermotogae. Both methods also found that sludge A and B were predominantly dominated by acetogenic methanogens followed by hydrogenotrophic methanogens. The number of OTUs detected by T-RFLP was significantly lower than that detected by the clone library. In this study, T-RFLP analysis identified the majority of the dominant species of the archaeal consortia; however, many members of the more highly diverse bacterial consortia were missed. Nevertheless, the combined approach accurately predicted the same dominant microbial groups for both sludge A and sludge B as the pyrosequencing results. The combined clone library and T-RFLP approach is thus a reliable and more economical way to monitor the evolution of microbial systems in AD sludge. Copyright © 2017 Elsevier Ltd. All rights reserved.
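The key step of the combined approach, assigning phylogeny to observed T-RFs via clone sequences, can be sketched as an in-silico digestion followed by fragment-length matching. The enzyme, sequences, peak sizes and ±1 bp tolerance below are all illustrative, and the cut-position arithmetic is simplified (real enzymes cut within the recognition site).

```python
HAEIII_SITE = "GGCC"   # recognition site of a commonly used enzyme

def predicted_trf(sequence, site=HAEIII_SITE):
    """Fragment length from the labelled 5' end to the first cut site
    (simplified: the cut is placed at the end of the recognition site)."""
    pos = sequence.find(site)
    return None if pos < 0 else pos + len(site)

def assign_phylogeny(observed_peaks, clones, tolerance=1):
    assignments = {}
    for peak in observed_peaks:
        hits = [taxon for taxon, seq in clones.items()
                if (trf := predicted_trf(seq)) is not None
                and abs(trf - peak) <= tolerance]
        assignments[peak] = hits or ["unassigned"]
    return assignments

clones = {  # hypothetical clone-library sequences with known phylogeny
    "Proteobacteria clone A": "ATGC" * 20 + "GGCC" + "ATGC" * 50,
    "Firmicutes clone B":     "ATGC" * 35 + "GGCC" + "ATGC" * 40,
}
print(assign_phylogeny([84, 145, 300], clones))
```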
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches
NASA Technical Reports Server (NTRS)
Farassat, Fereidoun; Casper, Jay H.
2006-01-01
In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on recent high quality acoustic databases is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with penetrable data surface (FW-Hpds) has advanced considerably and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need further development for this method to become useful. Nonetheless, the authors propose that methods based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.
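For context, the permeable-surface FW-H equation mentioned above is conventionally written (in one standard form; the exact notation varies between authors) as

$$\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)p'(\mathbf{x},t)=\frac{\partial}{\partial t}\left[\rho_{0}U_{n}\,\delta(f)\right]-\frac{\partial}{\partial x_{i}}\left[L_{i}\,\delta(f)\right]+\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left[T_{ij}\,H(f)\right],$$

where f = 0 defines the (possibly penetrable) data surface, U_n and L_i are the mass- and momentum-flux source terms on that surface, T_ij is the Lighthill stress tensor, and δ and H are the Dirac delta and Heaviside functions.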
Andersen, Jesper H; Aroviita, Jukka; Carstensen, Jacob; Friberg, Nikolai; Johnson, Richard K; Kauppila, Pirkko; Lindegarth, Mats; Murray, Ciarán; Norling, Karl
2016-10-01
We review approaches and tools currently used in Nordic countries (Denmark, Finland, Norway and Sweden) for integrated assessment of 'ecological status' sensu the EU Water Framework Directive as well as assessment of 'eutrophication status' in coastal and marine waters. Integration principles for combining indicators within biological quality elements (BQEs) and combining BQEs into a final-integrated assessment are discussed. Specific focus has been put on combining different types of information into indices, since several methods are currently employed. As a consequence of the variety of methods used, comparisons across both BQEs and water categories (river, lakes and coastal waters) can be difficult. Based on our analyses, we conclude that some principles and methods for integration can be critical and that a harmonised approach should be developed. Further, we conclude that the integration principles applied within BQEs are critical and in need of harmonisation if we want a better understanding of potential transition in ecological status between surface water types, e.g. when riverine water enters a downstream lake or coastal water body.
Propellant Readiness Level: A Methodological Approach to Propellant Characterization
NASA Technical Reports Server (NTRS)
Bossard, John A.; Rhys, Noah O.
2010-01-01
A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or a propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants or propellant combinations; liquid, solid, and gaseous propellants as well as monopropellants and propellant combinations are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.
Combining approaches to on-line handwriting information retrieval
NASA Astrophysics Data System (ADS)
Peña Saldarriaga, Sebastián; Viard-Gaudin, Christian; Morin, Emmanuel
2010-01-01
In this work, we propose to combine two quite different approaches for retrieving handwritten documents. Our hypothesis is that different retrieval algorithms should retrieve different sets of documents for the same query; therefore, significant improvements in retrieval performance can be expected. The first approach is based on information retrieval techniques carried out on the noisy texts obtained through handwriting recognition, while the second approach is recognition-free, using a word spotting algorithm. Results show that for texts having a word error rate (WER) lower than 23%, the performance obtained with the combined system is close to that obtained on clean digital texts. In addition, for poorly recognized texts (WER > 52%), an improvement of nearly 17% can be observed with respect to the best available baseline method.
Jin, Rui; Huang, Jian-Mei; Wang, Yu-Guang; Zhang, Bing
2016-02-01
Combined use of Chinese medicine and western medicine has been a hot topic in domestic medical and academic fields for many years. There are many reports and studies on interaction problems arising from the combined use of Chinese medicine and western medicine; however, a framework-level understanding is still rare, which affects the rationality of clinical drug combinations. In clinical practice, the inference of drug interactions is broad and pragmatic, and the overall viewpoint and practical considerations are important factors in evaluating the rationality of clinical drug combinations. On this basis, this paper systematically analyzes the existing information and examples, discusses the underlying background (the environment and mechanisms of interactions), and divides the interactions into three important and independent categories. The first category (approach Ⅰ) is defined as physical/chemical reactions after direct contact in vivo or in vitro, such as the combination of Chinese medicine injections with western medicine injections (in vitro), or the combination of bromide with Chinese medicines containing cinnabar (in vivo); the evaluation method for such interactions may be a generalized acid-base reaction theory. The second category (approach Ⅱ) is defined as interactions through the pharmacokinetic process, including absorption (such as the combination of aspirin and Huowei capsule), distribution (such as the combination of artosin and medicinal herbs containing coumarin), metabolism (such as the combination of phenobarbital and glycyrrhiza) and excretion (such as the combination of furadantin and Crataegi Fructus); existing pharmacokinetic theory can serve as the evaluation method for this type of interaction. The third category (approach Ⅲ) is defined as synergistic/antagonistic interactions through pharmacological effects or biological pathways. The combination of warfarin and Salvia miltiorrhiza is an example of synergy, while the combination of guanethidine and ephedra is an example of antagonism. The repeated application of compound preparations of Chinese and western medicine together with the same type of western medicine also belongs to this category; receptor competition theory, viewed in terms of the overall pathways, might serve as the evaluation method for this type of interaction. In summary, a research framework for interactions between Chinese medicine and western medicine is proposed, providing overall thinking and support for the essential study of the combined application of Chinese medicine and western medicine. Copyright© by the Chinese Pharmaceutical Association.
Testa, Maria; Livingston, Jennifer A; VanZile-Tamsen, Carol
2011-02-01
A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women's sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided.
Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging
Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.
2014-01-01
Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
NUCLEON-mission: A New Approach to Cosmic Rays Investigation
NASA Technical Reports Server (NTRS)
Adams, J.; Bashindzhagyan, G.; Chilingarian, A.; Drury, L.; Egorov, N.; Golubkov, S.; Grebenyuk, V.; Korotkova, N.; Mashkantcev, A.; Nanjo, H.;
2001-01-01
A new approach to cosmic ray investigation is proposed. The main idea is to combine two experimental methods (KLEM and UHIS) in the NUCLEON Project. The KLEM (Kinematic Lightweight Energy Meter) method is used to study the chemical composition and elemental energy spectra of galactic CRs in the extremely wide energy range of 10^11-10^15 eV. The UHIS (Ultra Heavy Isotope Spectrometer) method is used to register fluxes of ultra-heavy CR nuclei beyond the iron peak. Combining the two techniques leads not to a simple mechanical unification of two instruments in one block, but to the creation of a unique instrument with a number of advantages.
A Critical Commentary on Combined Methods Approach to Researching Educational and Social Issues
ERIC Educational Resources Information Center
Nudzor, Hope Pius
2009-01-01
One major issue social science research is faced with concerns the methodological schism and internecine "warfare" that divides the field. This paper examines critically what is referred to as combined methods research, and the claim that this is the best methodology for addressing complex social issues. The paper discredits this claim on the…
2009-09-01
instructional format. Using a mixed-method coding and analysis approach, the sample of POIs were categorized, coded, statistically analyzed, and a... ...transition to a distributed (or blended) learning format. Procedure: A mixed-methods approach, combining qualitative coding procedures with basic
Xander: employing a novel method for efficient gene-targeted metagenomic assembly.
Wang, Qiong; Fish, Jordan A; Gilman, Mariah; Sun, Yanni; Brown, C Titus; Tiedje, James M; Cole, James R
2015-01-01
Metagenomics can provide important insight into microbial communities. However, assembling metagenomic datasets has proven to be computationally challenging. Current methods often assemble only fragmented partial genes. We present a novel method for targeting assembly of specific protein-coding genes. This method combines a de Bruijn graph, as used in standard assembly approaches, and a protein profile hidden Markov model (HMM) for the gene of interest, as used in standard annotation approaches. These are used to create a novel combined weighted assembly graph. Xander performs both assembly and annotation concomitantly using information incorporated in this graph. We demonstrate the utility of this approach by assembling contigs for one phylogenetic marker gene and for two functional marker genes, first on Human Microbiome Project (HMP)-defined community Illumina data and then on 21 rhizosphere soil metagenomic datasets from three different crops totaling over 800 Gbp of unassembled data. We compared our method to a recently published bulk metagenome assembly method and a recently published gene-targeted assembler and found our method produced more, longer, and higher quality gene sequences. Xander combines gene assignment with the rapid assembly of full-length or near full-length functional genes from metagenomic data without requiring bulk assembly or post-processing to find genes of interest. HMMs used for assembly can be tailored to the targeted genes, allowing flexibility to improve annotation over generic annotation pipelines. This method is implemented as open source software and is available at https://github.com/rdpstaff/Xander_assembler.
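To make the combined-graph idea concrete, here is a minimal Python sketch of weighting graph edges by both assembly evidence (k-mer coverage) and annotation evidence (HMM match scores). It illustrates the concept only, not the Xander implementation; the k-mers, scores, and blending rule are hypothetical.

```python
# Illustrative sketch (not the Xander implementation): blending de Bruijn
# graph coverage with per-k-mer HMM match scores into one weighted graph.
# All names and the scoring scheme here are hypothetical simplifications.

def combined_weight(kmer_coverage, hmm_match_score, alpha=0.5):
    """Blend assembly evidence (read coverage) with annotation evidence
    (profile-HMM match score) into a single edge weight."""
    return alpha * kmer_coverage + (1.0 - alpha) * hmm_match_score

# de Bruijn edges: (k-mer, next k-mer) -> read coverage
debruijn = {("ATG", "TGC"): 12, ("TGC", "GCA"): 9, ("TGC", "GCT"): 2}
# hypothetical HMM match scores for the destination k-mers
hmm_score = {"GCA": 3.1, "GCT": -0.4}

graph = {edge: combined_weight(cov, hmm_score[edge[1]])
         for edge, cov in debruijn.items() if edge[1] in hmm_score}
# A best-first search over `graph` would then extend the highest-weight
# paths, performing assembly and annotation simultaneously.
print(graph)
```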
Treatment of winery wastewater by electrochemical methods and advanced oxidation processes.
Orescanin, Visnja; Kollar, Robert; Nad, Karlo; Mikelic, Ivanka Lovrencic; Gustek, Stefica Findri
2013-01-01
The aim of this research was the development of a new system for the treatment of highly polluted wastewater (COD = 10240 mg/L; SS = 2860 mg/L) originating from the wine-making industry. The system consisted of a main treatment that included electrochemical methods (electro-oxidation and electrocoagulation using stainless steel, iron and aluminum electrode sets) with simultaneous sonication and recirculation in a strong electromagnetic field. Ozonation combined with UV irradiation in the presence of added hydrogen peroxide was applied for the post-treatment of the effluent. Following the combined treatment, the final removal efficiencies for color, turbidity, suspended solids and phosphates were over 99%, for Fe, Cu and ammonia approximately 98%, while the removal of COD and sulfates was 77% and 62%, respectively. The new approach combining electrochemical methods with ultrasound in a strong electromagnetic field resulted in significantly better removal efficiencies for the majority of the measured parameters compared to biological methods, advanced oxidation processes or electrocoagulation alone. Reduction of the treatment time represents another advantage of this new approach.
A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.
Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C
2017-07-01
Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.
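For readers unfamiliar with the machinery, the following fragment shows the standard Kolmogorov backward equation for the probability generating function of a linear birth-death (clonal expansion) process; it illustrates the kind of equation the identifiability analysis operates on, not the authors' exact generalized model.

```latex
% Illustrative only: the probability generating function \Phi(s,t) of a
% linear birth--death clonal expansion (division rate \alpha, death rate
% \beta, starting from one initiated cell) obeys the Kolmogorov backward
% equation
\[
  \frac{\partial \Phi(s,t)}{\partial t}
    = \alpha\,\Phi(s,t)^{2} - (\alpha+\beta)\,\Phi(s,t) + \beta,
  \qquad \Phi(s,0) = s .
\]
% Identifiability analysis asks which combinations of the rate parameters
% are recoverable from incidence data generated by such equations.
```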
Yang, Baohui; Lu, Teng
2017-01-01
For patients with ankylosing spondylitis (AS) and lower cervical spine fractures, surgical methods have mainly included the single anterior approach, the single posterior approach, and the combined anterior-posterior approach. However, various surgical procedures have been utilized because fractures in these previous studies were not clearly classified according to the presence of displacement. Consequently, controversy remains regarding the selection of the surgical procedure. In this study, a retrospective analysis was conducted in 12 patients with AS and lower cervical spine fractures and dislocations, exploring the single-session combined anterior-posterior approach for the treatment of AS with obviously displaced lower cervical spine fractures and dislocations, which demonstrated advantages such as good stabilization, satisfactory fracture healing, and easy postoperative care. However, to some extent, the difficulty and risk of this approach should be considered, and attention should be paid to the prevention of perioperative complications. PMID:28133616
Kovalchuk, Sergey V; Funkner, Anastasia A; Metsker, Oleg G; Yakovlev, Aleksey N
2018-06-01
An approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for the combination of different techniques. The implementation of the proposed approach for the simulation of acute coronary syndrome (ACS) was developed and used in an experimental study. A combination of data, text, and process mining techniques and machine learning approaches for the analysis of electronic health records (EHRs), together with discrete-event simulation (DES) and queueing theory for the simulation of patient flow, was proposed. The performed analysis of EHRs for ACS patients enabled the identification of several classes of clinical pathways (CPs), which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others). The proposed approach enables a more realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for the ACS patient flow obtained from EHRs in the Almazov National Medical Research Centre in Saint Petersburg, Russia. The proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of simulations of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others. Copyright © 2018 Elsevier Inc. All rights reserved.
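A minimal SimPy sketch of the discrete-event patient-flow ingredient is given below (SimPy is one of the Python libraries the authors name). The ward capacity, arrival rate, and length-of-stay distribution are made-up placeholders, not values from the Almazov data.

```python
# Minimal discrete-event patient-flow sketch in SimPy; all rates are
# hypothetical placeholders, not fitted to any EHR dataset.
import random
import simpy

RNG = random.Random(42)

def patient(env, name, ward):
    arrived = env.now
    with ward.request() as bed:          # queue for a bed
        yield bed
        wait = env.now - arrived
        los = RNG.expovariate(1 / 48.0)  # hypothetical mean stay: 48 h
        yield env.timeout(los)
        print(f"{name}: waited {wait:.1f} h, stayed {los:.1f} h")

def arrivals(env, ward):
    i = 0
    while True:
        yield env.timeout(RNG.expovariate(1 / 6.0))  # ~1 arrival / 6 h
        i += 1
        env.process(patient(env, f"patient-{i}", ward))

env = simpy.Environment()
ward = simpy.Resource(env, capacity=4)   # hypothetical 4-bed ward
env.process(arrivals(env, ward))
env.run(until=24 * 7)                    # simulate one week
```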
Integrating structure-based and ligand-based approaches for computational drug design.
Wilson, Gregory L; Lill, Markus A
2011-04-01
Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
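The single-step combination can be pictured as a simple grid computation: multiply the two likelihoods pointwise and normalize against a prior. The sketch below uses placeholder log-normal likelihood shapes and a placeholder 1/S prior, not the paper's fitted three-parameter forms or its derived noninformative prior.

```python
# Schematic single-step combination of two likelihoods on a grid of
# climate sensitivity S; all shapes and numbers are placeholders.
import numpy as np

S = np.linspace(0.1, 10, 2000)

def lognormal_like(S, mu, sigma):      # stand-in likelihood shape
    return np.exp(-(np.log(S) - mu) ** 2 / (2 * sigma ** 2)) / S

L_instr = lognormal_like(S, np.log(2.0), 0.45)   # instrumental evidence
L_paleo = lognormal_like(S, np.log(2.5), 0.60)   # paleoclimate evidence

L_comb = L_instr * L_paleo          # multiplicative combination
prior = 1.0 / S                     # placeholder noninformative prior
post = L_comb * prior
post /= np.trapz(post, S)           # normalize the posterior PDF

cdf = np.cumsum(post) * (S[1] - S[0])
lo, med, hi = (S[np.searchsorted(cdf, q)] for q in (0.05, 0.5, 0.95))
print(f"5-95% range: {lo:.2f}-{hi:.2f} K (median {med:.2f} K)")
```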
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. Verification of this kind requires a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system, tested only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results; these models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
Systematic process synthesis and design methods for cost effective waste minimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biegler, L.T.; Grossman, I.E.; Westerberg, A.W.
We present progress on our work to develop synthesis methods that aid in the design of cost-effective approaches to waste minimization. Work continues on combining the hierarchical approach of Douglas and coworkers with that of Grossmann and coworkers, in which bounding information allows the hierarchical decisions to fit within a mixed-integer programming approach. We continue work on the synthesis of reactors and of flexible separation processes. In the first instance, we strive for methods that reduce the production of potential pollutants, while in the second we look for ways to recover and recycle solvents.
Financial Planning for Information Technology: Conventional Approaches Need Not Apply.
ERIC Educational Resources Information Center
Falduto, Ellen F.
1999-01-01
Rapid advances in information technology have rendered conventional approaches to planning and budgeting useless, and no single method is universally appropriate. The most successful planning efforts are consistent with the institution's overall plan, and may combine conventional, opportunistic, and entrepreneurial approaches. Chief financial…
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved cloud detection method combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major classes by the K-means method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to eliminate interference such as smoke and snow from the first class. The method was tested with MODIS data acquired at different times under different underlying surface conditions. Visual inspection of the algorithm's performance showed that it can effectively detect smaller areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for the subsequent fire detection approach.
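A compact sketch of the two-stage pipeline follows, assuming synthetic pixel data, hypothetical band names, and invented threshold values rather than the paper's calibrated MODIS thresholds.

```python
# Sketch of the two-stage idea: K-means first splits pixels into two broad
# classes, then spectral thresholds remove smoke/snow from the cloud class.
import numpy as np
from sklearn.cluster import KMeans

# fake MODIS-like scene: rows = pixels, cols = (vis, nir, thermal) bands
pixels = np.random.rand(10000, 3).astype(np.float32)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = km.labels_

# assume the brighter cluster holds cloud/smoke/snow
bright = int(km.cluster_centers_[:, 0].argmax())
candidates = labels == bright

vis, nir, thermal = pixels[:, 0], pixels[:, 1], pixels[:, 2]
# hypothetical multi-spectral tests: snow suppressed by the nir/vis ratio,
# smoke by its warmer thermal signature relative to cloud tops
is_cloud = candidates & (nir / (vis + 1e-6) > 0.8) & (thermal < 0.4)
print(f"cloud pixels: {is_cloud.sum()} of {len(pixels)}")
```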
NASA Astrophysics Data System (ADS)
Eissa, Maya S.; Abou Al Alamein, Amal M.
2018-03-01
Different innovative spectrophotometric methods were introduced for the first time for the simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form without prior separation, through two manipulation approaches. These approaches were based either on two-wavelength selection in zero-order absorption spectra, namely the dual wavelength method (DWL) at 226 nm and 275 nm for valsartan, the induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril, and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (where sacubitril shows equal absorbance values at the two selected wavelengths); or on ratio spectra using their normalized spectra, namely the ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both drugs in their ratio spectra, the first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril, and mean centering of ratio spectra (MCR) at 260 nm for both drugs. Both sacubitril and valsartan showed linearity upon application of these methods in the range of 2.5-25.0 μg/mL. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™. The adopted spectrophotometric methods were also validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and F-test, and a comparative study was also performed with one-way ANOVA, showing no statistical difference with respect to precision and accuracy.
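The dual-wavelength idea can be illustrated numerically: at two wavelengths where the interferent absorbs equally, the absorbance difference is linear in the analyte alone. The sketch below uses synthetic absorptivities and standards, not the paper's measured spectra.

```python
# Sketch of the dual-wavelength (DWL) principle; spectra are synthetic.
import numpy as np

wl1, wl2 = 226, 275                     # nm, as selected for valsartan
conc = np.array([2.5, 5.0, 10.0, 15.0, 25.0])      # ug/mL standards

# synthetic absorbances: analyte obeys Beer's law, interferent (sacubitril)
# contributes the same constant absorbance at both wavelengths
A_analyte = 0.031 * conc
A_interf = 0.12
A1 = A_analyte + A_interf
A2 = 0.006 * conc + A_interf            # weaker analyte absorption at wl2

dA = A1 - A2                            # interferent cancels
slope, intercept = np.polyfit(conc, dA, 1)
print(f"calibration: dA = {slope:.4f} * C + {intercept:.4f}")
```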
Mixed methods in gerontological research: Do the qualitative and quantitative data “touch”?
Happ, Mary Beth
2010-01-01
This paper distinguishes between parallel and integrated mixed methods research approaches. Barriers to integrated mixed methods approaches in gerontological research are discussed and critiqued. The author presents examples of mixed methods gerontological research to illustrate approaches to data integration at the levels of data analysis, interpretation, and research reporting. As a summary of the methodological literature, four basic levels of mixed methods data combination are proposed. Opportunities for mixing qualitative and quantitative data are explored using contemporary examples from published studies. Data transformation and visual display, judiciously applied, are proposed as pathways to fuller mixed methods data integration and analysis. Finally, practical strategies for mixing qualitative and quantitative data types are explicated as gerontological research moves beyond parallel mixed methods approaches to achieve data integration. PMID:20077973
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
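The temperature replica-exchange ingredient of such searches reduces to a simple Metropolis swap criterion between neighboring replicas, sketched below; the well-tempered-ensemble and Hamiltonian dimensions of the authors' 2D scheme are not reproduced.

```python
# Sketch of temperature replica exchange: neighboring replicas swap
# configurations with the standard Metropolis criterion. Energies and
# temperatures are arbitrary illustration values.
import math
import random

def try_swap(E_i, T_i, E_j, T_j, rng=random):
    """Accept an i<->j swap with probability min(1, exp(d_beta * d_E))."""
    d_beta = 1.0 / T_i - 1.0 / T_j
    d_E = E_i - E_j
    return rng.random() < min(1.0, math.exp(d_beta * d_E))

temps = [300.0, 350.0, 410.0, 480.0]          # replica temperature ladder
energies = [-120.0, -112.0, -101.0, -90.0]    # current replica energies

for k in range(len(temps) - 1):
    if try_swap(energies[k], temps[k], energies[k + 1], temps[k + 1]):
        energies[k], energies[k + 1] = energies[k + 1], energies[k]
print(energies)
```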
Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun
2017-01-01
The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096
NASA Astrophysics Data System (ADS)
Dodd, Michael; Ferrante, Antonino
2017-11-01
Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the momentum, heat, and mass transfer between the droplets and the surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D² law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.
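For reference, the D² law used in the verification states that a droplet's squared diameter decays linearly in time in quiescent gas:

```latex
% The classical $D^2$ law: in quiescent gas the square of the droplet
% diameter decreases linearly in time,
\[
  d^{2}(t) = d_{0}^{2} - K\,t ,
\]
% where $d_0$ is the initial diameter and $K$ is the evaporation-rate
% constant set by the gas-phase transport properties.
```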
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design
NASA Astrophysics Data System (ADS)
Liu, Li; Olszewski, Piotr; Goh, Pong-Chai
A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process used to search for a better solution minimizing the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
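The division of labor between the two heuristics can be sketched as an SA acceptance loop whose candidate solutions come from GA operators. The cost function, solution encoding, and cooling schedule below are placeholders, not the paper's transit network model.

```python
# Skeleton of the combined SA+GA idea: SA decides acceptance, a GA
# operator proposes new candidates. All details are placeholders.
import math
import random

rng = random.Random(0)

def total_cost(route_set):               # stand-in for user + operator cost
    return sum(route_set)

def ga_propose(route_set):               # GA sub-process: mutation operator
    child = list(route_set)
    i = rng.randrange(len(child))
    child[i] = max(1, child[i] + rng.choice([-1, 1]))
    return child

current = [5, 8, 6, 9]                   # encoded candidate route subset
cost, T = total_cost(current), 100.0
while T > 0.1:
    cand = ga_propose(current)
    d = total_cost(cand) - cost
    if d < 0 or rng.random() < math.exp(-d / T):   # SA acceptance rule
        current, cost = cand, cost + d
    T *= 0.95                            # geometric cooling schedule
print(current, cost)
```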
Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.
2011-01-01
Multipoint water–fat separation techniques rely on the different water–fat phase shifts generated at multiple echo times to decompose water and fat. These methods therefore require complex-valued source images, which allow unambiguous separation of the water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantifying fat is through "magnitude-based" methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors but cannot estimate fat-fractions greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that, using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
A 2D MTF approach to evaluate and guide dynamic imaging developments.
Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno
2010-02-01
As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.
A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.
Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong
2017-01-01
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
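The combined-feature idea (spectral-domain preprocessing plus shallow statistics feeding one classifier) can be sketched as follows; the windows are synthetic and a small sklearn MLP stands in for the paper's deep architecture.

```python
# Sketch of combining spectral preprocessing with shallow features before
# classification. Data are synthetic; the classifier is a stand-in.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(sig):
    spec = np.abs(np.fft.rfft(sig))[:16]          # spectral preprocessing
    shallow = [sig.mean(), sig.std(), sig.min(), sig.max()]
    return np.concatenate([spec, shallow])

rng = np.random.default_rng(0)
# synthetic 2-class accelerometer windows (e.g., walk vs. rest)
X = np.array([window_features(rng.normal(0, 1 + c, 64))
              for c in (0, 1) for _ in range(100)])
y = np.repeat([0, 1], 100)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```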
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that for genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
ERIC Educational Resources Information Center
Peterson, Janey C.; Czajkowski, Susan; Charlson, Mary E.; Link, Alissa R.; Wells, Martin T.; Isen, Alice M.; Mancuso, Carol A.; Allegrante, John P.; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B.
2013-01-01
Objective: To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in 3 high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA), which we applied to 3 clinical chronic disease…
From Physical Process to Economic Cost - Integrated Approaches of Landslide Risk Assessment
NASA Astrophysics Data System (ADS)
Klose, M.; Damm, B.
2014-12-01
The nature of landslides is complex in many respects, with landslide hazard and impact being dependent on a variety of factors. This obviously requires an integrated assessment for fundamental understanding of landslide risk. Integrated risk assessment, according to the approach presented in this contribution, implies combining prediction of future landslide occurrence with analysis of landslide impact in the past. A critical step for assessing landslide risk in integrated perspective is to analyze what types of landslide damage affected people and property in which way and how people contributed and responded to these damage types. In integrated risk assessment, the focus is on systematic identification and monetization of landslide damage, and analytical tools that allow deriving economic costs from physical landslide processes are at the heart of this approach. The broad spectrum of landslide types and process mechanisms as well as nonlinearity between landslide magnitude, damage intensity, and direct costs are some main factors explaining recent challenges in risk assessment. The two prevailing approaches for assessing the impact of landslides in economic terms are cost survey (ex-post) and risk analysis (ex-ante). Both approaches are able to complement each other, but yet a combination of them has not been realized so far. It is common practice today to derive landslide risk without considering landslide process-based cause-effect relationships, since integrated concepts or new modeling tools expanding conventional methods are still widely missing. The approach introduced in this contribution is based on a systematic framework that combines cost survey and GIS-based tools for hazard or cost modeling with methods to assess interactions between land use practices and landslides in historical perspective. Fundamental understanding of landslide risk also requires knowledge about the economic and fiscal relevance of landslide losses, wherefore analysis of their impact on public budgets is a further component of this approach. In integrated risk assessment, combination of methods plays an important role, with the objective of collecting and integrating complex data sets on landslide risk.
A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.
Liu, Chun-Han; Liu, Lian
2017-05-08
BACKGROUND: Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS: We proposed a combined method by merging existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS: Pathways obtained from the different methods differed. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS: We have proposed a novel method combining four existing methods based on a rank product algorithm and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
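A rank-product combination of per-method pathway scores can be sketched in a few lines; this shows the generic algorithm, not the paper's exact implementation or weight thresholds.

```python
# Sketch of a rank-product combination of pathway scores from several
# methods. Scores are invented; small rank products indicate pathways
# consistently ranked near the top across methods.
import numpy as np
from scipy.stats import rankdata

# rows = pathways, cols = per-method scores (higher = more differential)
scores = np.array([[0.9, 0.7, 0.8],
                   [0.2, 0.4, 0.1],
                   [0.6, 0.9, 0.7]])

# rank within each method (rank 1 = most differential)
ranks = np.apply_along_axis(lambda c: rankdata(-c), 0, scores)
k = scores.shape[1]
rank_product = ranks.prod(axis=1) ** (1.0 / k)   # geometric mean of ranks
print(rank_product)
```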
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer-aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
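The second combination strategy, majority voting over classifiers trained on different feature sets, might look like the following sketch, with PCA, NMF, and simple statistics standing in for the paper's feature extractors and a resubstitution evaluation standing in for its cross-validation.

```python
# Sketch of majority voting across classifiers trained on different
# feature extractions; data and extractors are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, NMF
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, random_state=0)
X_pos = X - X.min()                     # NMF needs non-negative input

feature_sets = [PCA(n_components=10).fit_transform(X),
                NMF(n_components=10, max_iter=500,
                    random_state=0).fit_transform(X_pos),
                np.column_stack([X.mean(1), X.std(1), X.min(1), X.max(1)])]

votes = np.array([SVC().fit(F, y).predict(F) for F in feature_sets])
majority = (votes.sum(axis=0) >= 2).astype(int)   # 2-of-3 vote
print("agreement with labels:", (majority == y).mean())
```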
Combining p-values in replicated single-case experiments with multivariate outcome.
Solmi, Francesca; Onghena, Patrick
2014-01-01
Interest in combining probabilities has a long history in the global statistical community. The first steps in this direction were taken by Ronald Fisher, who introduced the idea of combining p-values of independent tests to provide a global decision rule when multiple aspects of a given problem were of interest. An interesting approach to this idea of combining p-values is the one based on permutation theory. The methods belonging to this particular approach exploit the permutation distributions of the tests to be combined, and use a simple function to combine probabilities. Combining p-values finds a very interesting application in the analysis of replicated single-case experiments. In this field the focus, while comparing different treatments effects, is more articulated than when just looking at the means of the different populations. Moreover, it is often of interest to combine the results obtained on the single patients in order to get more global information about the phenomenon under study. This paper gives an overview of how the concept of combining p-values was conceived, and how it can be easily handled via permutation techniques. Finally, the method of combining p-values is applied to a simulated replicated single-case experiment, and a numerical illustration is presented.
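Fisher's combination itself is a one-liner: the statistic -2*sum(ln p_i) is chi-square distributed with 2k degrees of freedom under independence, and SciPy implements it directly. The p-values below are invented; the permutation-based variant the paper discusses is not reproduced.

```python
# Fisher's method for combining independent p-values, via SciPy and
# spelled out by hand for comparison.
import numpy as np
from scipy.stats import combine_pvalues, chi2

p_values = [0.04, 0.20, 0.11, 0.07]     # e.g., one test per single case

stat, p_global = combine_pvalues(p_values, method="fisher")

# the same computation spelled out
stat_manual = -2 * np.sum(np.log(p_values))
p_manual = chi2.sf(stat_manual, df=2 * len(p_values))
print(p_global, p_manual)               # identical global decisions
```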
Sun, Jimeng; Hu, Jianying; Luo, Dijun; Markatou, Marianthi; Wang, Fei; Edabollahi, Shahram; Steinhubl, Steven E.; Daar, Zahra; Stewart, Walter F.
2012-01-01
Background: The ability to identify the risk factors related to an adverse condition, e.g., heart failures (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or literatures) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data driven insight for risk factor identification. Methods: We present a systematic approach to enhance known knowledge-based risk factors with additional potential risk factors derived from data. The core of our approach is a sparse regression model with regularization terms that correspond to both knowledge and data driven risk factors. Results: The approach is validated using a large dataset containing 4,644 heart failure cases and 45,981 controls. The outpatient electronic health records (EHRs) for these patients include diagnosis, medication, lab results from 2003–2010. We demonstrate that the proposed method can identify complementary risk factors that are not in the existing known factors and can better predict the onset of HF. We quantitatively compare different sets of risk factors in the context of predicting onset of HF using the performance metric, the Area Under the ROC Curve (AUC). The combined risk factors between knowledge and data significantly outperform knowledge-based risk factors alone. Furthermore, those additional risk factors are confirmed to be clinically meaningful by a cardiologist. Conclusion: We present a systematic framework for combining knowledge and data driven insights for risk factor identification. We demonstrate the power of this framework in the context of predicting onset of HF, where our approach can successfully identify intuitive and predictive risk factors beyond a set of known HF risk factors. PMID:23304365
Summary of tracking and identification methods
NASA Astrophysics Data System (ADS)
Blasch, Erik; Yang, Chun; Kadar, Ivan
2014-06-01
Over the last two decades, many solutions have arisen that combine target tracking estimation with classification methods. Target tracking includes developments from linear to non-linear and Gaussian to non-Gaussian processing. Pattern recognition includes detection, classification, recognition, and identification methods. Integrating tracking and pattern recognition has resulted in numerous approaches, and this paper seeks to organize them. We discuss the terminology so as to have a common framework for various standards such as the NATO STANAG 4162 - Identification Data Combining Process. In a use case, we provide a comparative example highlighting that location information, together with additional mission objectives from geographical, human, social, cultural, and behavioral modeling, is needed to determine identification, as classification alone does not allow determining identification or intent.
NASA Astrophysics Data System (ADS)
Harabuchi, Yu; Taketsugu, Tetsuya; Maeda, Satoshi
2017-04-01
We report a new approach to search for structures of minimum energy conical intersections (MECIs) automatically. The gradient projection (GP) method and the single-component artificial force induced reaction (SC-AFIR) method were combined in the present approach. As case studies, MECIs of benzene and naphthalene between their ground and first excited singlet electronic states (S0/S1-MECIs) were explored. All S0/S1-MECIs reported previously were obtained automatically. Furthermore, the number of force calculations was reduced compared to that required in the previous search. Improved convergence in the step in which various geometrical displacements are induced by SC-AFIR would contribute to the cost reduction.
A random forest learning assisted "divide and conquer" approach for peptide conformation search.
Chen, Xin; Yang, Bing; Lin, Zijing
2018-06-11
Computational determination of peptide conformations is challenging as it is a problem of finding minima in a high-dimensional space. The "divide and conquer" approach is promising for reliably reducing the search space size. A random forest learning model is proposed here to expand the scope of applicability of the "divide and conquer" approach. A random forest classification algorithm is used to characterize the distributions of the backbone φ-ψ units ("words"). A random forest supervised learning model is developed to analyze the combinations of the φ-ψ units ("grammar"). It is found that amino acid residues may be grouped as equivalent "words", while the φ-ψ combinations in low-energy peptide conformations follow a distinct "grammar". The finding of equivalent words empowers the "divide and conquer" method with the flexibility of fragment substitution. The learnt grammar is used to improve the efficiency of the "divide and conquer" method by removing unfavorable φ-ψ combinations without the need of dedicated human effort. The machine learning assisted search method is illustrated by efficiently searching the conformations of GGG/AAA/GGGG/AAAA/GGGGG through assembling the structures of GFG/GFGG. Moreover, the computational cost of the new method is shown to increase rather slowly with the peptide length.
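The "learnt grammar" filter can be sketched as a random forest trained on known low-energy phi-psi combinations and used to prune candidates before assembly; the features, toy labeling rule, and threshold below are hypothetical, not the paper's trained model.

```python
# Sketch of filtering candidate phi-psi combinations with a random forest;
# training data and the labeling rule are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# training set: (phi, psi) pairs for consecutive residues, labeled 1 if
# the combination occurred in a low-energy conformation
X_train = rng.uniform(-180, 180, size=(500, 4))   # phi1, psi1, phi2, psi2
y_train = (np.abs(X_train[:, 1] + 45) < 60).astype(int)  # toy rule

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

candidates = rng.uniform(-180, 180, size=(1000, 4))
keep = forest.predict_proba(candidates)[:, 1] > 0.5
print(f"retained {keep.sum()} of {len(candidates)} candidate combinations")
```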
A statistical approach to combining multisource information in one-class classifiers
Simonson, Katherine M.; Derek West, R.; Hansen, Ross L.; ...
2017-06-08
A new method is introduced in this paper for combining information from multiple sources to support one-class classification. The contributing sources may represent measurements taken by different sensors of the same physical entity, repeated measurements by a single sensor, or numerous features computed from a single measured image or signal. The approach utilizes the theory of statistical hypothesis testing, and applies Fisher's technique for combining p-values, modified to handle nonindependent sources. Classifier outputs take the form of fused p-values, which may be used to gauge the consistency of unknown entities with one or more class hypotheses. The approach enables rigorous assessment of classification uncertainties, and allows for traceability of classifier decisions back to the constituent sources, both of which are important for high-consequence decision support. Application of the technique is illustrated in two challenge problems, one for skin segmentation and the other for terrain labeling. Finally, the method is seen to be particularly effective for relatively small training samples.
Using Peptide-Level Proteomics Data for Detecting Differentially Expressed Proteins.
Suomi, Tomi; Corthals, Garry L; Nevalainen, Olli S; Elo, Laura L
2015-11-06
The expression of proteins can be quantified by high-throughput means using different types of mass spectrometers. In recent years, label-free methods for determining protein abundance have emerged. Although expression is initially measured at the peptide level, a common approach is to combine the peptide-level measurements into protein-level values before differential expression analysis. However, this simple combination is prone to inconsistencies between peptides and may lose valuable information. To this end, we introduce here a method for detecting differentially expressed proteins by combining peptide-level expression-change statistics. Using controlled spike-in experiments, we show that the approach of averaging peptide-level expression changes yields more accurate lists of differentially expressed proteins than does the conventional protein-level approach. This is particularly true when there are only a few replicate samples or the differences between the sample groups are small. The proposed technique is implemented in the Bioconductor package PECA, and it can be downloaded from http://www.bioconductor.org.
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
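A toy version of the underlying building block, searching for the weight of a two-marker linear combination that maximizes AUC, is sketched below; the paper's pairwise algorithm builds on such two-marker steps but is not reproduced here.

```python
# Sketch of maximizing AUC for a two-marker linear combination by a
# simple coefficient search; data are synthetic weak markers.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# two weak markers: small mean shift between classes
x1 = rng.normal(0.3 * y, 1.0)
x2 = rng.normal(0.2 * y, 1.0)

best = max((roc_auc_score(y, x1 + w * x2), w)
           for w in np.linspace(-3, 3, 121))
print(f"best AUC {best[0]:.3f} at weight {best[1]:.2f}")
```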
Andersson, Claes R; Hvidsten, Torgeir R; Isaksson, Anders; Gustafsson, Mats G; Komorowski, Jan
2007-01-01
Background: We address the issue of explaining the presence or absence of phase-specific transcription in budding yeast cultures under different conditions. To this end we use a model-based detector of gene expression periodicity to divide genes into classes depending on their behavior in experiments using different synchronization methods. While computational inference of gene regulatory circuits typically relies on expression similarity (clustering) in order to find classes of potentially co-regulated genes, this method instead takes advantage of known time profile signatures related to the studied process. Results: We explain the regulatory mechanisms of the inferred periodic classes with cis-regulatory descriptors that combine upstream sequence motifs with experimentally determined binding of transcription factors. By systematic statistical analysis we show that periodic classes are best explained by combinations of descriptors rather than single descriptors, and that different combinations correspond to periodic expression in different classes. We also find evidence for additive regulation in that the combinations of cis-regulatory descriptors associated with genes periodically expressed in fewer conditions are frequently subsets of combinations associated with genes periodically expressed in more conditions. Finally, we demonstrate that our approach retrieves combinations that are more specific towards known cell-cycle related regulators than the frequently used clustering approach. Conclusion: The results illustrate how a model-based approach to expression analysis may be particularly well suited to detect biologically relevant mechanisms. Our new approach makes it possible to provide more refined hypotheses about regulatory mechanisms of the cell cycle and it can easily be adjusted to reveal regulation of other, non-periodic, cellular processes. PMID:17939860
NASA Astrophysics Data System (ADS)
Bejuri, Wan Mohd Yaakob Wan; Mohamad, Mohd Murtadha
2014-11-01
This paper introduces a new grey-world-based feature detection and matching algorithm intended for use with mobile positioning systems. The approach uses a combination of a wireless local area network (WLAN) and a mobile phone camera to determine position under varying illumination in a practical and pervasive way. The signal combination is based on the signal strength retrieved from the WLAN access point and the image processing information from the building hallways. The results show that our method handles information relative to the illumination environment better than Harlan Hile's method, producing lower illumination error in five different environments.
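The grey-world step that underlies the illumination robustness is simple to state: rescale each color channel so the scene average becomes neutral grey. A minimal sketch, with a random array standing in for a hallway image:

```python
# Grey-world white balance: assume the average scene color is grey and
# rescale each channel accordingly. Input image is a synthetic stand-in.
import numpy as np

def grey_world(img):
    """img: H x W x 3 float array in [0, 1]; returns normalized copy."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    grey = channel_means.mean()
    return np.clip(img * (grey / channel_means), 0.0, 1.0)

frame = np.random.rand(480, 640, 3)      # stand-in hallway image
normalized = grey_world(frame)
print(normalized.reshape(-1, 3).mean(axis=0))  # roughly equal channel means
```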
Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V; Lee, Shinbuhm; Lee, Ho Nyung; Morozovska, Anna N; Kim, Yunseok
2016-07-28
Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complicated in terms of accurately determining the pure contribution of ferroelectricity to the PFM response. Here, we suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent AC amplitude sweep in combination with hysteresis loops in PFM. Our combined experimental and theoretical study verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire time courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations that includes the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
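As a concrete illustration of the augmented-system idea, the following minimal Python sketch promotes an unknown input course of a toy one-species network to an ordinary state variable and then applies conventional least-squares estimation to the augmented system. The model, the linear-in-time input parametrization, and all numerical values are illustrative assumptions, not the authors' variational derivation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy network: dx/dt = -k*x + u(t) with an unknown input course u(t).
# In the spirit of the augmented system, u is promoted to an ordinary
# state variable; a simple linear-in-time drift du/dt = a + b*t stands
# in for the analytically derived dynamics (hypothetical choice).
def augmented_rhs(t, y, k, a, b):
    x, u = y
    return [-k * x + u, a + b * t]

def residuals(theta, t_obs, x_obs):
    k, a, b, u0 = theta
    sol = solve_ivp(augmented_rhs, (t_obs[0], t_obs[-1]), [x_obs[0], u0],
                    t_eval=t_obs, args=(k, a, b))
    return sol.y[0] - x_obs

# Synthetic data from known parameters, plus measurement noise.
t_obs = np.linspace(0.0, 5.0, 25)
truth = solve_ivp(augmented_rhs, (0.0, 5.0), [1.0, 0.5],
                  t_eval=t_obs, args=(0.8, 0.1, 0.05))
x_obs = truth.y[0] + 0.01 * np.random.default_rng(0).normal(size=t_obs.size)

# Combined estimation of the rate constant and the input-course parameters.
fit = least_squares(residuals, x0=[0.5, 0.0, 0.0, 0.3], args=(t_obs, x_obs))
print(fit.x)
```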
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
Al Hares, Ghaith; Eschweiler, Jörg; Radermacher, Klaus
2015-06-01
The development of detailed and specific knowledge on the biomechanical behavior of loaded knee structures has received increased attention in recent years. Stress magnetic resonance imaging techniques have been introduced in previous work to study knee kinematics under load conditions. Previous studies captured the knee movement either in atypical supine loading positions, or in upright positions with the help of inclined supporting backrests, which are insufficient for movement capture under full-body weight-bearing conditions. In this work, we used a combined magnetic resonance imaging approach for the measurement and assessment of knee kinematics under full-body weight-bearing in single-legged stance. The proposed method is based on registration of high-resolution static magnetic resonance imaging data acquired in the supine position with low-resolution, quasi-static upright magnetic resonance imaging data acquired in loaded positions for different degrees of knee flexion. The proposed method was applied for the measurement of tibiofemoral kinematics in 10 healthy volunteers. The combined magnetic resonance imaging approach allows the non-invasive measurement of knee kinematics in single-legged stance and under physiological loading conditions. We believe that this method can provide an enhanced understanding of loaded knee kinematics. © IMechE 2015.
Martha, Cornelius T; Hoogendoorn, Jan-Carel; Irth, Hubertus; Niessen, Wilfried M A
2011-05-15
Current development in catalyst discovery includes combinatorial synthesis methods for the rapid generation of compound libraries combined with high-throughput performance-screening methods to determine the associated activities. Of these novel methodologies, mass spectrometry (MS) based flow chemistry methods are especially attractive due to the ability to combine sensitive detection of the formed reaction product with identification of introduced catalyst complexes. Recently, such a mass spectrometry based continuous-flow reaction detection system was utilized to screen silver-adducted ferrocenyl bidentate catalyst complexes for activity in a multicomponent synthesis of a substituted 2-imidazoline. Here, we determine the merits of different ionization approaches by studying the combination of sensitive detection of product formation in the continuous-flow system with the ability to simultaneously characterize the introduced [ferrocenyl bidentate+Ag](+) catalyst complexes. To this end, we study the ionization characteristics of electrospray ionization (ESI), atmospheric-pressure chemical ionization (APCI), no-discharge APCI, dual ESI/APCI, and dual APCI/no-discharge APCI. Finally, we assessed the application potential of the different ionization approaches by investigating ferrocenyl bidentate catalyst complex responses in different solvents. Copyright © 2011 Elsevier B.V. All rights reserved.
Stable and low diffusive hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1992-01-01
A new concept for upwinding is introduced, named the hybrid upwind splitting (HUS), which is achieved by combining the basically distinct flux vector splitting (FVS) and the flux difference splitting (FDS) approaches. The HUS approach yields upwind methods which share the robustness of the FVS schemes in the capture of nonlinear waves and the accuracy of some of the FDS schemes. Numerical illustrations are presented proving the relevance of the HUS methods for viscous calculations.
Combined mining: discovering informative knowledge in complex data.
Cao, Longbing; Zhang, Huaifeng; Zhao, Yanchang; Luo, Dan; Zhang, Chengqi
2011-06-01
Enterprise data mining applications often involve complex data such as multiple large heterogeneous data sources, user preferences, and business impact. In such situations, a single method or one-step mining is often limited in discovering informative knowledge. It would also be very time and space consuming, if not impossible, to join relevant large data sources for mining patterns consisting of multiple aspects of information. It is crucial to develop effective approaches for mining patterns combining necessary information from multiple relevant business lines, catering for real business settings and decision-making actions rather than just providing a single line of patterns. The recent years have seen increasing efforts on mining more informative patterns, e.g., integrating frequent pattern mining with classifications to generate frequent pattern-based classifiers. Rather than presenting a specific algorithm, this paper builds on our existing works and proposes combined mining as a general approach to mining for informative patterns combining components from either multiple data sets or multiple features or by multiple methods on demand. We summarize general frameworks, paradigms, and basic processes for multifeature combined mining, multisource combined mining, and multimethod combined mining. Novel types of combined patterns, such as incremental cluster patterns, can result from such frameworks, which cannot be directly produced by the existing methods. A set of real-world case studies has been conducted to test the frameworks, with some of them briefed in this paper. They identify combined patterns for informing government debt prevention and improving government service objectives, which show the flexibility and instantiation capability of combined mining in discovering informative knowledge in complex data.
Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method for network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs a genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
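As a rough sketch of the binary side of this combination, the code below runs a small genetic algorithm whose fitness is the code coverage reached by a test case. The coverage probe is a stub (real fuzzers would use instrumentation), and all names and GA settings are hypothetical rather than taken from the paper's prototype.

```python
import random

def run_target_and_measure_coverage(test_case: bytes) -> set:
    """Hypothetical stub: feed test_case to the protocol implementation
    and return the set of basic blocks executed (via instrumentation)."""
    return {b % 17 for b in test_case}  # placeholder for illustration

def fitness(test_case: bytes) -> int:
    # Fitness function focused on code coverage, as in the paper's GA design.
    return len(run_target_and_measure_coverage(test_case))

def mutate(tc: bytes) -> bytes:
    i = random.randrange(len(tc))
    return tc[:i] + bytes([random.randrange(256)]) + tc[i + 1:]

def crossover(a: bytes, b: bytes) -> bytes:
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

population = [bytes(random.randrange(256) for _ in range(32)) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
print(max(fitness(tc) for tc in population))
```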
Finite and spectral cell method for wave propagation in heterogeneous materials
NASA Astrophysics Data System (ADS)
Joulaian, Meysam; Duczek, Sascha; Gabbert, Ulrich; Düster, Alexander
2014-09-01
In the current paper we present a fast, reliable technique for simulating wave propagation in complex structures made of heterogeneous materials. The proposed approach, the spectral cell method, is a combination of the finite cell method and the spectral element method that significantly lowers preprocessing and computational expenditure. The spectral cell method takes advantage of explicit time-integration schemes coupled with a diagonal mass matrix to reduce the time spent on solving the equation system. By employing a fictitious domain approach, this method also helps to eliminate some of the difficulties associated with mesh generation. Besides introducing a proper, specific mass lumping technique, we also study the performance of the low-order and high-order versions of this approach based on several numerical examples. Our results show that the high-order version of the spectral cell method, when combined with explicit time-integration algorithms, requires less memory storage and less CPU time than the other versions considered. Moreover, as the implementation of the proposed method in available finite element programs is straightforward, these properties turn the method into a viable tool for practical applications such as structural health monitoring [1-3], quantitative ultrasound applications [4], or the active control of vibrations and noise [5, 6].
USDA-ARS?s Scientific Manuscript database
A new method of sample preparation was developed and is reported for the first time. The approach combines in-vial filtration with dispersive solid-phase extraction (d-SPE) in a fast and convenient cleanup of QuEChERS (quick, easy, cheap, effective, rugged, and safe) extracts. The method was appli...
ERIC Educational Resources Information Center
Eyisi, Daniel
2016-01-01
Research in science education is to discover the truth which involves the combination of reasoning and experiences. In order to find out appropriate teaching methods that are necessary for teaching science students problem-solving skills, different research approaches are used by educational researchers based on the data collection and analysis…
ERIC Educational Resources Information Center
Youngs, Howard; Piggot-Irvine, Eileen
2012-01-01
Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…
Eissa, Maya S; Abou Al Alamein, Amal M
2018-03-15
Different innovative spectrophotometric methods were introduced for the first time for the simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form without prior separation, through two manipulation approaches. These approaches were based either on two-wavelength selection in zero-order absorption spectra, namely the dual wavelength method (DWL) at 226 nm and 275 nm for valsartan, the induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril, and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (sacubitril shows equal absorbance values at the two selected wavelengths), or on ratio spectra using their normalized spectra, namely the ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both of them in their ratio spectra, first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril, and mean centering of ratio spectra (MCR) at 260 nm for both of them. Both sacubitril and valsartan showed linearity upon application of these methods in the range of 2.5-25.0 μg/mL. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™. The adopted spectrophotometric methods were also validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and F-test, and a comparative study was also carried out with one-way ANOVA, showing no significant difference in precision and accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
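The dual wavelength principle lends itself to a short sketch: if the interfering component absorbs equally at the two chosen wavelengths, the absorbance difference follows Beer's law for the analyte alone. The absorbances and calibration slope below are hypothetical placeholders, not the paper's validated calibration data.

```python
# Dual-wavelength method: pick two wavelengths at which the interfering
# component absorbs equally, so its contribution cancels in the
# absorbance difference (Beer-Lambert additivity).

def dual_wavelength_conc(A_l1, A_l2, slope, intercept=0.0):
    """Concentration from the difference A(l1) - A(l2), using a
    calibration line fitted on pure-analyte standards."""
    dA = A_l1 - A_l2
    return (dA - intercept) / slope

# Mixture absorbances at 226 nm and 275 nm (the valsartan channel in the
# paper); sacubitril contributes equally at both, so it cancels in dA.
A_226, A_275 = 0.842, 0.415   # hypothetical measured absorbances
slope_valsartan = 0.021       # hypothetical dA per (ug/mL)
print(dual_wavelength_conc(A_226, A_275, slope_valsartan))  # ug/mL
```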
Holovachov, Oleksandr
2016-01-01
Metabarcoding is becoming a common tool used to assess and compare diversity of organisms in environmental samples. Identification of OTUs is one of the critical steps in the process and several taxonomy assignment methods were proposed to accomplish this task. This publication evaluates the quality of reference datasets, alongside with several alignment and phylogeny inference methods used in one of the taxonomy assignment methods, called tree-based approach. This approach assigns anonymous OTUs to taxonomic categories based on relative placements of OTUs and reference sequences on the cladogram and support that these placements receive. In tree-based taxonomy assignment approach, reliable identification of anonymous OTUs is based on their placement in monophyletic and highly supported clades together with identified reference taxa. Therefore, it requires high quality reference dataset to be used. Resolution of phylogenetic trees is strongly affected by the presence of erroneous sequences as well as alignment and phylogeny inference methods used in the process. Two preparation steps are essential for the successful application of tree-based taxonomy assignment approach. Curated collections of genetic information do include erroneous sequences. These sequences have detrimental effect on the resolution of cladograms used in tree-based approach. They must be identified and excluded from the reference dataset beforehand. Various combinations of multiple sequence alignment and phylogeny inference methods provide cladograms with different topology and bootstrap support. These combinations of methods need to be tested in order to determine the one that gives highest resolution for the particular reference dataset. Completing the above mentioned preparation steps is expected to decrease the number of unassigned OTUs and thus improve the results of the tree-based taxonomy assignment approach.
[How timely are the methods taught in psychotherapy training and practice?].
Beutel, Manfred E; Michal, Matthias; Wiltink, Jörg; Subic-Wrana, Claudia
2015-01-01
Even though many psychotherapists consider themselves to be eclectic or integrative, training and reimbursement in the modern healthcare system are clearly oriented toward the model of distinct psychotherapy approaches. Prompted by the proposition to favor general, disorder-oriented psychotherapy, we investigate how timely distinctive methods are that are taught in training and practice. We reviewed the pertinent literature regarding general and specific factors, the effectiveness of integrative and eclectic treatments, orientation toward specific disorders, manualization and psychotherapeutic training. There is a lack of systematic studies on the efficacy of combining therapy methods from different approaches. The first empirical findings reveal that a superiority of combined versus single treatment methods has yet to be demonstrated. The development of transnosological manuals shows the limits of disorder-specific treatment. General factors such as therapeutic alliance or education about the model of disease and treatment rationale require specific definitions. Taking reference to a specific treatment approach provides important consistency of theory, training therapy and supervision, though this does not preclude an openness toward other therapy concepts. Current manualized examples show that methods and techniques can indeed be integrated from other approaches. Integrating different methods can also be seen as a developmental task for practitioners and researchers which may be mastered increasingly better with more experience.
Combining formal and functional approaches to topic structure.
Zellers, Margaret; Post, Brechtje
2012-03-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to adopt the insights of both PTI's qualitative analysis and EP's quantitative analysis and combine them into a multiple-methods approach. One realm in which it is possible to combine these frameworks is in the analysis of discourse topic structure and the prosodic cues relevant to it. By combining a quantitative and a qualitative approach to discourse topic structure, it is possible to give a better account of the observed variation in prosody, for example in the case of fundamental frequency (F0) peak timing, which can be explained in terms of pitch accent distribution over different topic structure categories. Similarly, local and global patterns in speech rate variation can be better explained and motivated by adopting insights from both PTI and EP in the study of topic structure. Combining PTI and EP can provide better accounts of speech data as well as opening up new avenues of investigation which would not have been possible in either approach alone.
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
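A minimal sketch of the sampling idea, fitting a 2D line rather than a homography: new minimal samples are improvised from a memory of previously good samples with a harmony-memory-considering rate, instead of being drawn purely at random as in RANSAC. The toy model, tolerance, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def count_inliers(model, points, tol):
    a, b = model  # toy model: line y = a*x + b
    return int(np.sum(np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol))

def fit_from_sample(points, idx):
    (x1, y1), (x2, y2) = points[idx]
    a = (y2 - y1) / (x2 - x1 + 1e-12)
    return (a, y1 - a * x1)

def hs_ransac(points, iters=200, memory_size=10, hmcr=0.9, tol=0.1):
    n = len(points)
    # Harmony memory: minimal samples scored by their inlier counts.
    memory = [rng.choice(n, 2, replace=False) for _ in range(memory_size)]
    scores = [count_inliers(fit_from_sample(points, s), points, tol)
              for s in memory]
    for _ in range(iters):
        # Improvise a new sample: reuse indices from memory with
        # probability hmcr, otherwise draw at random (the HS-style
        # replacement for RANSAC's purely random sampling).
        new = np.array([memory[rng.integers(memory_size)][j]
                        if rng.random() < hmcr else rng.integers(n)
                        for j in range(2)])
        if new[0] == new[1]:
            continue
        sc = count_inliers(fit_from_sample(points, new), points, tol)
        worst = int(np.argmin(scores))
        if sc > scores[worst]:  # replace the worst harmony
            memory[worst], scores[worst] = new, sc
    return fit_from_sample(points, memory[int(np.argmax(scores))])

pts = np.column_stack([np.linspace(0, 1, 100),
                       2 * np.linspace(0, 1, 100) + 0.5])
pts[::7] += rng.normal(0, 1, pts[::7].shape)  # inject outliers
print(hs_ransac(pts))
```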
A simple finite element method for non-divergence form elliptic equation
Mu, Lin; Ye, Xiu
2017-03-01
Here, we develop a simple finite element method for solving second order elliptic equations in non-divergence form by combining a least squares concept with discontinuous approximations. This simple method has a symmetric and positive definite system and can be easily analyzed and implemented. The method also accommodates general meshes with polytopal elements and hanging nodes. We prove that our finite element solution converges to the true solution as the mesh size approaches zero. Numerical examples demonstrate the robustness and flexibility of the method.
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach can guarantee obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
Enhancing Institutional Assessment Efforts through Qualitative Methods
ERIC Educational Resources Information Center
Van Note Chism, Nancy; Banta, Trudy W.
2007-01-01
Qualitative methods can do much to describe context and illuminate the why behind patterns encountered in institutional assessment. Alone, or in combination with quantitative methods, they should be the approach of choice for many of the most important assessment questions. (Contains 1 table.)
Finding False Paths in Sequential Circuits
NASA Astrophysics Data System (ADS)
Matrosova, A. Yu.; Andreeva, V. V.; Chernyshov, S. V.; Rozhkova, S. V.; Kudin, D. V.
2018-02-01
A method for finding false paths in sequential circuits is developed. In contrast with the heuristic approaches currently used abroad, we suggest a precise method based on applying operations to Reduced Ordered Binary Decision Diagrams (ROBDDs) extracted from the combinational part of a sequential controlling logic circuit. The method finds false paths when the transfer sequence length is not more than a given value, and obviates the need to investigate combinational circuit equivalents of the given lengths. The possibility of using the developed method for more complicated circuits is discussed.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1991-01-01
The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen posts and system ducting. The first approach will consist of using state of the art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for composite load dynamic, acoustic, high pressure, and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods with and without strategically selected experimental data.
NASA Astrophysics Data System (ADS)
Darbandi, Masoud; Abrar, Bagher
2018-01-01
The spectral-line weighted-sum-of-gray-gases (SLW) model is considered a modern global model, which can be used in predicting the thermal radiation heat transfer within combustion fields. Past SLW model users have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficient. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields, where the gas temperature is very different from the reference temperature. Consequently, the results of the SLW model incorporated with the classical reference approach, hereafter called the classical SLW method, are highly sensitive to the reference temperature magnitude in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is a particular one among the eight possible reference approach forms reported recently by Solovjov, et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach can provide more accurate total emissivity calculation than the classical reference approach if it is coupled with the SLW method. This would be particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To demonstrate this, we use both the classical and modified SLW methods and calculate the radiation transfer in such fields. It is shown that the modified SLW method can almost eliminate the sensitivity of the achieved results to the chosen reference temperature in treating highly non-isothermal combustion fields.
A PDE Sensitivity Equation Method for Optimal Aerodynamic Design
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1996-01-01
The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
Palmprint authentication using multiple classifiers
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Zhang, David
2004-08-01
This paper investigates the performance improvement for palmprint authentication using multiple classifiers. The proposed methods on personal authentication using palmprints can be divided into three categories: appearance-, line-, and texture-based. A combination of these approaches can be used to achieve higher performance. We propose to simultaneously extract palmprint features from PCA, line detectors and Gabor filters and combine their corresponding matching scores. This paper also investigates the comparative performance of simple combination rules and the hybrid fusion strategy to achieve performance improvement. Our experimental results on a database of 100 users demonstrate the usefulness of such an approach over those based on individual classifiers.
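A minimal sketch of score-level fusion in this spirit: matching scores from the three representations are min-max normalized and combined with a weighted-sum rule, one of the simple combination rules. The scores and the equal weights are hypothetical; the paper's hybrid fusion strategy is not reproduced here.

```python
import numpy as np

def min_max_norm(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(pca_scores, line_scores, gabor_scores,
                weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted-sum rule over normalized matching scores from the
    three palmprint representations."""
    normed = [min_max_norm(s) for s in (pca_scores, line_scores, gabor_scores)]
    return sum(w * s for w, s in zip(weights, normed))

# Hypothetical matching scores of one probe against 5 gallery templates.
fused = fuse_scores([0.2, 0.9, 0.4, 0.1, 0.3],
                    [0.3, 0.8, 0.5, 0.2, 0.2],
                    [0.1, 0.95, 0.3, 0.15, 0.25])
print(int(np.argmax(fused)))  # index of the best-scoring identity
```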
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
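The overall FDTSD pipeline can be sketched as follows, under stated assumptions: the arbitrary surface velocity is transformed with an FFT, spectral terms far below the peak are clipped, and each retained complex-valued term is processed by a single-frequency field solve and superposed. The single-frequency solver here is a placeholder stub; the fast nearfield method itself is not reimplemented.

```python
import numpy as np

def single_frequency_pressure(freq):
    """Stub standing in for a fast-nearfield-method solve at one
    frequency; in the real method this is the expensive spatial part."""
    return np.exp(-freq / 5e6)  # placeholder spatial response

def fdtsd_pressure(v_t, dt, clip_rel=1e-3):
    # 1) Transform the arbitrary surface velocity to the frequency domain.
    V = np.fft.rfft(v_t)
    freqs = np.fft.rfftfreq(len(v_t), dt)
    # 2) Spectral clipping: drop terms far below the peak magnitude,
    #    reducing the number of single-frequency field solves.
    keep = np.abs(V) > clip_rel * np.abs(V).max()
    # 3) Process each retained complex-valued term separately and
    #    superpose the results (linearity of the wave problem).
    return sum(V[i] * single_frequency_pressure(freqs[i])
               for i in np.where(keep)[0])

dt = 1e-8
t = np.arange(0.0, 2e-6, dt)
v = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 5e-7) / 2e-7) ** 2)  # tone burst
print(fdtsd_pressure(v, dt))
```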
Grambow, Colin A.; Jamal, Adeel; Li, Yi-Pei; ...
2017-12-22
Ketohydroperoxides are important in liquid-phase autoxidation and in gas-phase partial oxidation and pre-ignition chemistry, but because of their low concentration, instability, and various analytical chemistry limitations, it has been challenging to experimentally determine their reactivity, and only a few pathways are known. In the present work, 75 elementary-step unimolecular reactions of the simplest γ-ketohydroperoxide, 3-hydroperoxypropanal, were discovered by a combination of density functional theory with several automated transition-state search algorithms: the Berny algorithm coupled with the freezing string method, single- and double-ended growing string methods, the heuristic KinBot algorithm, and the single-component artificial force induced reaction method (SC-AFIR). The present joint approach significantly outperforms previous manual and automated transition-state searches – 68 of the reactions of γ-ketohydroperoxide discovered here were previously unknown and completely unexpected. All of the methods found the lowest-energy transition state, which corresponds to the first step of the Korcek mechanism, but each algorithm except for SC-AFIR detected several reactions not found by any of the other methods. We show that the low-barrier chemical reactions involve promising new chemistry that may be relevant in atmospheric and combustion systems. Our study highlights the complexity of chemical space exploration and the advantage of combined application of several approaches. Altogether, the present work demonstrates both the power and the weaknesses of existing fully automated approaches for reaction discovery, which suggests possible directions for further method development and assessment in order to enable reliable discovery of all important reactions of any specified reactant(s).
Effectiveness of Social Media for Communicating Health Messages in Ghana
ERIC Educational Resources Information Center
Bannor, Richard; Asare, Anthony Kwame; Bawole, Justice Nyigmah
2017-01-01
Purpose: The purpose of this paper is to develop an in-depth understanding of the effectiveness, evolution and dynamism of the current health communication media used in Ghana. Design/methodology/approach: This paper uses a multi-method approach which utilizes a combination of qualitative and quantitative approaches. In-depth interviews are…
Exploring a Flipped Classroom Approach in a Japanese Language Classroom: A Mixed Methods Study
ERIC Educational Resources Information Center
Prefume, Yuko Enomoto
2015-01-01
A flipped classroom approach promotes active learning and increases teacher-student interactions by maximizing face-to-face class time (Hamdan, McKnight, Mcknight, Arfstrom, & Arfstrom, 2013). In this study, "flipped classroom" is combined with the use of technology and is described as an instructional approach that provides lectures…
A Collaborative Approach to Family Literacy Evaluation Strategies.
ERIC Educational Resources Information Center
Landerholm, Elizabeth; Karr, Jo Ann; Mushi, Selina
A collaborative approach to program evaluation combined with the use of a variety of evaluation methods using currently available technology can yield valuable information about the effectiveness of family literacy programs. Such an approach was used for McCosh Even Start, a federally-funded family literacy program located at McCosh School in an…
A Comprehensive Planning Model
ERIC Educational Resources Information Center
Temkin, Sanford
1972-01-01
Combines elements of the problem solving approach inherent in methods of applied economics and operations research and the structural-functional analysis common in social science modeling to develop an approach for economic planning and resource allocation for schools and other public sector organizations. (Author)
Gabb, Henry A.; Blake, Catherine
2016-01-01
Background: Simultaneous or sequential exposure to multiple environmental stressors can affect chemical toxicity. Cumulative risk assessments consider multiple stressors but it is impractical to test every chemical combination to which people are exposed. New methods are needed to prioritize chemical combinations based on their prevalence and possible health impacts. Objectives: We introduce an informatics approach that uses publicly available data to identify chemicals that co-occur in consumer products, which account for a significant proportion of overall chemical load. Methods: Fifty-five asthma-associated and endocrine disrupting chemicals (target chemicals) were selected. A database of 38,975 distinct consumer products and 32,231 distinct ingredient names was created from online sources, and PubChem and the Unified Medical Language System were used to resolve synonymous ingredient names. Synonymous ingredient names are different names for the same chemical (e.g., vitamin E and tocopherol). Results: Nearly one-third of the products (11,688 products, 30%) contained ≥ 1 target chemical and 5,229 products (13%) contained > 1. Of the 55 target chemicals, 31 (56%) appear in ≥ 1 product and 19 (35%) appear under more than one name. The most frequent three-way chemical combination (2-phenoxyethanol, methyl paraben, and ethyl paraben) appears in 1,059 products. Further work is needed to assess combined chemical exposures related to the use of multiple products. Conclusions: The informatics approach increased the number of products considered in a traditional analysis by two orders of magnitude, but missing/incomplete product labels can limit the effectiveness of this approach. Such an approach must resolve synonymy to ensure that chemicals of interest are not missed. Commonly occurring chemical combinations can be used to prioritize cumulative toxicology risk assessments. Citation: Gabb HA, Blake C. 2016. An informatics approach to evaluating combined chemical exposures from consumer products: a case study of asthma-associated chemicals and potential endocrine disruptors. Environ Health Perspect 124:1155–1165; http://dx.doi.org/10.1289/ehp.1510529 PMID:26955064
Time Transfer from Combined Analysis of GPS and TWSTFT Data
2008-12-01
This paper presents the time transfer results obtained from the combination of GPS data and TWSTFT data. Two different methods... view, constrained by TWSTFT data. Using the Vondrak-Cepek algorithm, the second approach (named PPP+TW) combines the TWSTFT time transfer data with
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rüger, Robert, E-mail: rueger@scm.com; Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam; Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig
2016-05-14
We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.
2016-12-01
chosen rather than complex ones, and responds to the criticism of the DTA approach. Chapter IV provides three separate case studies in defense R&D... defense R&D projects. To this end, the first section describes the case study method and the advantages of using simple models over more complex ones... the analysis lacked empirical data and relied on subjective data, the analysis successfully combined the DTA approach with the case study method and
A novel approach to identifying regulatory motifs in distantly related genomes
Van Hellemont, Ruth; Monsieurs, Pieter; Thijs, Gert; De Moor, Bart; Van de Peer, Yves; Marchal, Kathleen
2005-01-01
Although proven successful in the identification of regulatory motifs, phylogenetic footprinting methods still show some shortcomings. To assess these difficulties, most apparent when applying phylogenetic footprinting to distantly related organisms, we developed a two-step procedure that combines the advantages of sequence alignment and motif detection approaches. The results on well-studied benchmark datasets indicate that the presented method outperforms other methods when the sequences become either too long or too heterogeneous in size. PMID:16420672
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is known to be efficient for nonlinear optimization problems on large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method together based on a restart strategy, in order to take advantage of the two kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to gain a nonnegative constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has a better performance than the conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
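A highly simplified sketch of the two ingredients named above, a quadratic penalty enforcing nonnegativity and a conjugate gradient iteration with a restart strategy, is given below on a toy linear forward model. The fixed step size, the Polak-Ribiere update, and the restart period are illustrative choices, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 60))     # toy forward operator (stand-in for FMT model)
x_true = np.maximum(rng.normal(0, 1, 60), 0.0)
b = A @ x_true

mu = 10.0  # quadratic penalty weight for the nonnegativity constraint

def grad(x):
    # Gradient of 0.5*||Ax - b||^2 + 0.5*mu*||min(x, 0)||^2:
    # the penalty term only activates for negative entries.
    return A.T @ (A @ x - b) + mu * np.minimum(x, 0.0)

def cg_with_restart(x, iters=300, restart=20, lr=1e-3):
    g = grad(x)
    d = -g
    for k in range(iters):
        x = x + lr * d               # fixed step for brevity (no line search)
        g_new = grad(x)
        if (k + 1) % restart == 0:
            d = -g_new               # restart: fall back to steepest descent
        else:
            beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-30))  # PR+
            d = -g_new + beta * d
        g = g_new
    return x

x_rec = cg_with_restart(np.zeros(60))
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```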
Association analysis of multiple traits by an approach of combining P values.
Chen, Lili; Wang, Yong; Zhou, Yajing
2018-03-01
Increasing evidence shows that one variant can affect multiple traits, a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most of these methods are usually suitable for detecting common variants associated with multiple traits. However, because of the low minor allele frequencies of rare variants, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for a single trait to test the association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. Further, we take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to different directions of effects of causal variants.
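For illustration, a weighted P value combination statistic of this general flavor is sketched below. The MAF-based weights, the truncation of clearly null variants, and the permutation calibration are common rare-variant conventions assumed here; they are not necessarily the exact ADA construction.

```python
import numpy as np

def weighted_p_combination(p_values, mafs, truncation=0.5):
    """Illustrative statistic: weighted combination of per-variant P
    values, up-weighting rarer variants and ignoring variants whose
    P values exceed a truncation threshold."""
    p = np.asarray(p_values, dtype=float)
    maf = np.asarray(mafs, dtype=float)
    w = 1.0 / np.sqrt(maf * (1.0 - maf))  # common rare-variant weighting
    keep = p <= truncation
    return float(np.sum(w[keep] * -np.log(p[keep])))

def permutation_pvalue(stat_obs, p_matrix_null, mafs):
    # p_matrix_null: per-variant P values recomputed under permutations
    # of the phenotypes; calibrates the null distribution of the statistic.
    null = [weighted_p_combination(row, mafs) for row in p_matrix_null]
    return (1 + sum(s >= stat_obs for s in null)) / (1 + len(null))

# Hypothetical per-variant P values (from reverse regression) and MAFs.
p_obs = [0.004, 0.3, 0.02, 0.8, 0.6]
mafs = [0.01, 0.02, 0.005, 0.03, 0.01]
print(weighted_p_combination(p_obs, mafs))
```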
Zhou, Jin J.; Cho, Michael H.; Lange, Christoph; Lutz, Sharon; Silverman, Edwin K.; Laird, Nan M.
2015-01-01
Many correlated disease variables are analyzed jointly in genetic studies in the hope of increasing power to detect causal genetic variants. One approach involves assessing the relationship between each phenotype and each single nucleotide polymorphism (SNP) individually and using a Bonferroni correction for the effective number of tests conducted. Alternatively, one can apply a multivariate regression or a dimension reduction technique, such as principal component analysis (PCA), and test for the association with the principal components (PC) of the phenotypes rather than the individual phenotypes. Inspired by the previous approaches of combining phenotypes to maximize heritability at individual SNPs, in this paper, we propose to construct a maximally heritable phenotype (MaxH) by taking advantage of the estimated total heritability and co-heritability. The heritability and co-heritability only need to be estimated once, therefore our method is applicable to genome-wide scans. MaxH phenotype is a linear combination of the individual phenotypes with increased heritability and power over the phenotypes being combined. Simulations show that the heritability and power achieved agree well with the theory for large samples and two phenotypes. We compare our approach with commonly used methods and assess both the heritability and the power of the MaxH phenotype. Moreover we provide suggestions for how to choose the phenotypes for combination. An application of our approach to a COPD genome-wide association study shows the practical relevance. PMID:26111731
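One plausible formalization of the MaxH construction, assumed here for illustration: with G the genetic covariance implied by the estimated heritabilities and co-heritabilities, and P the phenotypic covariance, the heritability of the combination w'y is the Rayleigh quotient (w'Gw)/(w'Pw), which is maximized by a generalized eigenvector.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 2-trait example. G is built from estimated heritabilities
# (diagonal) and co-heritabilities (off-diagonal); P is the phenotypic
# covariance. Both matrices are invented for this sketch.
P = np.array([[1.00, 0.30],
              [0.30, 1.00]])
G = np.array([[0.40, 0.25],
              [0.25, 0.35]])

# Maximizing h2(w) = (w'Gw)/(w'Pw) is the generalized symmetric
# eigenproblem G w = h2 P w; eigh returns eigenvalues in ascending order.
vals, vecs = eigh(G, P)
w_max = vecs[:, -1]
print("max heritability:", vals[-1])
print("MaxH weights:", w_max / np.linalg.norm(w_max))
```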
Yuan, Yuwei; Hu, Guixian; Chen, Tianjin; Zhao, Ming; Zhang, Yongzhi; Li, Yong; Xu, Xiahong; Shao, Shengzhi; Zhu, Jiahong; Wang, Qiang; Rogers, Karyne M
2016-07-20
Multielement and stable isotope (δ(13)C, δ(15)N, δ(2)H, δ(18)O, (207)Pb/(206)Pb, and (208)Pb/(206)Pb) analyses were combined to provide a new chemometric approach to improve the discrimination between organic and conventional Brassica vegetable production. Different combinations of organic and conventional fertilizer treatments were used to demonstrate this authentication approach using Brassica chinensis planted in experimental test pots. Stable isotope analyses (δ(15)N and δ(13)C) of B. chinensis using elemental analyzer-isotope ratio mass spectrometry easily distinguished organic and chemical fertilizer treatments. However, for low-level application fertilizer treatments, this dual isotope approach became indistinguishable over time. Using a chemometric approach (combined isotope and elemental approach), organic and chemical fertilizer mixes and low-level applications of synthetic and organic fertilizers were detectable in B. chinensis and their associated soils, improving the detection limit beyond the capacity of individual isotopes or elemental characterization. LDA shows strong promise as an improved method to discriminate genuine organic Brassica vegetables from produce treated with chemical fertilizers and could be used as a robust test for organic produce authentication.
ERIC Educational Resources Information Center
Clark, Renee M.; Kaw, Autar; Besterfield-Sacre, Mary
2016-01-01
Blended, flipped, and semi-flipped instructional approaches were used in various sections of a numerical methods course for undergraduate mechanical engineers. During the spring of 2014, a blended approach was used; in the summer of 2014, a combination of blended and flipped instruction was used to deliver a semi-flipped course; and in the fall of…
Multi-Detection Events, Probability Density Functions, and Reduced Location Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Schrom, Brian T.
2016-03-01
Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field of regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
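A toy version of such a Bayesian update is sketched below: a uniform prior over a grid of candidate source locations is multiplied by per-station likelihoods for detections and non-detections and then renormalized. The ATM prediction and the detection model are placeholder stubs, and the grid, minimum detectable concentration, and station data are invented for illustration.

```python
import numpy as np

nx, ny = 50, 50
posterior = np.ones((nx, ny)) / (nx * ny)  # uniform prior over source cells

def predicted_concentration(ix, iy, station):
    """Stub for the ATM backtrack prediction: expected concentration at
    `station` if the source were located in grid cell (ix, iy)."""
    sx, sy = station
    d2 = (ix - sx) ** 2 + (iy - sy) ** 2
    return 100.0 * np.exp(-d2 / 200.0)

def likelihood(ix, iy, station, detected, mdc=1.0):
    c = predicted_concentration(ix, iy, station)
    # Simple smooth detection model around a minimum detectable concentration.
    p_detect = 1.0 / (1.0 + np.exp(-(c - mdc)))
    return p_detect if detected else 1.0 - p_detect

# Combine one xenon detection and one non-detection mathematically.
for station, detected in [((10, 12), True), ((40, 35), False)]:
    for ix in range(nx):
        for iy in range(ny):
            posterior[ix, iy] *= likelihood(ix, iy, station, detected)
posterior /= posterior.sum()
print(np.unravel_index(posterior.argmax(), posterior.shape))  # most probable cell
```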
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
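The additive, sequential character of lod scores is easy to show in code: curves from independent studies are summed over a common grid of recombination fractions, and the pooled maximum is compared with the classical threshold of 3. The curves below are invented purely for illustration.

```python
import numpy as np

# Lod scores are additive across independent pedigrees/studies, so
# published curves can be pooled on a common grid of recombination
# fractions theta in [0, 0.5].
theta = np.linspace(0.0, 0.5, 51)
study_curves = [
    1.2 * np.exp(-((theta - 0.08) / 0.12) ** 2),  # illustrative curves
    0.9 * np.exp(-((theta - 0.10) / 0.15) ** 2),
    1.4 * np.exp(-((theta - 0.07) / 0.10) ** 2),
]
pooled = np.sum(study_curves, axis=0)

i = int(np.argmax(pooled))
print(f"max pooled lod = {pooled[i]:.2f} at theta = {theta[i]:.2f}")
print("significant linkage" if pooled[i] >= 3.0 else "not significant")
```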
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
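A minimal ECOC decoding sketch follows. The code matrix, the observed binary outputs, and the per-classifier weights are all illustrative. Weighted-agreement decoding stands in for the first reviewed approach (tunable classifier weights), while viewing misclassifications as bit inversions on the codeword motivates the second.

```python
import numpy as np

# ECOC for 4 classes and 5 binary dichotomies (illustrative code matrix;
# rows = classes, columns = binary classifiers, entries in {-1, +1}).
code = np.array([[+1, +1, +1, -1, -1],
                 [+1, -1, -1, +1, -1],
                 [-1, +1, -1, -1, +1],
                 [-1, -1, +1, +1, +1]])

def decode(binary_outputs, weights=None):
    """Assign the class whose codeword agrees best with the observed
    bit vector; per-classifier weights give the weighted variant."""
    y = np.asarray(binary_outputs)
    w = np.ones(code.shape[1]) if weights is None else np.asarray(weights)
    # Binary errors act like bit-inversion noise on the codeword.
    scores = (code * y * w).sum(axis=1)
    return int(np.argmax(scores))

print(decode([+1, -1, -1, +1, -1]))                # exact codeword: class 1
print(decode([+1, -1, +1, +1, -1],                 # one inverted bit
             weights=[2.0, 1.0, 0.5, 1.0, 1.0]))
```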
Abyaneh, M H; Wildman, R D; Ashcroft, I A; Ruiz, P D
2013-11-01
An analysis of the material properties of porcine corneas has been performed. A simple stress relaxation test was performed to determine the viscoelastic properties and a rheological model was built based on the Generalized Maxwell (GM) approach. A validation experiment using nano-indentation showed that an isotropic GM model was insufficient for describing the corneal material behaviour when exposed to a complex stress state. A new technique was proposed for determining the properties, using a combination of nano-indentation experiment, an isotropic and orthotropic GM model and inverse finite element method. The good agreement using this method suggests that this is a promising technique for measuring material properties in vivo and further work should focus on the reliability of the approach in practice. © 2013 Elsevier Ltd. All rights reserved.
Field Science Ethnography: Methods For Systematic Observation on an Expedition
NASA Technical Reports Server (NTRS)
Clancey, William J.; Clancy, Daniel (Technical Monitor)
2001-01-01
The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.
Combined non-parametric and parametric approach for identification of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz
2018-03-01
Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir
We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
Combined qualitative and quantitative research designs.
Seymour, Jane
2012-12-01
Mixed methods research designs have been recognized as important in addressing complexity and are recommended particularly in the development and evaluation of complex interventions. This article reports a review of studies in palliative care published between 2010 and March 2012 that combine qualitative and quantitative approaches. A synthesis of approaches to mixed methods research taken in 28 examples of published research studies of relevance to palliative and supportive care is provided, using a typology based on a classic categorization put forward in 1992. Mixed-method studies are becoming more frequently employed in palliative care research and resonate with the complexity of the palliative care endeavour. Undertaking mixed methods research requires a sophisticated understanding of the research process and recognition of some of the underlying complexities encountered when working with different traditions and perspectives on issues of: sampling, validity, reliability and rigour, different sources of data and different data collection and analysis techniques.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
A central goal of human genetics is to identify and characterize susceptibility genes for common complex human diseases. An important challenge in this endeavor is the modeling of gene-gene interaction or epistasis that can result in non-additivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as a machine learning alternative to parametric logistic regression for detecting interactions in the absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modeling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using a Fisher's Exact Test rather than a predetermined threshold. The advantage of this approach is that only those genotype combinations that are determined to be statistically significant are considered in the MDR analysis. We use two simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status. We show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire. PMID:21091664
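The core RMDR filtering step, keeping only genotype cells whose case/control split passes Fisher's exact test, can be sketched as follows. The counts, the alpha level, and the high/low-risk labeling rule below are illustrative assumptions, not values from the paper.

```python
from scipy.stats import fisher_exact

# Hedged sketch of the RMDR idea: retain only genotype combinations whose
# case/control split is significant by Fisher's exact test, instead of
# labeling every cell high/low risk by a fixed threshold.

def significant_cells(cells, n_cases, n_controls, alpha=0.05):
    """cells: dict mapping genotype combo -> (cases_in_cell, controls_in_cell)."""
    keep = {}
    for combo, (a, b) in cells.items():
        # 2x2 table: this cell versus the rest of the sample
        table = [[a, b], [n_cases - a, n_controls - b]]
        _, p = fisher_exact(table)
        if p < alpha:
            ratio = a / max(b, 1)
            keep[combo] = "high-risk" if ratio > n_cases / n_controls else "low-risk"
    return keep

cells = {("AA", "BB"): (30, 5), ("Aa", "Bb"): (12, 14)}
print(significant_cells(cells, n_cases=100, n_controls=100))
```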
Experimental Design for Multi-drug Combination Studies Using Signaling Networks
Huang, Hengzhen; Fang, Hong-Bin; Tan, Ming T.
2017-01-01
Combinations of multiple drugs are an important approach to maximize the chance of therapeutic success by inhibiting multiple pathways/targets. Analytic methods for studying drug combinations have received increasing attention because major advances in biomedical research have made available large numbers of potential agents for testing. The preclinical experiment on multi-drug combinations plays a key role in (especially cancer) drug development because of the complex nature of the disease and the need to reduce development time and costs. Despite recent progress in statistical methods for assessing drug interaction, there is an acute lack of methods for designing experiments on multi-drug combinations. The number of combinations grows exponentially with the number of drugs and dose levels, quickly precluding laboratory testing. Utilizing experimental dose-response data of single drugs and a few combinations, along with pathway/network information, to obtain an in silico estimate of the functional structure of the dose-response relationship, we propose in this paper an optimal design that allows exploration of the dose-effect surface with the smallest possible sample size. Simulation studies show that the proposed methods perform well. PMID:28960231
Combining Approach in Stages with Least Squares for fits of data in hyperelasticity
NASA Astrophysics Data System (ADS)
Beda, Tibi
2006-10-01
The present work concerns a method of continuous approximation by blocks of a continuous function; a method of approximation combining the Approach in Stages with finite-domain Least Squares. An identification procedure by sub-domains is used: basic generating functions are determined step by step, permitting their weighting effects to be assessed. This procedure allows one to control the signs and, to some extent, the optimal values of the estimated parameters, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).
Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Jane, Archana P.; Pund, Mukesh A.
2012-03-01
The growing need for handwritten Marathi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth shapes of characters. Combining both of these approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in handwritten character recognition. The approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100%) and expected outcomes.
Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V.; ...
2016-07-28
Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. But, such an approach is rather complex in accurately determining the pure contribution of ferroelectricity to the PFM. We suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of frequency dependent ac amplitude sweep with combination of hysteresis loops in PFM. This combined study through experimental and theoretical approaches verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
A novel visual saliency detection method for infrared video sequences
NASA Astrophysics Data System (ADS)
Wang, Xin; Zhang, Yuzhen; Ning, Chen
2017-12-01
Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit greatly from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
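A minimal sketch of the two fusion ingredients follows. The symmetric-difference cue is a direct reading of the description above; the adaptive weighting rule (favoring the more concentrated map) is an assumption for illustration and may differ from the paper's exact fusion strategy.

```python
import numpy as np

def norm01(x, eps=1e-8):
    # rescale a saliency map to [0, 1]
    return (x - x.min()) / (x.max() - x.min() + eps)

def sym_diff(prev, cur, nxt):
    # multi-frame symmetric difference: motion present in both frame pairs
    return np.minimum(np.abs(cur - prev), np.abs(nxt - cur))

def fuse(spatial, temporal, eps=1e-8):
    s, t = norm01(spatial), norm01(temporal)
    # assumed adaptive weight: favor the map whose energy is more concentrated
    ws = s.max() / (s.mean() + eps)
    wt = t.max() / (t.mean() + eps)
    alpha = ws / (ws + wt)
    return alpha * s + (1 - alpha) * t
```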
Unidimensional and Bidimensional Approaches to Measuring Acculturation.
Shin, Cha-Nam; Todd, Michael; An, Kyungeh; Kim, Wonsun Sunny
2017-08-01
Researchers easily overlook the complexity of acculturation measurement in research. This study elaborates on the shortcomings of unidimensional approaches to conceptualizing acculturation and highlights the importance of using bidimensional approaches in health research. We conducted a secondary data analysis on acculturation measures and eating habits obtained from 261 Korean American adults in a Midwestern city. Bidimensional approaches better conceptualized acculturation and explained more of the variance in eating habits than did unidimensional approaches. Bidimensional acculturation measures combined with appropriate analytical methods, such as cluster analysis, are recommended in health research because they provide a more comprehensive understanding of acculturation and its association with health behaviors than do other methods.
A novel deep learning approach for classification of EEG motor imagery signals.
Tabar, Yousef Rezaei; Halici, Ugur
2017-02-01
Signal classification is an important issue in brain computer interface (BCI) systems. Deep learning approaches have been used successfully in many recent studies to learn features and classify different types of data. However, the number of studies that employ these approaches on BCI applications is very limited. In this study we aim to use deep learning methods to improve the classification performance of EEG motor imagery signals. We investigate convolutional neural networks (CNN) and stacked autoencoders (SAE) to classify EEG motor imagery signals. A new form of input is introduced to combine time, frequency and location information extracted from the EEG signal, and it is used in a CNN having one 1D convolutional layer and one max-pooling layer. We also propose a new deep network combining CNN and SAE; in this network, the features extracted by the CNN are classified through the deep SAE network. The classification performance obtained by the proposed method on BCI competition IV dataset 2b in terms of kappa value is 0.547, a 9% improvement over the winning algorithm of the competition. Our results show that deep learning methods provide better classification performance compared to other state-of-the-art approaches. These methods can be applied successfully to BCI systems where the amount of data is large due to daily recording.
Computational modeling of RNA 3D structures, with the aid of experimental restraints
Magnus, Marcin; Matelska, Dorota; Łach, Grzegorz; Chojnowski, Grzegorz; Boniecki, Michal J; Purta, Elzbieta; Dawson, Wayne; Dunin-Horkawicz, Stanislaw; Bujnicki, Janusz M
2014-01-01
In addition to mRNAs whose primary function is transmission of genetic information from DNA to proteins, numerous other classes of RNA molecules exist, which are involved in a variety of functions, such as catalyzing biochemical reactions or performing regulatory roles. In analogy to proteins, the function of RNAs depends on their structure and dynamics, which are largely determined by the ribonucleotide sequence. Experimental determination of high-resolution RNA structures is both laborious and difficult, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that simulate either the physical process of RNA structure formation (“Greek science” approach) or utilize information derived from known structures of other RNA molecules (“Babylonian science” approach). All computational methods suffer from various limitations that make them generally unreliable for structure prediction of long RNA sequences. However, in many cases, the limitations of computational and experimental methods can be overcome by combining these two complementary approaches with each other. In this work, we review computational approaches for RNA structure prediction, with emphasis on implementations (particular programs) that can utilize restraints derived from experimental analyses. We also list experimental approaches, whose results can be relatively easily used by computational methods. Finally, we describe case studies where computational and experimental analyses were successfully combined to determine RNA structures that would remain out of reach for each of these approaches applied separately. PMID:24785264
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
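Since the abstract leans on FORM, a minimal Hasofer-Lind-Rackwitz-Fiessler (HLRF) iteration in standard normal space is sketched below. The limit-state function and its gradient are toy placeholders; the paper's rainfall-runoff limit state is not reproduced here.

```python
import numpy as np

# Minimal HLRF iteration for FORM. Finds the design point u* on g(u) = 0
# closest to the origin; beta = ||u*|| is the reliability index.

def hlrf(g, grad_g, n_dim, tol=1e-6, max_iter=100):
    u = np.zeros(n_dim)                       # start at the origin
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        u_new = (gr @ u - gv) / (gr @ gr) * gr
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# toy linear limit state: g(u) = 3 - u1 - u2  ->  beta = 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: np.array([-1.0, -1.0])
_, beta = hlrf(g, grad, 2)
print(beta)                                   # ~2.121
```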
An Interactive Approach to Learning and Teaching in Visual Arts Education
ERIC Educational Resources Information Center
Tomljenovic, Zlata
2015-01-01
The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative…
Use of a Colony of Cooperating Agents and MAPLE To Solve the Traveling Salesman Problem.
ERIC Educational Resources Information Center
Guerrieri, Bruno
This paper reviews an approach for finding optimal solutions to the traveling salesman problem, a well-known problem in combinatorial optimization, and describes implementing the approach using the MAPLE computer algebra system. The method employed in this approach to the problem is similar to the way ant colonies manage to establish shortest…
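The ant-colony idea referenced above (pheromone-guided tour construction with evaporation) can be sketched compactly. This is a generic ant-colony-optimization sketch, not the paper's MAPLE implementation, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)
tau = np.ones((n, n))                        # pheromone matrix
alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 20

best_len, best_tour = np.inf, None
for _ in range(100):
    tours = []
    for _ in range(n_ants):
        tour = [0]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, bool)
            mask[tour] = False               # exclude visited cities
            p = (tau[i] ** alpha) * (1.0 / dist[i]) ** beta
            p = np.where(mask, p, 0.0)
            p /= p.sum()
            tour.append(rng.choice(n, p=p))  # probabilistic next city
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1 - rho)                         # evaporation
    for length, tour in tours:               # deposit proportional to quality
        for k in range(n):
            tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
print(best_len)
```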
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P
2017-03-01
Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, also the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems and to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
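Step (2) of the approach, ranking alternatives by cost versus risk reduction, reduces to a cost-effectiveness ratio. The sketch below uses invented numbers purely for illustration; in the paper the risk reduction comes from a probabilistic fault tree of the full source-to-tap system.

```python
# Hedged illustration of cost-effectiveness ranking. All figures are made up.

alternatives = {
    # name: (annual cost, risk reduction, e.g. expected failure-minutes/year)
    "new raw-water source": (120_000, 400.0),
    "UV disinfection":      (60_000,  250.0),
    "enhanced monitoring":  (20_000,   40.0),
}

# sort by cost per unit of risk reduction (lower ratio = more cost-effective)
ranked = sorted(alternatives.items(), key=lambda kv: kv[1][0] / kv[1][1])
for name, (cost, dr) in ranked:
    print(f"{name:24s} cost/effect = {cost / dr:8.0f}")
```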
ERIC Educational Resources Information Center
Powell, Heather; Mihalas, Stephanie; Onwuegbuzie, Anthony J.; Suldo, Shannon; Daley, Christine E.
2008-01-01
This article illustrates the utility of mixed methods research (i.e., combining quantitative and qualitative techniques) to the field of school psychology. First, the use of mixed methods approaches in school psychology practice is discussed. Second, the mixed methods research process is described in terms of school psychology research. Third, the…
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimization in extraction, namely, Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective depending on the target of the investigation for the response variables; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
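The GLUE workflow named above can be sketched end to end: Latin hypercube sampling over feasible ranges, a Monte Carlo run of the response model, and retention of "behavioural" samples meeting a threshold criterion. The yield model and ranges below are invented placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, ranges):
    # one stratified permutation per dimension, jittered within each stratum
    d = len(ranges)
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.random((n, d))) / n
    lo, hi = np.array(ranges).T
    return lo + u * (hi - lo)

# e.g. extraction temperature (C) and time (min); yield model is illustrative
X = latin_hypercube(2000, [(30, 80), (5, 60)])
yield_ = 0.5 * X[:, 0] - 0.004 * X[:, 0] ** 2 + 0.2 * X[:, 1] \
         + rng.normal(0, 1, len(X))

# threshold criterion: keep the top decile as "behavioural" parameter sets
behavioural = X[yield_ > np.quantile(yield_, 0.9)]
print(behavioural.min(0), behavioural.max(0))   # retained parameter ranges
```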
Stable and low diffusive hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1992-01-01
We introduce in this paper a new concept for upwinding: Hybrid Upwind Splitting (HUS). This original strategy for upwinding is achieved by combining the two previously existing approaches, Flux Vector Splitting (FVS) and Flux Difference Splitting (FDS), while retaining their own interesting features. Indeed, our approach yields upwind methods that share the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the capture of linear waves. We describe here some examples of such HUS methods obtained by hybridizing the Osher approach with FVS schemes. Numerical illustrations are displayed and demonstrate, in particular, the relevance of the proposed HUS methods for viscous calculations.
[HIV prevention program for young people--the WYSH Project as a model of "combination prevention"].
Ono-Kihara, Masako
2010-03-01
In the face of a still-growing HIV pandemic, unsuccessful efforts to develop biomedical control measures, and the failure of cognitive-behavioral approaches to show sustained society-level effectiveness, behavioral strategy is now expected to evolve into structural prevention ("combination prevention") that involves multiple behavioral goals and multilevel approaches. The WYSH Project is a combination prevention project for youth developed through a socio-epidemiological approach that integrates epidemiology with social sciences such as social marketing and mixed methods. The WYSH Project includes mass education programs for youth in schools and programs for out-of-school youth through cyber networks and peer communication. Started in 2002, it expanded nationwide with support from related ministries and parent-teacher associations and has grown into the single largest youth prevention project in Japan.
A Hybrid Supervised/Unsupervised Machine Learning Approach to Solar Flare Prediction
NASA Astrophysics Data System (ADS)
Benvenuto, Federico; Piana, Michele; Campi, Cristina; Massone, Anna Maria
2018-01-01
This paper introduces a novel method for flare forecasting, combining prediction accuracy with the ability to identify the most relevant predictive variables. This result is obtained by means of a two-step approach: first, a supervised regularization method for regression, namely, LASSO is applied, where a sparsity-enhancing penalty term allows the identification of the significance with which each data feature contributes to the prediction; then, an unsupervised fuzzy clustering technique for classification, namely, Fuzzy C-Means, is applied, where the regression outcome is partitioned through the minimization of a cost function and without focusing on the optimization of a specific skill score. This approach is therefore hybrid, since it combines supervised and unsupervised learning; realizes classification in an automatic, skill-score-independent way; and provides effective prediction performances even in the case of imbalanced data sets. Its prediction power is verified against NOAA Space Weather Prediction Center data, using as a test set, data in the range between 1996 August and 2010 December and as training set, data in the range between 1988 December and 1996 June. To validate the method, we computed several skill scores typically utilized in flare prediction and compared the values provided by the hybrid approach with the ones provided by several standard (non-hybrid) machine learning methods. The results showed that the hybrid approach performs classification better than all other supervised methods and with an effectiveness comparable to the one of clustering methods; but, in addition, it provides a reliable ranking of the weights with which the data properties contribute to the forecast.
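The two-step pipeline above (sparse regression, then fuzzy clustering of the regression output) can be sketched as follows. The data are synthetic, the LASSO penalty is arbitrary, and the small Fuzzy C-Means loop is a generic textbook implementation, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = 2.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(0, 0.5, 500)  # 2 relevant features

lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero features:", np.flatnonzero(lasso.coef_))      # feature ranking

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    # standard FCM on a 1-D signal (the regression outcome)
    x = x.reshape(-1, 1)
    centers = np.quantile(x, np.linspace(0.1, 0.9, c)).reshape(-1, 1)
    for _ in range(iters):
        d = np.abs(x - centers.T) + 1e-12                    # (n, c) distances
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                    # memberships
        centers = ((u ** m).T @ x) / (u ** m).sum(axis=0).reshape(-1, 1)
    return u, centers

u, _ = fuzzy_cmeans(lasso.predict(X))
labels = u.argmax(axis=1)   # crisp "flare" / "no-flare" classes
```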
Meletiadis, Joseph; Mouton, Johan W.; Meis, Jacques F. G. M.; Verweij, Paul E.
2003-01-01
The in vitro interaction between terbinafine and the azoles voriconazole, miconazole, and itraconazole against five clinical Scedosporium prolificans isolates after 48 and 72 h of incubation was tested by a microdilution checkerboard (eight-by-twelve) technique. The antifungal effects of the drugs alone and in combination on the fungal biomass as well as on the metabolic activity of fungi were measured using a spectrophotometric method and two colorimetric methods, based on the lowest drug concentrations showing 75 and 50% growth inhibition (MIC-1 and MIC-2, respectively). The nature and the intensity of the interactions were assessed using a nonparametric approach (fractional inhibitory concentration [FIC] index model) and a fully parametric response surface approach (Greco model) of the Loewe additivity (LA) no-interaction theory, as well as nonparametric (Prichard model) and semiparametric response surface approaches of the Bliss independence (BI) no-interaction theory. Statistically significant synergy was found between each of the three azoles and terbinafine in all cases, although with different intensities. A 27- to 64-fold and 16- to 90-fold reduction of the geometric mean of the azole and terbinafine MICs, respectively, was observed when they were combined, resulting in FIC indices of <1 to 0.02. Using the MIC-1, higher levels of synergy were obtained, which were more consistent between the two incubation periods than those obtained using the MIC-2. The strongest synergy among the azoles was found with miconazole using the BI-based models and with voriconazole using the LA-based models. The synergistic effects on both fungal growth and metabolic activity were more potent after 72 h of incubation. Fully parametric approaches in combination with the modified colorimetric method might prove useful for testing the in vitro interaction of antifungal drugs against filamentous fungi. PMID:12499177
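The FIC index used above has a simple closed form. The MIC values in the sketch are invented, and the cutoffs (synergy at FIC ≤ 0.5, antagonism above 4) follow common checkerboard conventions, which may differ from the exact criteria applied in the study.

```python
# Hedged sketch of the FIC-index calculation for a checkerboard assay.

def fic_index(mic_a_alone, mic_b_alone, mic_a_comb, mic_b_comb):
    return mic_a_comb / mic_a_alone + mic_b_comb / mic_b_alone

fic = fic_index(mic_a_alone=16, mic_b_alone=8, mic_a_comb=0.5, mic_b_comb=0.25)
verdict = ("synergy" if fic <= 0.5
           else "antagonism" if fic > 4
           else "no interaction")
print(fic, verdict)   # 0.0625 synergy
```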
Optimal Combinations of Diagnostic Tests Based on AUC.
Huang, Xin; Qin, Gengsheng; Fang, Yixin
2011-06-01
When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
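The core objective, the empirical AUC of a linear score, can be illustrated with a Mann-Whitney-style estimate and a coarse search over combination weights for two tests. The data are synthetic; the paper's nonparametric coefficient estimation and its cross-validated bias correction are not reproduced here, only the apparent (re-substitution) AUC that the paper warns is optimistic.

```python
import numpy as np

def auc(score_pos, score_neg):
    # empirical AUC via pairwise comparisons (Mann-Whitney statistic)
    diff = score_pos[:, None] - score_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(0)
pos = rng.normal([1.0, 0.5], 1.0, (200, 2))   # diseased subjects, 2 tests
neg = rng.normal([0.0, 0.0], 1.0, (200, 2))   # healthy subjects

best_auc, best_w = -1.0, None
for th in np.linspace(0, np.pi, 181):         # coarse grid over directions
    w = np.array([np.cos(th), np.sin(th)])
    a = auc(pos @ w, neg @ w)
    if a > best_auc:
        best_auc, best_w = a, w
print(f"apparent AUC {best_auc:.3f} at weights {np.round(best_w, 2)}")
```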
Di Fiore, Vincenzo; Cavuoto, Giuseppe; Punzo, Michele; Tarallo, Daniela; Casazza, Marco; Guarriello, Silvio Marco; Lega, Massimiliano
2017-10-01
This paper describes an approach to detect and investigate the main characteristics of a solid waste landfill through the integration of geological, geographical and geophysical methods. In particular, a multi-temporal analysis of the landfill's morphological evolution was carried out using aerial and satellite photos, since there were no geological or geophysical data referring to the study area. Subsequently, a surface geophysical prospection was performed using geoelectric and geomagnetic methods. In particular, the combination of electrical resistivity, induced polarization and magnetic measurements removed some of the uncertainties generally associated with a separate utilization of these techniques. This approach was successfully tested to support the Prosecutor Office of Salerno (southern Italy) during a specific investigation of an illegal landfill. All the collected field data supported the reconstruction of the site-specific history, while the real quarry geometry and site geology were defined. Key elements of novelty of this method are the combination and integration of different methodological approaches, such as the parallel and combined use of satellite, aerial and in-situ data, which were validated in a real investigation and revealed the effectiveness of this strategy. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hoover, Mary Rhodes
1982-01-01
The Culturally Appropriate Teaching (C.A.T.) method combines the "Back to Basics" paradigm with a culturally oriented approach and has proved to be successful in Black colleges and adult education programs. The C.A.T. method improves the reading levels of students by two years per semester and gives them standard English as a skill in one or two…
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.
Novel approaches based on ultrasound for treatment of wastewater containing potassium ferrocyanide.
Jawale, Rajashree H; Tandale, Akash; Gogate, Parag R
2017-09-01
Industrial wastewaters containing biorefractory compounds like cyanide pose significant environmental problems, attributed to the fact that conventional methods have limited effectiveness; hence developing efficient treatment approaches is an important requirement. The present work investigates, for the first time, the novel treatment approach of ultrasound (US) combined with advanced oxidation techniques for the degradation of potassium ferrocyanide (KFC). An ultrasonic bath equipped with a longitudinal horn (1 kW rated power and 25 kHz frequency) has been used. The effect of initial pH (2-9) on the progress of degradation was investigated first; subsequently, using the optimized pH, the effect of addition of hydrogen peroxide (KFC:H2O2 ratio varied over the range 1:0.5-1:5) and of TiO2 in the presence of H2O2 (1:1 ratio by weight of TiO2) as process intensifying approaches was studied. Combination of ultrasonic irradiation with ozone (O3) (100-400 mg/h) and ultraviolet irradiation (UV) has also been investigated. Use of the combination of US with H2O2, H2O2+TiO2 and ozone resulted in extents of KFC degradation of 54.2%, 74.82% and 82.41%, respectively. Combination of US with both UV and ozone was established to be the best approach, yielding 92.47% degradation. The study also focused on establishing kinetic rate constants for all the treatment approaches, which revealed that all the approaches followed first order kinetics, with higher rate constants for the combination approaches. Overall, it has been conclusively established that ultrasound based combined treatment schemes are very effective for the treatment of KFC containing wastewaters. Copyright © 2017 Elsevier B.V. All rights reserved.
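For first-order kinetics, ln(C0/C) = kt, so the rate constant is the slope of ln(C0/C) against time. The sample concentrations below are invented for illustration; only the fitting procedure is shown.

```python
import numpy as np

t = np.array([0, 15, 30, 45, 60, 90], float)     # time, min
C = np.array([100, 82, 66, 55, 44, 30], float)   # KFC concentration, mg/L

k, _ = np.polyfit(t, np.log(C[0] / C), 1)        # slope = first-order rate constant
print(f"k = {k:.4f} 1/min, half-life = {np.log(2) / k:.1f} min")
```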
Transcaruncular Approach for Treatment of Medial Wall and Large Orbital Blowout Fractures.
Nguyen, Dennis C; Shahzad, Farooq; Snyder-Warwick, Alison; Patel, Kamlesh B; Woo, Albert S
2016-03-01
We evaluate the safety and efficacy of the transcaruncular approach for reconstruction of medial orbital wall fractures and the combined transcaruncular-transconjunctival approach for reconstruction of large orbital defects involving the medial wall and floor. A retrospective review of the clinical and radiographic data of patients who underwent either a transcaruncular or a combined transcaruncular-transconjunctival approach by a single surgeon for orbital fractures between June 2007 and June 2013 was undertaken. Seven patients with isolated medial wall fractures underwent a transcaruncular approach, and nine patients with combined medial wall and floor fractures underwent a transcaruncular-transconjunctival approach with a lateral canthotomy. Reconstruction was performed using a porous polyethylene implant. All patients with isolated medial wall fractures presented with enophthalmos. In the combined medial wall and floor group, five out of eight patients had enophthalmos, with two also demonstrating hypoglobus. The size of the medial wall defect on preoperative computed tomography (CT) scan ranged from 2.6 to 4.6 cm²; the defect size of combined medial wall and floor fractures was 4.5 to 12.7 cm². Of the 11 patients in whom postoperative CT scans were obtained, all were noted to have acceptable placement of the implant. All patients had correction of enophthalmos and hypoglobus. One complication was noted, a retrobulbar hematoma that developed 2 days postoperatively. The transcaruncular approach is a safe and effective method for reconstruction of medial orbital wall fractures. Even large fractures involving the orbital medial wall and floor can be adequately exposed and reconstructed with a combined transcaruncular-transconjunctival-lateral canthotomy approach. The level of evidence of this study is IV (case series with pre/posttest).
Biomedical discovery acceleration, with applications to craniofacial development.
Leach, Sonia M; Tipney, Hannah; Feng, Weiguo; Baumgartner, William A; Kasliwal, Priyanka; Schuyler, Ronald P; Williams, Trevor; Spritz, Richard A; Hunter, Lawrence
2009-03-01
The profusion of high-throughput instruments and the explosion of new results in the scientific literature, particularly in molecular biomedicine, is both a blessing and a curse to the bench researcher. Even knowledgeable and experienced scientists can benefit from computational tools that help navigate this vast and rapidly evolving terrain. In this paper, we describe a novel computational approach to this challenge, a knowledge-based system that combines reading, reasoning, and reporting methods to facilitate analysis of experimental data. Reading methods extract information from external resources, either by parsing structured data or using biomedical language processing to extract information from unstructured data, and track knowledge provenance. Reasoning methods enrich the knowledge that results from reading by, for example, noting two genes that are annotated to the same ontology term or database entry. Reasoning is also used to combine all sources into a knowledge network that represents the integration of all sorts of relationships between a pair of genes, and to calculate a combined reliability score. Reporting methods combine the knowledge network with a congruent network constructed from experimental data and visualize the combined network in a tool that facilitates the knowledge-based analysis of that data. An implementation of this approach, called the Hanalyzer, is demonstrated on a large-scale gene expression array dataset relevant to craniofacial development. The use of the tool was critical in the creation of hypotheses regarding the roles of four genes never previously characterized as involved in craniofacial development; each of these hypotheses was validated by further experimental work.
NASA Astrophysics Data System (ADS)
Adiga, Shreemathi; Saraswathi, A.; Praveen Prakash, A.
2018-04-01
This paper presents an interlinking approach of new Triangular Fuzzy Cognitive Maps (TrFCM) and the Combined Effective Time Dependent (CETD) matrix to rank the problems of transgender people. Section one begins with an introduction that briefly describes the scope of Triangular Fuzzy Cognitive Maps (TrFCM) and the CETD matrix. Section two analyses the causes of the problems faced by transgender people using the Triangular Fuzzy Cognitive Maps (TrFCM) method and performs the calculations using data collected among transgender respondents. Section 3 identifies the main causes of these problems. Section 4 describes Charles Spearman's coefficient of rank correlation method for interlinking the Triangular Fuzzy Cognitive Maps (TrFCM) method and the CETD matrix. Section 5 presents the results of the study.
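The interlinking step compares the two rankings with Spearman's rank correlation, which is a one-liner in practice. The rank vectors below are illustrative placeholders, not the paper's survey results.

```python
from scipy.stats import spearmanr

trfcm_rank = [1, 2, 3, 4, 5, 6]   # ranking of problems from TrFCM
cetd_rank  = [2, 1, 3, 5, 4, 6]   # ranking of the same problems from CETD

rho, p = spearmanr(trfcm_rank, cetd_rank)
print(f"rho = {rho:.3f}, p = {p:.3f}")   # high rho -> the two methods agree
```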
Mapping the pathways of resistance to targeted therapies
Wood, Kris C.
2015-01-01
Resistance substantially limits the depth and duration of clinical responses to targeted anticancer therapies. Through the use of complementary experimental approaches, investigators have revealed that cancer cells can achieve resistance through adaptation or selection driven by specific genetic, epigenetic, or microenvironmental alterations. Ultimately, these diverse alterations often lead to the activation of signaling pathways that, when co-opted, enable cancer cells to survive drug treatments. Recently developed methods enable the direct and scalable identification of the signaling pathways capable of driving resistance in specific contexts. Using these methods, novel pathways of resistance to clinically approved drugs have been identified and validated. By combining systematic resistance pathway mapping methods with studies revealing biomarkers of specific resistance pathways and pharmacological approaches to block these pathways, it may be possible to rationally construct drug combinations that yield more penetrant and lasting responses in patients. PMID:26392071
Scarselli, Franco; Tsoi, Ah Chung; Hagenbuchner, Markus; Noi, Lucia Di
2013-12-01
This paper proposes the combination of two state-of-the-art algorithms for processing graph input data, viz., the probabilistic mapping graph self-organizing map, an unsupervised learning approach, and the graph neural network, a supervised learning approach. We organize these two algorithms in a cascade architecture containing a probabilistic mapping graph self-organizing map and a graph neural network. We show that this combined approach helps to limit the long-term dependency problem that exists when training the graph neural network, resulting in an overall improvement in performance. This is demonstrated in an application to a benchmark problem requiring the detection of spam in a relatively large set of web sites. It is found that the proposed method produces results which reach the state of the art when compared with some of the best results obtained by others using quite different approaches. A particular strength of our method is its applicability to any input domain which can be represented as a graph. Copyright © 2013 Elsevier Ltd. All rights reserved.
Identifying Advanced Technologies for Education's Future.
ERIC Educational Resources Information Center
Moore, Gwendolyn B.; Yin, Robert K.
A study to determine how three advanced technologies might be applied to the needs of special education students helped inspire the development of a new method for identifying such applications. This new method, named the "Hybrid Approach," combines features of the two traditional methods: technology-push and demand-pull. Technology-push involves…
Application of Mixed-Methods Approaches to Higher Education and Intersectional Analyses
ERIC Educational Resources Information Center
Griffin, Kimberly A.; Museus, Samuel D.
2011-01-01
In this article, the authors discuss the utility of combining quantitative and qualitative methods in conducting intersectional analyses. First, they discuss some of the paradigmatic underpinnings of qualitative and quantitative research, and how these methods can be used in intersectional analyses. They then consider how paradigmatic pragmatism…
NASA Astrophysics Data System (ADS)
Sweijen, Thomas; Aslannejad, Hamed; Hassanizadeh, S. Majid
2017-09-01
In studies of two-phase flow in complex porous media it is often desirable to have an estimation of the capillary pressure-saturation curve prior to measurements. Therefore, we compare in this research the capability of three pore-scale approaches in reproducing experimentally measured capillary pressure-saturation curves. To do so, we have generated 12 packings of spheres that are representative of four different glass-bead packings and eight different sand packings, for which we have found experimental data on the capillary pressure-saturation curve in the literature. In generating the packings, we matched the particle size distributions and porosity values of the granular materials. We have used three different pore-scale approaches for generating the capillary pressure-saturation curves of each packing: i) the Pore Unit Assembly (PUA) method in combination with the Mayer and Stowe-Princen (MS-P) approximation for estimating the entry pressures of pore throats, ii) the PUA method in combination with the hemisphere approximation, and iii) the Pore Morphology Method (PMM) in combination with the hemisphere approximation. The three approaches were also used to produce capillary pressure-saturation curves for the coating layer of paper, used in inkjet printing. Curves for such layers are extremely difficult to determine experimentally, due to their very small thickness and the presence of extremely small pores (less than one micrometer in size). Results indicate that the PMM and PUA-hemisphere method give similar capillary pressure-saturation curves, because both methods rely on a hemisphere to represent the air-water interface. The ability of the hemisphere approximation and the MS-P approximation to reproduce correct capillary pressure seems to depend on the type of particle size distribution, with the hemisphere approximation working well for narrowly distributed granular materials.
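The hemisphere approximation named above amounts to a Young-Laplace entry pressure for each throat. The sketch below assumes a zero contact angle (so Pc = 2σ/r), uses a synthetic lognormal throat-radius sample, and treats saturation as the number fraction of uninvaded throats, a crude proxy that ignores pore volumes.

```python
import numpy as np

sigma = 0.072                                   # N/m, air-water surface tension
rng = np.random.default_rng(0)
r = rng.lognormal(np.log(30e-6), 0.4, 5000)     # throat radii, m (synthetic)

pc_entry = 2 * sigma / r                        # Young-Laplace entry pressures, Pa
pc_grid = np.linspace(pc_entry.min(), pc_entry.max(), 50)
# wetting-phase saturation proxy: fraction of throats not yet invaded at Pc
sw = [(pc_entry > pc).mean() for pc in pc_grid]
print(list(zip(np.round(pc_grid[:3]), np.round(sw[:3], 3))))
```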
Rodomonte, Andrea Luca; Montinaro, Annalisa; Bartolomei, Monica
2006-09-11
A measurement result cannot be properly interpreted if not accompanied by its uncertainty. Several methods to estimate uncertainty have been developed, and three in particular were chosen in this work to estimate the uncertainty of the Eu. Ph. chloroquine phosphate assay, a potentiometric titration commonly used in medicinal control laboratories. The well-known error-budget approach (also called bottom-up or step-by-step) described by the ISO Guide to the Expression of Uncertainty in Measurement (GUM) was the first method chosen; it is based on the combination of uncertainty contributions derived directly from the measurement process. The second method employed was the Analytical Methods Committee top-down approach, which estimates uncertainty through reproducibility obtained during inter-laboratory studies. Data for its application were collected in a proficiency testing study carried out by over 50 laboratories throughout Europe. The last method chosen was the one proposed by Barwick and Ellison, which uses a combination of precision, trueness and ruggedness data to estimate uncertainty. These data were collected from a validation process specifically designed for uncertainty estimation. All three approaches presented a distinctive set of advantages and drawbacks in their implementation. An expanded uncertainty of about 1% was assessed for the assay investigated.
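For reference, the GUM error-budget combination for uncorrelated input quantities is the standard propagation formula below (with the usual coverage-factor expansion); this is the general GUM expression, not the specific budget built for the chloroquine assay:

```latex
u_c(y) \;=\; \sqrt{\sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} u^2(x_i)},
\qquad
U \;=\; k \, u_c(y) \quad (k \approx 2 \text{ for } \sim 95\% \text{ coverage})
```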
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely random as it is the case of RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
Innovative visualization and segmentation approaches for telemedicine
NASA Astrophysics Data System (ADS)
Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet
2014-09-01
In health care applications, we obtain, manage, store and communicate high-quality, large-volume image data through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It comprises three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator, image division and reclassification using the Voronoi diagram, and boundary-line refinement using the level set model.
Non-Born-Oppenheimer self-consistent field calculations with cubic scaling
NASA Astrophysics Data System (ADS)
Moncada, Félix; Posada, Edwin; Flores-Moreno, Roberto; Reyes, Andrés
2012-05-01
An efficient nuclear molecular orbital methodology is presented. This approach combines an auxiliary density functional theory for electrons (ADFT) and a localized Hartree product (LHP) representation for the nuclear wave function. A series of test calculations conducted on small molecules showed that the energy and geometry errors introduced by the use of the ADFT and LHP approximations are small and comparable to those obtained by the use of electronic ADFT. In addition, sample calculations performed on (HF)n chains disclosed that the combined ADFT/LHP approach scales cubically with system size (n), as opposed to the quartic scaling of Hartree-Fock/LHP or DFT/LHP methods. Even for medium-size molecules the improved scaling of the ADFT/LHP approach resulted in speedups of at least 5x with respect to Hartree-Fock/LHP calculations. The ADFT/LHP method opens up the possibility of studying nuclear quantum effects on large systems that would otherwise be impractical.
Reflective random indexing for semi-automatic indexing of the biomedical literature.
Vasuki, Vidya; Cohen, Trevor
2010-10-01
The rapid growth of biomedical literature is evident in the increasing size of the MEDLINE research database. Medical Subject Headings (MeSH), a controlled set of keywords, are used to index all the citations contained in the database to facilitate search and retrieval. This volume of citations calls for efficient tools to assist indexers at the US National Library of Medicine (NLM). Currently, the Medical Text Indexer (MTI) system provides assistance by recommending MeSH terms based on the title and abstract of an article using a combination of distributional and vocabulary-based methods. In this paper, we evaluate a novel approach toward indexer assistance by using nearest neighbor classification in combination with Reflective Random Indexing (RRI), a scalable alternative to the established methods of distributional semantics. On a test set provided by the NLM, our approach significantly outperforms the MTI system, suggesting that the RRI approach would make a useful addition to the current methodologies.
Complementary approaches to diagnosing marine diseases: a union of the modern and the classic
Burge, Colleen A.; Friedman, Carolyn S.; Getchell, Rodman; House, Marcia; Mydlarz, Laura D.; Prager, Katherine C.; Renault, Tristan; Kiryu, Ikunari; Vega-Thurber, Rebecca
2016-01-01
Linking marine epizootics to a specific aetiology is notoriously difficult. Recent diagnostic successes show that marine disease diagnosis requires both modern, cutting-edge technology (e.g. metagenomics, quantitative real-time PCR) and more classic methods (e.g. transect surveys, histopathology and cell culture). Here, we discuss how this combination of traditional and modern approaches is necessary for rapid and accurate identification of marine diseases, and emphasize how sole reliance on any one technology or technique may lead disease investigations astray. We present diagnostic approaches at different scales, from the macro (environment, community, population and organismal scales) to the micro (tissue, organ, cell and genomic scales). We use disease case studies from a broad range of taxa to illustrate diagnostic successes from combining traditional and modern diagnostic methods. Finally, we recognize the need for increased capacity of centralized databases, networks, data repositories and contingency plans for diagnosis and management of marine disease. PMID:26880839
Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F
2016-10-07
Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
Tkachenko, Pavlo; Kriukova, Galyna; Aleksandrova, Marharyta; Chertov, Oleg; Renard, Eric; Pereverzyev, Sergei V
2016-10-01
Nocturnal hypoglycemia (NH) is common in patients with insulin-treated diabetes. Despite the risk associated with NH, there are only a few methods aiming at the prediction of such events based on intermittent blood glucose monitoring data and none has been validated for clinical use. Here we propose a method of combining several predictors into a new one that will perform at the level of the best involved one, or even outperform all individual candidates. The idea of the method is to use a recently developed strategy for aggregating ranking algorithms. The method has been calibrated and tested on data extracted from clinical trials, performed in the European FP7-funded project DIAdvisor. Then we have tested the proposed approach on other datasets to show the portability of the method. This feature of the method allows its simple implementation in the form of a diabetic smartphone app. On the considered datasets the proposed approach exhibits good performance in terms of sensitivity, specificity and predictive values. Moreover, the resulting predictor automatically performs at the level of the best involved method or even outperforms it. We propose a strategy for a combination of NH predictors that leads to a method exhibiting a reliable performance and the potential for everyday use by any patient who performs self-monitoring of blood glucose. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
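One simple way to aggregate heterogeneous predictors, in the spirit of the rank-aggregation strategy mentioned above, is to convert each predictor's risk score to a normalized rank and average the ranks. This sketch is a generic illustration with synthetic scores and an arbitrary alert threshold; the paper's specific aggregation strategy and calibration are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.random(100)          # NH risk scores from predictor A
scores_b = rng.random(100)          # NH risk scores from predictor B

def to_rank(x):
    # normalized ranks in [0, 1]; ties are ignored for simplicity
    r = np.empty_like(x)
    r[np.argsort(x)] = np.arange(len(x))
    return r / (len(x) - 1)

combined = 0.5 * to_rank(scores_a) + 0.5 * to_rank(scores_b)
alert = combined > 0.8              # flag the highest-risk nights
print(alert.sum(), "nights flagged")
```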
Molins, C; Hogendoorn, E A; Dijkman, E; Heusinkveld, H A; Baumann, R A
2000-02-11
The combination of microwave-assisted solvent extraction (MASE) and reversed-phase liquid chromatography (RPLC) with UV detection has been investigated for the efficient determination of phenylurea herbicides in soils, involving both the single-residue method (SRM) approach (linuron) and the multi-residue method (MRM) approach (monuron, monolinuron, isoproturon, metobromuron, diuron and linuron). Critical parameters of MASE, viz. extraction temperature, water content and extraction solvent, were varied in order to optimise recoveries of the analytes while simultaneously minimising co-extraction of soil interferences. The optimised extraction procedure was applied to different types of soil with an organic carbon content of 0.4-16.7%. Besides freshly spiked soil samples, method validation included the analysis of samples with aged residues. A comparative study of the applicability of RPLC-UV with and without column switching for the processing of uncleaned extracts was carried out. For some of the tested analyte/matrix combinations the one-column approach (LC mode) is feasible. In comparison to LC, coupled-column LC (LC-LC mode) provides higher selectivity in single-residue analysis (linuron) and, although less pronounced, in multi-residue analysis (all six phenylurea herbicides); the clean-up performance of LC-LC improves both analysis time and sample throughput. In the MRM approach the developed procedure involving MASE and LC-LC-UV provided acceptable recoveries (range 80-120%) and RSDs (<12%) at levels of 10 microg/kg (n=9) and 50 microg/kg (n=7), respectively, for most analyte/matrix combinations. Recoveries from aged residue samples spiked at a level of 100 microg/kg (n=7) ranged from 41% to 113%, depending on the analyte/soil type combination, with RSDs ranging from 1% to 35%. In the SRM approach the developed LC-LC procedure was applied to the determination of linuron in 28 sandy soil samples collected in a field study. Linuron could be determined in soil with a limit of quantitation of 10 microg/kg.
Eng, K.; Milly, P.C.D.; Tasker, Gary D.
2007-01-01
To facilitate estimation of streamflow characteristics at an ungauged site, hydrologists often define a region of influence containing gauged sites hydrologically similar to the estimation site. This region can be defined either in geographic space or in the space of the variables that are used to predict streamflow (predictor variables). These approaches are complementary, and a combination of the two may be superior to either. Here we propose a hybrid region-of-influence (HRoI) regression method that combines the two approaches. The new method was applied with streamflow records from 1,091 gauges in the southeastern United States to estimate the 50-year peak flow (Q50). The HRoI approach yielded lower root-mean-square estimation errors and produced fewer extreme errors than either the predictor-variable or geographic region-of-influence approaches. It is concluded, for Q50 in the study region, that similarity with respect to the basin characteristics considered (area, slope, and annual precipitation) is important, but incomplete, and that the consideration of geographic proximity of stations provides a useful surrogate for characteristics that are not included in the analysis. © 2007 ASCE.
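A minimal sketch of hybrid region-of-influence selection, under the assumption that it blends a geographic distance with a distance in standardized predictor space (area, slope, annual precipitation). The blending weight, gauge counts and data are illustrative, and the function name is hypothetical.

```python
import numpy as np

def hybrid_region(xy, attrs, site_xy, site_attrs, alpha=0.5, n=30):
    """Indices of the n gauges nearest under a blend of geographic distance
    and distance in standardized predictor space (alpha=1: purely geographic,
    alpha=0: purely predictor-variable space)."""
    mean, std = attrs.mean(axis=0), attrs.std(axis=0)
    d_att = np.linalg.norm((attrs - mean) / std - (site_attrs - mean) / std, axis=1)
    d_geo = np.linalg.norm(xy - site_xy, axis=1)
    blended = alpha * d_geo / d_geo.max() + (1 - alpha) * d_att / d_att.max()
    return np.argsort(blended)[:n]

# toy data: 100 gauges, 2-D coordinates, 3 basin characteristics
rng = np.random.default_rng(2)
region = hybrid_region(rng.random((100, 2)), rng.random((100, 3)),
                       np.array([0.5, 0.5]), np.array([0.4, 0.6, 0.5]))
# a regional regression for Q50 would then be fitted to the gauges in `region`
```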
A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)
2008-11-01
...retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for...estimating and ranking the opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting the polarity...and a linear combination technique for ranking polar documents. The hybrid model, which utilizes both the lexicon-based approach and the machine learning approach...
NASA Astrophysics Data System (ADS)
Fujimoto, Kazuhiro J.
2012-07-01
A transition-density-fragment interaction (TDFI) method combined with a transfer integral (TI) method is proposed. The TDFI method was previously developed for describing the electronic Coulomb interaction and was applied to excitation-energy transfer (EET) [K. J. Fujimoto and S. Hayashi, J. Am. Chem. Soc. 131, 14152 (2009)] and exciton-coupled circular dichroism spectra [K. J. Fujimoto, J. Chem. Phys. 133, 124101 (2010)]. In the present study, the TDFI method is extended to the exchange interaction and combined with the TI method for application to EET via charge-transfer (CT) states. In this scheme, the overlap correction is also taken into account. To check the TDFI-TI accuracy, several test calculations are performed on an ethylene dimer. As a result, the TDFI-TI method gives a much improved description of the electronic coupling compared with the previous TDFI method. Building on this description, a decomposition analysis is also performed with the TDFI-TI method. The analysis clearly shows a large contribution from the Coulomb interaction in most cases and a significant influence of the CT states at small separation; the exchange interaction is found to be small in this system. The present approach is useful for analyzing and understanding the mechanism of EET.
NASA Astrophysics Data System (ADS)
Errami, Youssef; Obbadi, Abdellatif; Sahnoun, Smail; Ouassaid, Mohammed; Maaroufi, Mohamed
2018-05-01
This paper proposes a Direct Torque Control (DTC) method combined with a Backstepping approach for a Wind Power System (WPS) based on a Permanent Magnet Synchronous Generator (PMSG). In this work, a generator-side converter and a grid-side converter with filter are used as the interface between the wind turbine and the grid. The Backstepping approach demonstrates great performance in the control of complicated nonlinear systems such as WPS. The control method therefore combines DTC, to achieve Maximum Power Point Tracking (MPPT), with the Backstepping approach, to sustain the DC-bus voltage and to regulate the grid-side power factor. In addition, the control strategy is developed in the sense of the Lyapunov stability theorem for the WPS. Simulation results using MATLAB/Simulink validate the effectiveness of the proposed controllers.
Malay sentiment analysis based on combined classification approaches and Senti-lexicon algorithm.
Al-Saffar, Ahmed; Awang, Suryanti; Tao, Hai; Omar, Nazlia; Al-Saiagh, Wafaa; Al-Bared, Mohammed
2018-01-01
Sentiment analysis techniques are increasingly exploited to categorize opinion text into one or more predefined sentiment classes for the creation and automated maintenance of review-aggregation websites. In this paper, a Malay sentiment analysis classification model is proposed to improve classification performance based on semantic orientation and machine learning approaches. First, a total of 2,478 Malay sentiment-lexicon phrases and words are assigned a synonym and stored with the help of more than one Malay native speaker, and the polarity is manually allotted a score. In addition, supervised machine learning approaches and the lexicon knowledge method are combined for Malay sentiment classification, evaluating thirteen features. Finally, three individual classifiers and a combined classifier are used to evaluate the classification accuracy. A wide range of comparative experiments was conducted on a Malay Reviews Corpus (MRC), demonstrating that feature extraction improves the performance of Malay sentiment analysis based on the combined classification. However, the results depend on three factors: the features, the number of features and the classification approach.
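The lexicon-plus-machine-learning combination can be illustrated schematically. In the sketch below (not the paper's pipeline), a toy lexicon polarity score is appended to bag-of-words counts and three base classifiers are combined by majority voting; the corpus and lexicon entries are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

lexicon = {"baik": 1, "bagus": 1, "buruk": -1, "teruk": -1}   # toy Senti-lexicon
docs = ["filem ini bagus", "servis buruk", "sangat baik", "cerita teruk"]
labels = [1, 0, 1, 0]                                          # 1 = positive

vec = CountVectorizer()
X_bow = vec.fit_transform(docs).toarray()
lex = np.array([[sum(lexicon.get(w, 0) for w in d.split())] for d in docs])
X = np.hstack([X_bow, lex - lex.min()])   # shift so MultinomialNB sees >= 0

# three individual classifiers combined by hard (majority) voting
combined = VotingClassifier([("nb", MultinomialNB()),
                             ("lr", LogisticRegression()),
                             ("dt", DecisionTreeClassifier())], voting="hard")
combined.fit(X, labels)
print(combined.predict(X))
```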
Combining in silico and in cerebro approaches for virtual screening and pose prediction in SAMPL4.
Voet, Arnout R D; Kumar, Ashutosh; Berenger, Francois; Zhang, Kam Y J
2014-04-01
The SAMPL challenges provide an ideal opportunity for unbiased evaluation and comparison of different approaches used in computational drug design. During the fourth round of this SAMPL challenge, we participated in the virtual screening and binding pose prediction on inhibitors targeting the HIV-1 integrase enzyme. For virtual screening, we used well known and widely used in silico methods combined with personal in cerebro insights and experience. Regular docking only performed slightly better than random selection, but the performance was significantly improved upon incorporation of additional filters based on pharmacophore queries and electrostatic similarities. The best performance was achieved when logical selection was added. For the pose prediction, we utilized a similar consensus approach that amalgamated the results of the Glide-XP docking with structural knowledge and rescoring. The pose prediction results revealed that docking displayed reasonable performance in predicting the binding poses. However, prediction performance can be improved utilizing scientific experience and rescoring approaches. In both the virtual screening and pose prediction challenges, the top performance was achieved by our approaches. Here we describe the methods and strategies used in our approaches and discuss the rationale of their performances.
ERIC Educational Resources Information Center
O'Halloran, Kay L.; Tan, Sabine; Pham, Duc-Son; Bateman, John; Vande Moere, Andrew
2018-01-01
This article demonstrates how a digital environment offers new opportunities for transforming qualitative data into quantitative data in order to use data mining and information visualization for mixed methods research. The digital approach to mixed methods research is illustrated by a framework which combines qualitative methods of multimodal…
ERIC Educational Resources Information Center
Davis, Eric J.; Pauls, Steve; Dick, Jonathan
2017-01-01
Presented is a project-based learning (PBL) laboratory approach for an upper-division environmental chemistry or quantitative analysis course. In this work, a combined laboratory class of 11 environmental chemistry students developed a method based on published EPA methods for the extraction of dichlorodiphenyltrichloroethane (DDT) and its…
Mapping Mixed Methods Research: Methods, Measures, and Meaning
ERIC Educational Resources Information Center
Wheeldon, J.
2010-01-01
This article explores how concept maps and mind maps can be used as data collection tools in mixed methods research to combine the clarity of quantitative counts with the nuance of qualitative reflections. Based on more traditional mixed methods approaches, this article details how the use of pre/post concept maps can be used to design qualitative…
Pivetta, Tiziana; Isaia, Francesco; Trudu, Federica; Pani, Alessandra; Manca, Matteo; Perra, Daniela; Amato, Filippo; Havel, Josef
2013-10-15
The combination of two or more drugs using multidrug mixtures is a trend in the treatment of cancer. The goal is to search for a synergistic effect and thereby reduce the required dose and inhibit the development of resistance. An advanced model-free approach for data exploration and analysis, based on artificial neural networks (ANN) and experimental design is proposed to predict and quantify the synergism of drugs. The proposed method non-linearly correlates the concentrations of drugs with the cytotoxicity of the mixture, providing the possibility of choosing the optimal drug combination that gives the maximum synergism. The use of ANN allows for the prediction of the cytotoxicity of each combination of drugs in the chosen concentration interval. The method was validated by preparing and experimentally testing the combinations with the predicted highest synergistic effect. In all cases, the data predicted by the network were experimentally confirmed. The method was applied to several binary mixtures of cisplatin and [Cu(1,10-orthophenanthroline)2(H2O)](ClO4)2, Cu(1,10-orthophenanthroline)(H2O)2(ClO4)2 or [Cu(1,10-orthophenanthroline)2(imidazolidine-2-thione)](ClO4)2. The cytotoxicity of the two drugs, alone and in combination, was determined against human acute T-lymphoblastic leukemia cells (CCRF-CEM). For all systems, a synergistic effect was found for selected combinations. © 2013 Elsevier B.V. All rights reserved.
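A hedged sketch of the ANN idea: a small network learns cytotoxicity as a nonlinear function of the two concentrations, and a grid search then locates the mixture with the largest excess over an additive reference. The synthetic response surface and the Bliss-style additivity reference are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
C = rng.uniform(0.0, 1.0, (80, 2))             # concentrations of drugs A and B
bliss = 1 - (1 - 0.6 * C[:, 0]) * (1 - 0.5 * C[:, 1])   # additive reference
effect = bliss + 0.15 * C[:, 0] * C[:, 1]      # synthetic synergistic surface

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(C, effect + 0.01 * rng.normal(size=80))

# predict cytotoxicity over the whole concentration plane, pick max synergy
g = np.linspace(0.0, 1.0, 50)
grid = np.array([(a, b) for a in g for b in g])
excess = net.predict(grid) - (1 - (1 - 0.6 * grid[:, 0]) * (1 - 0.5 * grid[:, 1]))
print("most synergistic combination:", grid[np.argmax(excess)])
```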
NASA Astrophysics Data System (ADS)
Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.
2014-08-01
A new approach to describing neutron spectra of deuteron-induced reactions in Monte Carlo particle transport simulations has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, excellent agreement was observed between experimental data and PHITS-derived results using the combined method for thick-target neutron yields over a wide range of neutron emission angles. We also applied the new method to estimate (d,xp) spectra in the same reactions and discussed its validity for proton emission spectra.
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels. The score combination approach is applied after normalizing both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level is the best, followed by the classical weighted sum rule and the classical sum rule, in that order. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
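The score-level fusion described here is easy to sketch: min-max normalization of each modality's scores followed by a weighted sum. The fuzzy-logic decision stage is omitted, and the scores, weight and threshold below are illustrative.

```python
import numpy as np

def min_max(s):
    """Min-max normalization to the [0, 1] range."""
    return (s - s.min()) / (s.max() - s.min())

iris_scores = np.array([0.82, 0.31, 0.65, 0.90])        # hypothetical match scores
finger_scores = np.array([410.0, 120.0, 300.0, 505.0])  # different native scale

w_iris = 0.6                                            # illustrative weight
fused = w_iris * min_max(iris_scores) + (1 - w_iris) * min_max(finger_scores)
decision = fused >= 0.5                                 # accept/reject threshold
print(fused, decision)
```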
Papamokos, George; Silins, Ilona
2016-01-01
There is an increasing need for new, reliable non-animal-based methods to predict and test the toxicity of chemicals. Quantitative structure-activity relationship (QSAR), a computer-based method linking chemical structures with biological activities, is used in predictive toxicology. In this study, we tested an approach that combines QSAR data with literature profiles of carcinogenic modes of action automatically generated by a text-mining tool. The aim was to generate data patterns that identify associations between chemical structures and biological mechanisms related to carcinogenesis. Using these two methods, individually and combined, we evaluated 96 rat carcinogens of the hematopoietic system, liver, lung, and skin. We found that skin and lung rat carcinogens were mainly mutagenic, while the group of carcinogens affecting the hematopoietic system and the liver also included a large proportion of non-mutagens. The automatic literature analysis showed that mutagenicity was a frequently reported endpoint in the literature on these carcinogens; however, less common endpoints such as immunosuppression and hormonal receptor-mediated effects were also found in connection with some of the carcinogens, results of potential importance for certain target organs. The combined approach, using QSAR and text-mining techniques, could be useful for identifying more detailed information on biological mechanisms and their relation to chemical structures. The method can be particularly useful in increasing the understanding of structure-activity relationships for non-mutagens.
Transverse vibrations of non-uniform beams. [combined finite element and Rayleigh-Ritz methods
NASA Technical Reports Server (NTRS)
Klein, L.
1974-01-01
The free vibrations of elastic beams with nonuniform characteristics are investigated theoretically by a new method that combines the advantages of a finite element approach and of a Rayleigh-Ritz analysis. Comparison with known analytical results for uniform beams shows good convergence of the method for natural frequencies and modes; for internal shear forces and bending moments, the rate of convergence is less rapid. Results from experiments conducted with a cantilevered helicopter blade with strong nonuniformities, and also from alternative theoretical methods, indicate that the theory adequately predicts natural frequencies and mode shapes. General guidelines for efficient use of the method are presented.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
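A compact sketch of the two-stage idea, with a Gaussian-process regressor standing in for kriging (simple kriging is a special case of GP regression). The terrain variables, coordinates and hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
xy = rng.uniform(0, 1000, (300, 2))            # survey coordinates (m)
terrain = rng.random((300, 4))   # net radiation, elevation, slope, vegetation
depth = 2.0 * terrain[:, 1] + np.sin(xy[:, 0] / 150.0) + 0.1 * rng.normal(size=300)

# stage 1: tree captures large-scale variation from terrain variables
tree = DecisionTreeRegressor(max_depth=5).fit(terrain, depth)
resid = depth - tree.predict(terrain)

# stage 2: interpolate the residuals in space (kriging-like GP regression)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0)).fit(xy, resid)

# combined estimate at new locations = tree prediction + interpolated residual
new_xy, new_terrain = rng.uniform(0, 1000, (10, 2)), rng.random((10, 4))
combined_depth = tree.predict(new_terrain) + gp.predict(new_xy)
```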
An efficient hybrid technique in RCS predictions of complex targets at high frequencies
NASA Astrophysics Data System (ADS)
Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe
2017-09-01
Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to the computation of the RCS of all surfaces of a complex object, due to caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, exploiting the advantages and avoiding the disadvantages of each. The new combination yields a very efficient and accurate method for analyzing the RCS of complex structures at high frequencies. The proposed method has been validated by comparing RCS results obtained with the proposed approach against the rigorous Method of Moments (MoM) for some simple cases. Some complex cases have also been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for computing the RCS of very large and complex targets at high frequencies.
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Wang, R.; Secunde, R.
1992-01-01
A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.
The index-flood and the GRADEX methods combination for flood frequency analysis.
NASA Astrophysics Data System (ADS)
Fuentes, Diana; Di Baldassarre, Giuliano; Quesada, Beatriz; Xu, Chong-Yu; Halldin, Sven; Beven, Keith
2017-04-01
Flood frequency analysis is used in many applications, including flood risk management, design of hydraulic structures, and urban planning. However, such analysis requires long series of observed discharge data, which are not available in many basins around the world. In this study, we tested the usefulness of combining regional discharge and local precipitation data to estimate the event flood volume frequency curve for 63 catchments in Mexico, Central America and the Caribbean. This was achieved by combining two existing flood frequency analysis methods: the regional index-flood approach and the GRADEX method. For return periods up to 10 years, the scaled flood frequency curve was assumed, following the index-flood approach, to have a similar shape for catchments with similar flood behaviour. For return periods larger than 10 years, the probability distributions of rainfall and discharge volumes were assumed, following the GRADEX method, to be asymptotically exponential-type functions with the same scale parameter. Results showed that if the mean annual flood (MAF), used as the index flood, is known, the index-flood approach performed well for return periods of up to 10 years, resulting in a 25% mean relative error in prediction. For larger return periods the predictive capability decreased, but could be improved by the use of the GRADEX method. As the MAF is unknown in ungauged basins and basins with short records, we tested predicting the MAF from climate-physical catchment characteristics and from discharge statistics, the latter when observations were available for only 8 years. Only the use of discharge statistics resulted in acceptable predictions.
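Schematically, the splice between the two methods can be written as below. The growth curve, MAF and gradex value are illustrative assumptions, and the logarithmic tail is an approximation to the Gumbel reduced-variate slope used by GRADEX.

```python
import numpy as np

def flood_quantile(T, maf, growth_curve, gradex, T_switch=10.0):
    """Event flood volume for return period T (years)."""
    if T <= T_switch:
        return maf * growth_curve(T)                 # index-flood region
    q_switch = maf * growth_curve(T_switch)
    # GRADEX extension: exponential tail whose slope comes from rainfall
    return q_switch + gradex * np.log(T / T_switch)

growth = lambda T: 0.5 + 0.6 * np.log(T)             # hypothetical growth curve
for T in (2, 10, 50, 100):
    print(T, round(flood_quantile(T, maf=120.0, growth_curve=growth, gradex=45.0), 1))
```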
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling simulates an unknown distribution from an existing small sample of data, with the aim of transforming the small sample into a large one. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U [k=2]). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU, and combining the Nordtest method with bootstrap resampling is a suitable alternative for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
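The combination can be sketched in Nordtest-style terms: a bootstrap over the IQC results supplies the within-laboratory component u(Rw), which is pooled with a bias component from EQA data and expanded with k = 2. All input values below are synthetic placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
iqc = rng.normal(6.0, 0.12, 120)          # 6 months of WBC IQC results (toy)
eqa_bias_pct, u_cref_pct = 1.1, 0.6       # from 12 months of EQA data (toy)

# bootstrap the IQC standard deviation to estimate u(Rw) as a CV%
boots = np.array([rng.choice(iqc, size=iqc.size, replace=True).std(ddof=1)
                  for _ in range(2000)])
u_rw_pct = 100 * boots.mean() / iqc.mean()

u_bias_pct = np.sqrt(eqa_bias_pct ** 2 + u_cref_pct ** 2)  # Nordtest bias term
U_pct = 2 * np.sqrt(u_rw_pct ** 2 + u_bias_pct ** 2)       # expanded MU, k = 2
print(f"expanded uncertainty: {U_pct:.2f}%")
```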
Risk assessments for mixtures: technical methods commonly used in the United States
A brief (20 minute) talk on the technical approaches used by EPA and other US agencies to assess risks posed by combined exposures to one or more chemicals. The talk systematically reviews the methodologies (whole-mixtures and component-based approaches) that are or have been used ...
One Approach to Teaching the Specific Language Disabled Adult Language Arts.
ERIC Educational Resources Information Center
Peterson, Binnie L.
1981-01-01
One approach never before used in adult language arts instruction--the Slingerland Simultaneous Multisensory Technique--has been found useful for specific language disabled adults in multisensory programs at Anchorage Community College. The Slingerland method builds from single sight, sound, and feel of letters through combinations, encoding,…
FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR
The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...
An Approach to Integrating Interprofessional Education in Collaborative Mental Health Care
ERIC Educational Resources Information Center
Curran, Vernon; Heath, Olga; Adey, Tanis; Callahan, Terrance; Craig, David; Hearn, Taryn; White, Hubert; Hollett, Ann
2012-01-01
Objective: This article describes an evaluation of a curriculum approach to integrating interprofessional education (IPE) in collaborative mental health practice across the pre- to post-licensure continuum of medical education. Methods: A systematic evaluation of IPE activities was conducted, utilizing a combination of evaluation study designs,…
An Experimental Study of the Emergence of Human Communication Systems
ERIC Educational Resources Information Center
Galantucci, Bruno
2005-01-01
The emergence of human communication systems is typically investigated via 2 approaches with complementary strengths and weaknesses: naturalistic studies and computer simulations. This study was conducted with a method that combines these approaches. Pairs of participants played video games requiring communication. Members of a pair were…
The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...
ERIC Educational Resources Information Center
San Antonio, Diosdado M.; Gamage, David T.
2007-01-01
Purpose: The paper aims to examine the effect of implementing participatory school administration, leadership and management (PSALM) on the levels of empowerment among the educational stakeholders. Design/methodology/approach: A mixed method approach, combining the experimental design with empirical surveys, interviews and documentary analysis,…
Stewart, Gavin B.; Altman, Douglas G.; Askie, Lisa M.; Duley, Lelia; Simmonds, Mark C.; Stewart, Lesley A.
2012-01-01
Background Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. Methods and Findings We included data from 24 randomised controlled trials evaluating antiplatelet agents for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using anti-platelets (Relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of woman benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. Conclusions For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across-study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials. PMID:23056232
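For readers unfamiliar with the terminology, the "two-stage" approach reduces each trial's IPD to a summary measure and then pools the summaries. A minimal sketch with invented event counts and inverse-variance (fixed-effect) pooling of log relative risks follows.

```python
import numpy as np

# per-trial 2x2 data: (events_treat, n_treat, events_ctrl, n_ctrl) - invented
trials = [(30, 500, 40, 500), (12, 200, 18, 210), (55, 900, 70, 880)]

log_rr, var = [], []
for a, n1, c, n0 in trials:                       # stage 1: within each trial
    log_rr.append(np.log((a / n1) / (c / n0)))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n0)   # delta-method variance

w = 1 / np.array(var)                             # stage 2: pool across trials
pooled = np.sum(w * np.array(log_rr)) / w.sum()
se = np.sqrt(1 / w.sum())
print(f"RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")
```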
Approaches to cutaneous wound healing: basics and future directions.
Zeng, Ruijie; Lin, Chuangqiang; Lin, Zehuo; Chen, Hong; Lu, Weiye; Lin, Changmin; Li, Haihong
2018-04-10
The skin provides essential functions, such as thermoregulation, hydration, excretion and synthesis of vitamin D. Major disruptions of the skin cause impairment of critical functions, resulting in high morbidity and death, or leaving one with life-changing cosmetic damage. Due to the complexity of the skin, diverse approaches, both traditional and advanced, are needed to improve cutaneous wound healing. Cutaneous wounds undergo four phases of healing. Traditional management, including skin grafts and wound dressings, is still common in current practice, but now often in combination with newer technology, such as using engineered skin substitutes in skin grafts or combining traditional cotton gauze with anti-bacterial nanoparticles. Various upcoming methods, such as vacuum-assisted wound closure, engineered skin substitutes, stem cell therapy, growth factors and cytokine therapy, have emerged in recent years and are being used to assist wound healing, or even to replace traditional methods. However, many of these methods still lack assessment by large-scale studies and/or extensive application. Conceptual changes, for example precision medicine, and the rapid advancement of science and technology, such as RNA interference and 3D printing, offer tremendous potential. In this review, we focus on the basics of wound treatment and summarize recent developments involving both traditional and hi-tech therapeutic methods that lead to both rapid healing and better cosmetic results. Future studies should explore more cost-effective, convenient and efficient approaches to cutaneous wound healing. Graphical abstract: Combination of various materials to create advanced wound dressings.
Cluster ensemble based on Random Forests for genetic data.
Alhusain, Luluah; Hafez, Alaaeldin M
2017-01-01
Clustering plays a crucial role in several application domains, such as bioinformatics. In bioinformatics, clustering has been extensively used as an approach for detecting interesting patterns in genetic data. One application is population structure analysis, which aims to group individuals into subpopulations based on shared genetic variations, such as single nucleotide polymorphisms. Advances in DNA sequencing technology have facilitated the acquisition of genetic datasets of exceptional size. Genetic data usually contain hundreds of thousands of genetic markers genotyped for thousands of individuals, making an efficient means of handling such data desirable. Random Forests (RF) has emerged as an efficient algorithm capable of handling high-dimensional data, and it provides a proximity measure that can capture different levels of co-occurring relationships between variables. RF has been widely considered a supervised learning method, although it can be converted into an unsupervised learning method. Therefore, an RF-derived proximity measure combined with a clustering technique may be well suited for determining the underlying structure of unlabeled data. This paper proposes RFcluE, a cluster ensemble approach for determining the underlying structure of genetic data based on RF. The approach comprises a cluster ensemble framework that combines multiple runs of RF clustering. Experiments were conducted on a high-dimensional, real genetic dataset to evaluate the proposed approach, including an examination of the impact of parameter changes, a comparison of RFcluE performance against other clustering methods, and an assessment of the relationship between the diversity and quality of the ensemble and its effect on RFcluE performance. The paper demonstrates the effectiveness of RFcluE for population structure analysis and illustrates that a cluster ensemble approach, combining multiple RF clusterings, produces more robust and higher-quality results because the ensemble is fed diverse views of the high-dimensional genetic data obtained through bagging and random subspace, the two key features of the RF algorithm.
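One ensemble member of an RF-based clustering might look like the sketch below (the full RFcluE method combines many such runs and is not reproduced here): a forest separates the real data from a column-permuted synthetic copy, leaf co-occurrence defines a proximity matrix, and hierarchical clustering of 1 − proximity yields labels. The genotypes are random stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(6)
X = rng.integers(0, 3, (100, 500)).astype(float)   # 100 individuals x 500 SNPs
X_syn = np.apply_along_axis(rng.permutation, 0, X) # breaks inter-SNP structure

# unsupervised RF trick: learn to separate real from synthetic data
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(np.vstack([X, X_syn]), np.r_[np.ones(100), np.zeros(100)])

leaves = rf.apply(X)                               # (100, n_trees) leaf indices
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(-1)
np.fill_diagonal(prox, 1.0)

# cluster on the RF-derived dissimilarity 1 - proximity
dist = squareform(1.0 - prox, checks=False)
labels = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
```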
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Charles; Penchoff, Deborah A.; Wilson, Angela K., E-mail: wilson@chemistry.msu.edu
2015-11-21
An effective approach for the determination of lanthanide energetics, as demonstrated by application to the third ionization energy (in the gas phase) for the first half of the lanthanide series, has been developed. This approach uses a combination of highly correlated and fully relativistic ab initio methods to accurately describe the electronic structure of heavy elements. Both scalar and fully relativistic methods are used to achieve an approach that is both computationally feasible and accurate. The impact of basis set choice and the number of electrons included in the correlation space has also been examined.
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed method research designs (MM) combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can be an added value to improve clinical practices as, through the integration of qualitative and quantitative methods, researchers can better assess complex phenomena typical of nursing.
Application of meta-analysis methods for identifying proteomic expression level differences.
Amess, Bob; Kluge, Wolfgang; Schwarz, Emanuel; Haenisch, Frieder; Alsaif, Murtada; Yolken, Robert H; Leweke, F Markus; Guest, Paul C; Bahn, Sabine
2013-07-01
We present new statistical approaches for identifying proteins whose expression levels are significantly changed when applying meta-analysis to two or more independent experiments. We showed that the Euclidean distance measure has a reduced risk of false positives compared to the rank product method. Our Ψ-ranking method has advantages over the traditional fold-change approach by incorporating both the fold-change direction and the p-value. In addition, the second novel method, Π-ranking, considers the ratio of the fold-change and thus integrates all three parameters. We further improved the latter by introducing our third technique, Σ-ranking, which combines all three parameters in a balanced nonparametric approach. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
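The paper's Ψ-, Π- and Σ-ranking schemes are specific to the authors and not reproduced here; the sketch below shows only the generic idea of a score that combines fold-change direction and magnitude with the p-value, so that concordant, significant changes rank first. Values are toy data.

```python
import numpy as np

log2_fc = np.array([1.8, -0.2, -1.1, 0.9])     # toy per-protein fold changes
p_val = np.array([0.001, 0.40, 0.02, 0.07])    # toy per-protein p-values

# signed score: keeps fold-change direction, weights by significance
score = log2_fc * (-np.log10(p_val))
ranking = np.argsort(-np.abs(score))           # strongest combined evidence first
print(score, ranking)
```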
Liu, James K; Silva, Nicole A; Sevak, Ilesha A; Eloy, Jean Anderson
2018-04-01
OBJECTIVE There has been much debate regarding the optimal surgical approach for resecting olfactory groove meningiomas (OGMs). In this paper, the authors analyzed the factors involved in approach selection and reviewed the surgical outcomes in a series of OGMs. METHODS A retrospective review of 28 consecutive OGMs from a prospective database was conducted. Each tumor was treated via one of 3 approaches: transbasal approach (n = 15), pure endoscopic endonasal approach (EEA; n = 5), and combined (endoscope-assisted) transbasal-EEA (n = 8). RESULTS The mean tumor volume was greatest in the transbasal (92.02 cm3) and combined (101.15 cm3) groups. Both groups had significant lateral dural extension over the orbits (transbasal 73.3%, p < 0.001; combined 100%), while the transbasal group had the most cerebral edema (73.3%, p < 0.001) and vascular involvement (66.7%, p < 0.001), and the least presence of a cortical cuff (33.3%, p = 0.019). All tumors in the combined group were recurrent tumors that invaded into the sinonasal cavity. The purely EEA group had the smallest mean tumor volume (33.33 cm3), all with a cortical cuff and no lateral dural extension. Gross-total resection was achieved in 80% of transbasal, 100% of EEA, and 62.5% of combined cases. Near-total resection (> 95%) was achieved in 20% of transbasal and 37.5% of combined cases, all due to tumor adherence to the critical neurovascular structures. The rate of CSF leakage was 0% in the transbasal and combined groups, and there was 1 leak in the EEA group (20%), resulting in an overall CSF leakage rate of 3.6%. Olfaction was preserved in 66.7% in the transbasal group. There was no significant difference in length of stay or 30-day readmission rate between the 3 groups. The mean modified Rankin Scale score was 0.79 after the transbasal approach, 2.0 after EEA, and 2.4 after the combined approach (p = 0.0604). The mean follow-up was 14.5 months (range 1-76 months). CONCLUSIONS The transbasal approach provided the best clinical outcomes with the lowest rate of complications for large tumors (> 40 mm) and for smaller tumors (< 40 mm) with intact olfaction. The role of EEA appears to be limited to smaller, appropriately selected tumors in which olfaction is already absent. EEA also plays an important adjunctive role when combined with the transbasal approach for recurrent OGMs invading the sinonasal cavity. Careful patient selection using an individualized, tailored strategy is important to optimize surgical outcomes.
Isgut, Monica; Rao, Mukkavilli; Yang, Chunhua; Subrahmanyam, Vangala; Rida, Padmashree C G; Aneja, Ritu
2018-03-01
Modern drug discovery efforts have had mediocre success rates with increasing developmental costs, and this has encouraged pharmaceutical scientists to seek innovative approaches. Recently with the rise of the fields of systems biology and metabolomics, network pharmacology (NP) has begun to emerge as a new paradigm in drug discovery, with a focus on multiple targets and drug combinations for treating disease. Studies on the benefits of drug combinations lay the groundwork for a renewed focus on natural products in drug discovery. Natural products consist of a multitude of constituents that can act on a variety of targets in the body to induce pharmacodynamic responses that may together culminate in an additive or synergistic therapeutic effect. Although natural products cannot be patented, they can be used as starting points in the discovery of potent combination therapeutics. The optimal mix of bioactive ingredients in natural products can be determined via phenotypic screening. The targets and molecular mechanisms of action of these active ingredients can then be determined using chemical proteomics, and by implementing a reverse pharmacokinetics approach. This review article provides evidence supporting the potential benefits of natural product-based combination drugs, and summarizes drug discovery methods that can be applied to this class of drugs. © 2017 Wiley Periodicals, Inc.
Making Sense of the Combined Degree Experience: The Example of Criminology Double Degrees
ERIC Educational Resources Information Center
Wimshurst, Kerry; Manning, Matthew
2017-01-01
Little research has been undertaken on student experiences of combined degrees. The few studies report that a considerable number of students experienced difficulty with the contrasting epistemic/disciplinary demands of the component programmes. A mixed-methods approach was employed to explore the experiences of graduates from four double degrees…
Comparison as an Approach to the Experimental Method
ERIC Educational Resources Information Center
Turner, David A.
2017-01-01
In his proposal for comparative education, Marc-Antoine Jullien de Paris argues that the comparative method offers a viable alternative to the experimental method. In an experiment, the scientist can manipulate the variables in such a way that he or she can see any possible combination of variables at will. In comparative education, or in…
An Evaluation of Teaching Introductory Geomorphology Using Computer-based Tools.
ERIC Educational Resources Information Center
Wentz, Elizabeth A.; Vender, Joann C.; Brewer, Cynthia A.
1999-01-01
Compares student reactions to traditional teaching methods and an approach where computer-based tools (GEODe CD-ROM and GIS-based exercises) were either integrated with or replaced the traditional methods. Reveals that the students found both of these tools valuable forms of instruction when used in combination with the traditional methods. (CMK)
Does Mixed Methods Research Matter to Understanding Childhood Well-Being?
ERIC Educational Resources Information Center
Jones, Nicola; Sumner, Andy
2009-01-01
There has been a rich debate in development studies on combining research methods in recent years. We explore the particular challenges and opportunities surrounding mixed methods approaches to childhood well-being. We argue that there are additional layers of complexity due to the distinctiveness of children's experiences of deprivation or…
A chance constraint estimation approach to optimizing resource management under uncertainty
Michael Bevers
2007-01-01
Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...
Assessing Grammar Teaching Methods Using a Metacognitive Framework.
ERIC Educational Resources Information Center
Burkhalter, Nancy
A study examined 3 grammar teaching methods to understand why some methods may carry over into writing better than others. E. Bialystok and E. B. Ryan's (1985) metacognitive model of language skills was adapted to plot traditional grammar, sentence combining, and the functional/inductive approach according to the amount of analyzed knowledge and…
The composition of heterogeneous control laws
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Astrom, Karl
1991-01-01
The fuzzy control literature and industrial practice provide certain nonlinear methods for combining heterogeneous control laws, but these methods have been very difficult to analyze theoretically. An alternate formulation and extension of this approach is presented that has several practical and theoretical benefits. An example of heterogeneous control is given and two alternate analysis methods are presented.
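A toy example of composing heterogeneous control laws in the fuzzy-control spirit: two local laws are blended through a normalized membership function of the error. The paper's formulation and analysis differ; the gains and membership width here are invented.

```python
import numpy as np

def u_fine(e):   return -5.0 * e            # aggressive law near the setpoint
def u_coarse(e): return -1.0 * np.sign(e)   # bang-bang-like law far away

def membership_near(e, width=0.5):
    """1 near zero error, decaying to 0 far away."""
    return np.exp(-(e / width) ** 2)

def control(e):
    m = membership_near(e)
    return m * u_fine(e) + (1.0 - m) * u_coarse(e)   # convex blend of the laws

for e in (0.05, 0.4, 2.0):
    print(e, round(float(control(e)), 3))
```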
Nonlinear flap-lag axial equations of a rotating beam
NASA Technical Reports Server (NTRS)
Kaza, K. R. V.; Kvaternik, R. G.
1977-01-01
It is possible to identify essentially four approaches by which analysts have established either the linear or nonlinear governing equations of motion for a particular problem related to the dynamics of rotating elastic bodies. The approaches include the effective applied load artifice in combination with a variational principle and the use of Newton's second law, written as D'Alembert's principle, applied to the deformed configuration. A third approach is a variational method in which nonlinear strain-displacement relations and a first-degree displacement field are used. The method introduced by Vigneron (1975) for deriving the linear flap-lag equations of a rotating beam constitutes the fourth approach. The reported investigation shows that all four approaches make use of the geometric nonlinear theory of elasticity. An alternative method for deriving the nonlinear coupled flap-lag-axial equations of motion is also discussed.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2014-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
NASA Astrophysics Data System (ADS)
Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus
2017-05-01
For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and an efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides highly invariant feature extraction; this allows a significantly reduced adaptation and training effort for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison with legacy target recognition approaches shows the marked performance increase of the proposed CNN+SVM machine-learning approach and its capability for real-time high-definition video exploitation.
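Generically, the pipeline described (a pre-trained CNN as a fixed, invariant feature extractor feeding an SVM) can be sketched as below. A standard linear SVM stands in for the proprietary frequency-domain trainer, and the data are random placeholders for labelled video chips.

```python
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# CNN pre-trained once on domain-extrinsic images (ImageNet), used frozen
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop the classifier head
backbone.eval()

def features(batch):                       # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return backbone(batch).numpy()     # (N, 512) invariant feature vectors

# toy stand-ins for labelled chips of targets (1) vs. background (0)
X_train = features(torch.rand(40, 3, 224, 224))
y_train = [0] * 20 + [1] * 20
clf = LinearSVC().fit(X_train, y_train)    # fast to retrain for new classes
print(clf.predict(features(torch.rand(4, 3, 224, 224))))
```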
Brumfitt, W; Salton, M R J; Hamilton-Miller, J M T
2002-11-01
We have sought ways to circumvent resistance by combining nisin with other antibiotics known to target bacterial cell wall biosynthesis. Twenty strains each of methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE) were tested in vitro by standardized methods against nisin alone and combined with bacitracin, ramoplanin and chloramphenicol. Ramoplanin was the most potent compound, and bacitracin had the least activity. Two-way synergy was observed with nisin and ramoplanin. However, chloramphenicol was clearly antagonistic to the activity of nisin. Observations of synergy between nisin and ramoplanin against MRSA and VRE offer a promising approach to the concept of combining nisin with inhibitors of cell wall peptidoglycan. Further investigations are needed in order to develop this approach as a clinical possibility.
Opletal, George; Drumm, Daniel W; Wang, Rong P; Russo, Salvy P
2014-07-03
Ternary glass structures are notoriously difficult to model accurately, and yet prevalent in several modern endeavors. Here, a novel combination of Reverse Monte Carlo (RMC) modeling and ab initio molecular dynamics (MD) is presented, rendering these complicated structures computationally tractable. A case study (Ge6.25As32.5Se61.25 glass) illustrates the effects of ab initio MD quench rates and equilibration temperatures, and the combined approach's efficacy over standard RMC or random insertion methods. Submelting point MD quenches achieve the most stable, realistic models, agreeing with both experimental and fully ab initio results. The simple approach of RMC followed by ab initio geometry optimization provides similar quality to the RMC-MD combination, for far fewer resources.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two-, three-, six-, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
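A toy sketch of the state-dependent convex combination described above, with hypothetical stand-ins for the StaF and R-MBRL approximations and an assumed distance-based blending function:

```python
# Illustrative sketch of the state-dependent convex combination: the
# value estimate blends a local (StaF-style) approximation with a
# regional one, weighting the regional model more as the state nears
# the origin. Both approximators and the blending rule are hypothetical.
import numpy as np

def v_staf(x):      # stand-in for the local StaF kernel approximation
    return float(x @ x) * 0.5

def v_rmbrl(x):     # stand-in for the regional R-MBRL approximation
    return float(x @ x) * 0.45

def blended_value(x, radius=1.0):
    # lam -> 0 near the origin (regional model dominates),
    # lam -> 1 far away (local StaF model dominates)
    lam = min(np.linalg.norm(x) / radius, 1.0)
    return lam * v_staf(x) + (1.0 - lam) * v_rmbrl(x)

print(blended_value(np.array([0.1, 0.2])))   # near origin: regional weight
print(blended_value(np.array([3.0, -2.0])))  # far away: local weight
```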
Space construction base control system
NASA Technical Reports Server (NTRS)
Kaczynski, R. F.
1979-01-01
Several approaches for an attitude control system are studied and developed for a large space construction base that is structurally flexible. Digital simulations were obtained using the following techniques: (1) the multivariable Nyquist array method combined with closed loop pole allocation, (2) the linear quadratic regulator method. Equations for the three-axis simulation using the multilevel control method were generated and are presented. Several alternate control approaches are also described. A technique is demonstrated for obtaining the dynamic structural properties of a vehicle which is constructed of two or more submodules of known dynamic characteristics.
A community computational challenge to predict the activity of pairs of compounds.
Bansal, Mukesh; Yang, Jichen; Karan, Charles; Menden, Michael P; Costello, James C; Tang, Hao; Xiao, Guanghua; Li, Yajuan; Allen, Jeffrey; Zhong, Rui; Chen, Beibei; Kim, Minsoo; Wang, Tao; Heiser, Laura M; Realubit, Ronald; Mattioli, Michela; Alvarez, Mariano J; Shen, Yao; Gallahan, Daniel; Singer, Dinah; Saez-Rodriguez, Julio; Xie, Yang; Stolovitzky, Gustavo; Califano, Andrea
2014-12-01
Recent therapeutic successes have renewed interest in drug combinations, but experimental screening approaches are costly and often identify only small numbers of synergistic combinations. The DREAM consortium launched an open challenge to foster the development of in silico methods to computationally rank 91 compound pairs, from the most synergistic to the most antagonistic, based on gene-expression profiles of human B cells treated with individual compounds at multiple time points and concentrations. Using scoring metrics based on experimental dose-response curves, we assessed 32 methods (31 community-generated approaches and SynGen), four of which performed significantly better than random guessing. We highlight similarities between the methods. Although the accuracy of predictions was not optimal, we find that computational prediction of compound-pair activity is possible, and that community challenges can be useful to advance the field of in silico compound-synergy prediction.
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of both. It combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures, comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite, were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
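A minimal sketch of the two-step scheme, assuming a generic Hollomon-type hardening law as a stand-in for the dislocation-based strain-hardening model; all parameter values are illustrative, not fitted.

```python
# Sketch of the two-step scheme: per-phase flow curves (generic
# sigma = sigma0 + K * eps**n hardening as a stand-in for the
# dislocation-based model) combined by the rule of mixtures.
import numpy as np

strain = np.linspace(0.0, 0.12, 50)

def flow_curve(sigma0, K, n, eps):
    """Generic hardening law sigma = sigma0 + K * eps**n (MPa)."""
    return sigma0 + K * eps**n

sigma_ferrite = flow_curve(300.0, 600.0, 0.35, strain)      # assumed values
sigma_martensite = flow_curve(1000.0, 1500.0, 0.10, strain)

f_m = 0.25  # martensite volume fraction (illustrative)
sigma_dp = f_m * sigma_martensite + (1.0 - f_m) * sigma_ferrite
print(sigma_dp[:5])  # composite dual-phase flow curve, first few points
```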
Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph
2003-06-01
A new method to correct for the largest systematic influence in mass determination, air buoyancy, is outlined. A full description of the most relevant influence parameters is given, and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is needed to correct for air buoyancy. The classical approach demands determination of the air density, and therefore suitable equipment to measure at least the air temperature, the air pressure, and the relative air humidity within the demanded uncertainties (i.e., three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method; however, a field laboratory may not always possess fully traceable measurement systems for these room-climate parameters. A comparison of three approaches to calculating the combined uncertainty of mass values is presented: the classical determination of air buoyancy, the artefact method, and neglecting this systematic effect as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry, and especially for the production of certified reference materials, reference values, and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
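For orientation, a sketch of the classical correction that the artefact method avoids, using the simplified CIPM approximation for moist-air density (pressure in hPa, temperature in degrees Celsius, relative humidity in %) and assuming stainless-steel reference weights of 8000 kg/m3; all values are illustrative.

```python
# Sketch of the classical buoyancy correction: air density from the
# simplified CIPM approximation, then correction of the balance reading
# for the density difference between sample and reference weights.
def air_density(p_hpa, t_c, rh_pct):
    """Approximate moist-air density in kg/m^3 (simplified CIPM formula)."""
    return (0.348444 * p_hpa - rh_pct * (0.00252 * t_c - 0.020582)) / (273.15 + t_c)

def buoyancy_corrected_mass(reading_g, rho_sample, rho_weights=8000.0,
                            p_hpa=1013.25, t_c=20.0, rh_pct=50.0):
    """Balance equilibrium: m_s(1 - rho_a/rho_s) = m_w(1 - rho_a/rho_w)."""
    rho_a = air_density(p_hpa, t_c, rh_pct)
    return reading_g * (1.0 - rho_a / rho_weights) / (1.0 - rho_a / rho_sample)

# Water-like sample (1000 kg/m^3): the correction is roughly +0.1 %.
print(buoyancy_corrected_mass(100.0, rho_sample=1000.0))
```

The three room-climate measurements enter only through air_density, which is exactly the dependency the artefact method removes.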
NASA Astrophysics Data System (ADS)
Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.
2018-04-01
The RF method based on grid-search parameter optimization achieved a classification accuracy of 88.16% in the classification of images with multiple feature variables. This accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method also performs better than SVM and ANN and is more capable of handling multidimensional feature variables. Combining the RF method with an object-based analysis approach improved the classification accuracy further. The multiresolution segmentation approach, with ESP-based scale parameter optimization, was used to obtain six scales for image segmentation; at a segmentation scale of 49, the classification accuracy reached its highest value of 89.58%. The accuracy of object-based RF classification was thus 1.42% higher than that of pixel-based classification (88.16%). Therefore, the RF classification method combined with an object-based analysis approach can achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, interpretation of remotely sensed imagery using the proposed method can provide technical support and a theoretical reference for remote-sensing-based monitoring of land reclamation.
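A minimal sketch of the grid-search tuning step for a random forest, using scikit-learn with an illustrative parameter grid and synthetic data rather than the study's imagery:

```python
# Grid-search parameter optimization for a random forest: exhaustive
# search over a small parameter grid, scored by cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_features": ["sqrt", 0.5]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 4))
```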
Teo, Chin Chye; Tan, Swee Ngin; Yong, Jean Wan Hong; Hew, Choy Sin; Ong, Eng Shi
2009-02-01
An approach that combined green-solvent extraction methods with chromatographic chemical fingerprinting and pattern recognition tools such as principal component analysis (PCA) was used to evaluate the quality of medicinal plants. Pressurized hot water extraction (PHWE) and microwave-assisted extraction (MAE) were used, and their efficiencies in extracting two bioactive compounds, namely stevioside (SV) and rebaudioside A (RA), from Stevia rebaudiana Bertoni (SB) grown under different cultivation conditions were compared. The proposed methods showed that SV and RA could be extracted from SB using pure water under optimized conditions. The extraction efficiency of the methods was observed to be higher than or comparable to heating under reflux with water. The method precision (RSD, n = 6) was found to vary from 1.91 to 2.86% for the two different methods on different days. Compared to PHWE, MAE had higher extraction efficiency with a shorter extraction time. MAE was also found to extract more chemical constituents and to provide distinctive chemical fingerprints for quality control purposes. Thus, a combination of MAE with chromatographic chemical fingerprints and PCA provided a simple and rapid approach for the comparison and classification of medicinal plants from different growth conditions. Hence, the current work highlights the importance of the extraction method in chemical fingerprinting for the classification of medicinal plants from different cultivation conditions with the aid of pattern recognition tools.
Monte Carlo Transport for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean-free-path regions with the accuracy of a transport method in long mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Pluye, Pierre; Hong, Quan Nha
2014-01-01
This article provides an overview of mixed methods research and mixed studies reviews. These two approaches are used to combine the strengths of quantitative and qualitative methods and to compensate for their respective limitations. This article is structured in three main parts. First, the epistemological background for mixed methods will be presented. Afterward, we present the main types of mixed methods research designs and techniques as well as guidance for planning, conducting, and appraising mixed methods research. In the last part, we describe the main types of mixed studies reviews and provide a tool kit and examples. Future research needs to offer guidance for assessing mixed methods research and reporting mixed studies reviews, among other challenges.
DOT National Transportation Integrated Search
1997-08-01
An experimental construction method was evaluated at the Lost River Bridge in Klamath County to reduce the discontinuity between the bridge and the roadway. The method included combining soil in six 300-mm lifts interlaced with geotextile reinforceme...
CUMULATIVE RISK ASSESSMENT FOR QUANTITATIVE RESPONSE DATA
The Relative Potency Factor approach (RPF) is used to normalize and combine different toxic potencies among a group of chemicals selected for cumulative risk assessment. The RPF method assumes that the slopes of the dose-response functions are all equal; but this method depends o...
DOT National Transportation Integrated Search
2001-09-01
In two recent studies by Miaou, he proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming
1990-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.
1989-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Anirban; Ganguly, Anindita; Chatterjee, Saumya Deep
2018-04-01
In this paper, the authors deal with seven classes of non-linear Volterra and Fredholm equations. They formulate an algorithm for solving these equation types via a Hybrid Function (HF) and Triangular Function (TF) piecewise-linear orthogonal approach. In this approach, the integral equation or integro-differential equation is reduced to an equivalent system of simultaneous non-linear equations, which is then solved by either Newton's method or Broyden's method. The authors calculate the L2-norm and max-norm errors of both the HF and TF methods for each class of equations. Through the illustrated examples, they show that the HF-based algorithm produces stable results, whereas the TF-based computational method yields stable, anomalous, or unstable results depending on the case.
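A small sketch of the final solution stage, assuming the integral equation has already been reduced to a non-linear system F(x) = 0; SciPy's broyden1 plays the role of Broyden's method, and the 2x2 system below is a toy stand-in for the HF/TF coefficient equations.

```python
# Solving the reduced simultaneous non-linear system F(x) = 0 with a
# quasi-Newton (Broyden) solver, as in the last step described above.
import numpy as np
from scipy.optimize import broyden1

def F(x):
    # Toy stand-in for the HF/TF coefficient equations.
    return np.array([x[0] ** 2 + x[1] - 1.0,
                     x[0] - x[1] ** 2])

x0 = np.array([0.5, 0.5])       # initial guess
sol = broyden1(F, x0, f_tol=1e-10)
print(sol, F(sol))              # residual should be ~0
```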
NASA Astrophysics Data System (ADS)
Pacheco, Anderson; Fontana, Filipe; Viotti, Matias R.; Veiga, Celso L. N.; Lothhammer, Lívia R.; Albertazzi G., Armando, Jr.
2015-08-01
The authors developed an achromatic speckle pattern interferometer able to measure in-plane displacements in polar coordinates. It has been used to measure combined stresses resulting from the superposition of mechanical loading and residual stresses. Relaxation methods have been applied to produce, on the surface of the specimen, a displacement field that can be used to determine the amount of combined stresses. Two relaxation methods are explored in this work: blind hole-drilling and indentation. The first results from a blind hole drilled with a high-speed drilling unit in the area of interest. The measured displacement data are fitted to an appropriate model to quantify the stress level using an indirect approach based on a set of finite element coefficients. The second approach uses indentation, where a hard spherical tip is firmly pressed against the surface to be measured with a predetermined indentation load. Plastic flow occurs around the indentation mark, producing a radial in-plane displacement field that is related to the amount of combined stresses. Also in this case, displacements are measured by the radial interferometer and used to determine the stresses by least-squares fitting to a displacement field determined by calibration. Both approaches are used to quantify the bending stresses and moment in eight sections of a 12 m long, 200 mm diameter steel pipe subjected to a known transverse loading. Reference values of bending stresses are also determined by strain gauges. The comparison between the four results is discussed in the paper.
Combination of intensity-based image registration with 3D simulation in radiation therapy.
Li, Pan; Malsch, Urban; Bendl, Rolf
2008-09-07
Modern techniques of radiotherapy like intensity modulated radiation therapy (IMRT) make it possible to deliver high dose to tumors of different irregular shapes while sparing surrounding healthy tissue. However, internal tumor motion makes precise calculation of the delivered dose distribution challenging, which makes analysis of tumor motion necessary. One way to describe target motion is image registration. Many registration methods have been developed previously; however, most of them belong either to geometric approaches or to intensity approaches. Methods that take account of anatomical information together with the results of intensity matching can greatly improve the results of image registration. Based on this idea, a combined method of image registration followed by 3D modeling and simulation was introduced in this project. Experiments were carried out on 4DCT lung datasets of five patients. In the 3D simulation, models obtained from images at end-exhalation were deformed to the state of end-inhalation. Diaphragm motions were around -25 mm in the cranial-caudal (CC) direction. To verify the quality of our new method, displacements of landmarks were calculated and compared with measurements in the CT images. Improved accuracy after simulation was shown compared to the results obtained by intensity-based image registration alone; the average improvement was 0.97 mm. The average Euclidean error of the combined method was around 3.77 mm. Unrealistic motions such as curl-shaped deformations in the results of image registration were corrected. The combined method required less than 30 min. Our method provides information about the deformation of the target volume, which we need for dose optimization and target definition in our planning system.
NASA Astrophysics Data System (ADS)
Asplund, Erik; Klüner, Thorsten
2012-03-01
In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997); doi:10.1063/1.473950]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); doi:10.1063/1.475576; Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004); doi:10.1063/1.1650297]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method: a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.
BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.
Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong
2018-04-19
ChIP-seq experiments that are aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling. Currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data, that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of the false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC method is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
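A toy two-level estimator in the spirit of the multi-fidelity idea: many cheap low-fidelity samples plus a small paired set estimating the high-minus-low correction. Both "models" below are synthetic stand-ins, not flow and transport simulators.

```python
# Toy two-level Monte Carlo estimator: E[high] is estimated as
# E[low] (many cheap samples) + E[high - low] (few paired samples).
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(xi):   # stand-in for an expensive simulation
    return np.sin(xi) + 0.05 * xi**2

def low_fidelity(xi):    # cheap, biased approximation of the same quantity
    return xi - xi**3 / 6.0

# Level 0: large low-fidelity ensemble.
xi0 = rng.normal(size=100_000)
level0 = low_fidelity(xi0).mean()

# Level 1: small paired ensemble estimating the correction E[high - low].
xi1 = rng.normal(size=200)
level1 = (high_fidelity(xi1) - low_fidelity(xi1)).mean()

print("multilevel estimate:", level0 + level1)
print("high-fidelity-only estimate at the same expensive-sample cost:",
      high_fidelity(xi1).mean())
```

Because the correction term has much smaller variance than the quantity itself, far fewer expensive samples are needed for a given accuracy.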
Nonconforming mortar element methods: Application to spectral discretizations
NASA Technical Reports Server (NTRS)
Maday, Yvon; Mavriplis, Cathy; Patera, Anthony
1988-01-01
Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data to evaluate the quality of each candidate ensemble. To combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) - k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI-Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases.
Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom
2012-01-01
Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.
ERIC Educational Resources Information Center
Edwards, Jeffrey R.; Lambert, Lisa Schurer
2007-01-01
Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated…
ERIC Educational Resources Information Center
Vaughn, Brandon K.
2009-01-01
This study considers the effectiveness of a "balanced amalgamated" approach to teaching graduate level introductory statistics. Although some research stresses replacing traditional lectures with more active learning methods, the approach of this study is to combine effective lecturing with active learning and team projects. The results of this…
DNA-based approach to aging martens (Martes americana and M. caurina)
Jonathan N. Pauli; John P. Whiteman; Bruce G. Marcot; Terry M. McClean; Merav Ben-David
2011-01-01
Demographic structure is central to understanding the dynamics of animal populations. However, determining the age of free-ranging mammals is difficult, and currently impossible when sampling with noninvasive, genetic-based approaches. We present a method to estimate age class by combining measures of telomere lengths with other biologically meaningful covariates in a...
The "Village" Model: A Consumer-Driven Approach for Aging in Place
ERIC Educational Resources Information Center
Scharlach, Andrew; Graham, Carrie; Lehning, Amanda
2012-01-01
Purpose of the Study: This study examines the characteristics of the "Village" model, an innovative consumer-driven approach that aims to promote aging in place through a combination of member supports, service referrals, and consumer engagement. Design and Methods: Thirty of 42 fully operational Villages completed 2 surveys. One survey examined…
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
ERIC Educational Resources Information Center
Durston, Sarah; Konrad, Kerstin
2007-01-01
This paper aims to illustrate how combining multiple approaches can inform us about the neurobiology of ADHD. Converging evidence from genetic, psychopharmacological and functional neuroimaging studies has implicated dopaminergic fronto-striatal circuitry in ADHD. However, while the observation of converging evidence from multiple vantage points…
A hybrid approach to protect palmprint templates.
Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding
2014-01-01
Biometric template protection is indispensable for protecting personal privacy in large-scale deployment of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on these three points. A heterogeneous space is designed for combining random projection and fuzzy vault properly in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses well and demonstrate the effectiveness of the proposed hybrid approach.
ERIC Educational Resources Information Center
Viertel, David C.; Burns, Diane M.
2012-01-01
Unique integrative learning approaches represent a fundamental opportunity for undergraduate students and faculty alike to combine interdisciplinary methods with applied spatial research. Geography and geoscience-related disciplines are particularly well-suited to adapt multiple methods within a holistic and reflective mentored research paradigm.…
Using mixed methods effectively in prevention science: designs, procedures, and examples.
Zhang, Wanqing; Watanabe-Galloway, Shinobu
2014-10-01
There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focus on two commonly used mixed methods designs: concurrent and sequential. We discuss the types of mixed methods designs, the reasons for and advantages of using a particular type of design, and the procedures for qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results while qualitative data can provide extensive insights. However, the emphasis on methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires open-mindedness and reflection from the involved researchers.
ERIC Educational Resources Information Center
Cheriani, Cheriani; Mahmud, Alimuddin; Tahmir, Suradi; Manda, Darman; Dirawan, Gufran Darma
2015-01-01
This study aims to determine the differences in learning outcomes when using a Problem Based Learning model combined with "Buginese" local cultural knowledge (PBL-Culture). It also explores students' activities in learning mathematics using the PBL-Culture model. This research used a Mixed Methods approach that combined quantitative…
Combining Primary Prevention and Risk Reduction Approaches in Sexual Assault Protection Programming
ERIC Educational Resources Information Center
Menning, Chadwick; Holtzman, Mellisa
2015-01-01
Objective: The object of this study is to extend prior evaluations of Elemental, a sexual assault protection program that combines primary prevention and risk reduction strategies within a single program. Participants and Methods: During 2012 and 2013, program group and control group students completed pretest, posttest, and 6-week and 6-month…
Herbert Ssegane; Devendra M. Amatya; E.W. Tollner; Zhaohua Dai; Jami E. Nettles
2013-01-01
Commonly used methods to predict streamflow at ungauged watersheds implicitly predict streamflow magnitude and temporal sequence concurrently. An alternative approach that has not been fully explored is the conceptualization of streamflow as a composite of two separable components of magnitude and sequence, where each component is estimated separately and then combined...
Sampling-based ensemble segmentation against inter-operator variability
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew
2011-03-01
Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated in a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p < .001).
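A minimal sketch of the fusion step, assuming a stack of binary segmentations obtained from perturbed runs; majority voting (and thresholded averaging) reduce them to one consensus mask. The masks below are synthetic.

```python
# Fusing a set of perturbed binary segmentations into one consensus mask
# by majority voting and thresholded averaging.
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((64, 64), dtype=bool)
truth[20:44, 20:44] = True

# Simulate 9 perturbed segmentations by flipping 5% of pixels each.
stack = np.stack([truth ^ (rng.random(truth.shape) < 0.05) for _ in range(9)])

votes = stack.sum(axis=0)
majority = votes > stack.shape[0] // 2      # majority voting
averaged = votes / stack.shape[0] >= 0.5    # thresholded averaging

print("single-run error:", (stack[0] ^ truth).mean())
print("consensus error: ", (majority ^ truth).mean())
```

The consensus error is far below any single run's, which is the reproducibility gain the ensemble framework exploits.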
NASA Astrophysics Data System (ADS)
Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.
2017-03-01
The authors describe a comprehensive approach to investigating nonlinear wave processes in the human cardiovascular system, based on a combination of high-precision pulse-wave measurement, mathematical methods for processing the empirical data, and direct numerical modeling of hemodynamic processes in an arterial tree.
A Mixed Methods Content Analysis of the Research Literature in Science Education
ERIC Educational Resources Information Center
Schram, Asta B.
2014-01-01
In recent years, more and more researchers in science education have been turning to the practice of combining qualitative and quantitative methods in the same study. This approach of using mixed methods creates possibilities to study the various issues that science educators encounter in more depth. In this content analysis, I evaluated 18…
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have been shown to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos set, and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
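For reference, Dempster's rule of combination, the D-S fusion mechanism named above, sketched for a toy two-class affect frame with made-up mass assignments:

```python
# Dempster's rule of combination for two mass functions over the same
# frame of discernment; the frame and masses here are illustrative.
def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass on contradictory evidence
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

P, N = frozenset({"pos"}), frozenset({"neg"})
PN = P | N  # total ignorance
m_visual = {P: 0.6, N: 0.1, PN: 0.3}  # evidence from visual features
m_text = {P: 0.5, N: 0.2, PN: 0.3}    # evidence from text features
print(dempster_combine(m_visual, m_text))
```

Agreement between the two sources concentrates mass on the shared hypothesis, while the normalization by 1 minus the conflict discounts contradictory evidence.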
Specifications of a continual reassessment method design for phase I trials of combined drugs†
Wages, Nolan A.; Conaway, Mark R.
2013-01-01
In studies of combinations of agents in phase I oncology trials, the dose–toxicity relationship may not be monotone for all combinations, in which case the toxicity probabilities follow a partial order. The continual reassessment method for partial orders (PO-CRM) is a design for phase I trials of combinations that relies on identifying possible complete orders associated with the partial order. This article addresses practical design considerations not previously covered in descriptions of the PO-CRM. We describe an approach to choosing a proper subset of possible orderings, formulated according to the known toxicity relationships within a matrix of combination therapies. Other design issues, such as working model selection and stopping rules, are also discussed. We demonstrate the practical ability of PO-CRM as a phase I design for combinations through its use in a recent trial designed at the University of Virginia Cancer Center. PMID:23729323
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
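For context, a sketch of the conventional DIIS step that LCIIS is compared against: coefficients minimizing the norm of a linear combination of commutator error vectors, subject to summing to one, via the standard bordered linear system. An orthonormal basis is assumed so the error is e = FD - DF; the LCIIS quartic solve itself is not reproduced here.

```python
# Conventional DIIS extrapolation: given past error matrices e_i, find
# coefficients c minimizing ||sum_i c_i e_i|| with sum_i c_i = 1, by
# solving the bordered (Lagrange-multiplier) linear system.
import numpy as np

def diis_coefficients(errors):
    """errors: list of error matrices; returns mixing coefficients."""
    n = len(errors)
    B = np.empty((n + 1, n + 1))
    B[:n, :n] = [[np.vdot(ei, ej) for ej in errors] for ei in errors]
    B[n, :n] = B[:n, n] = -1.0   # constraint rows for sum(c) = 1
    B[n, n] = 0.0
    rhs = np.zeros(n + 1)
    rhs[n] = -1.0
    return np.linalg.solve(B, rhs)[:n]

# Toy history of commutator errors for three past SCF iterates.
rng = np.random.default_rng(0)
errs = [rng.normal(scale=s, size=(4, 4)) for s in (1.0, 0.5, 0.2)]
c = diis_coefficients(errs)
print(c, c.sum())  # coefficients sum to 1
```

The next Fock or density matrix is then built as the same linear combination of the stored iterates; LCIIS replaces this quadratic minimization with the quartic commutator-norm problem described above.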
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect, without knowing which is the most powerful. Rather than relying on a single p-value, one can combine the p-values from multiple prespecified test statistics for inference. Combining functions include Fisher's combination test and the minimum p-value. With randomization-based tests, the increase in power can be remarkable compared with a single test or with Simes's method. The method is also versatile: it applies even when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply when the model includes a treatment-by-covariate interaction.
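The two combining functions mentioned, sketched in their simplest independence-assuming forms; the randomization-based versions discussed above would instead recompute the combined statistic over permutations of treatment labels.

```python
# Fisher's combination statistic -2*sum(log p), referred to a chi-square
# distribution with 2k degrees of freedom, and the minimum p-value with
# a Bonferroni-style adjustment (both assume independent p-values).
import numpy as np
from scipy import stats

def fisher_combination(pvals):
    stat = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(stat, df=2 * len(pvals))

def min_p(pvals):
    return min(1.0, len(pvals) * np.min(pvals))  # Bonferroni bound

p = np.array([0.04, 0.11, 0.07])  # p-values from prespecified tests
print("Fisher:", fisher_combination(p))
print("min-p :", min_p(p))
```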
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta-analysis at each threshold. A standard meta-analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better at estimating the between-study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate that the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta-analysis of test accuracy studies. PMID:29052347
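The final pooling step uses Rubin's rules; a minimal sketch follows, with illustrative per-imputation estimates and variances standing in for the threshold-level accuracy results.

```python
# Rubin's rules: pool point estimates and variances from m multiply
# imputed analyses into one estimate and one total variance.
import numpy as np

def rubins_rules(estimates, variances):
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()        # pooled point estimate
    ubar = variances.mean()        # within-imputation variance
    b = estimates.var(ddof=1)      # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b
    return qbar, total_var

est = [1.10, 1.25, 1.05, 1.18]    # estimates from 4 imputed datasets
var = [0.04, 0.05, 0.04, 0.06]
print(rubins_rules(est, var))
```

The (1 + 1/m) inflation of the between-imputation variance is what yields the slightly larger standard errors, and hence the better coverage, noted in the results.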
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object, to overcome the above problems. In the classical active contour method, the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome by initializing with the visual attention results and using a criterion to select the best region descriptor. This improves the convergence and the processing time while providing the advantages of a fully automated method.
Understanding Design Tradeoffs for Health Technologies: A Mixed-Methods Approach
O’Leary, Katie; Eschler, Jordan; Kendall, Logan; Vizer, Lisa M.; Ralston, James D.; Pratt, Wanda
2017-01-01
We introduce a mixed-methods approach for determining how people weigh tradeoffs in values related to health and technologies for health self-management. Our approach combines interviews with Q-methodology, a method from psychology uniquely suited to quantifying opinions. We derive the framework for structured data collection and analysis for the Q-methodology from theories of self-management of chronic illness and technology adoption. To illustrate the power of this new approach, we used it in a field study of nine older adults with type 2 diabetes, and nine mothers of children with asthma. Our mixed-methods approach provides three key advantages for health design science in HCI: (1) it provides a structured health sciences theoretical framework to guide data collection and analysis; (2) it enhances the coding of unstructured data with statistical patterns of polarizing and consensus views; and (3) it empowers participants to actively weigh competing values that are most personally significant to them. PMID:28804794
A hadoop-based method to predict potential effective drug combination.
Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing
2014-01-01
Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we constructed data preprocessing and support vector machine and naïve Bayesian classifiers on Hadoop for prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drug combinations as the number of possible combinations grows exponentially. The source code and datasets are available upon request.
NASA Astrophysics Data System (ADS)
Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der
2010-08-01
Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, a consequence of maintaining adequate sensor signal-to-noise ratio within a fine spectral passband. This leads to multiple ground features jointly occupying a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by spectral unmixing algorithms for estimating endmember abundance values in the pixel. The spectral matching techniques are analogous to supervised pattern recognition approaches and estimate the similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes: the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique outperforms the other methods in classifying both pure and mixed-class pixels simultaneously.
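A minimal sketch of the SCM idea as described: each pixel spectrum is least-squares fitted (gain and offset allowed) against every library spectrum and assigned to the best-fitting class. The spectra below are synthetic stand-ins for a real spectral library.

```python
# Spectral curve matching by least-squares fit: assign each pixel to
# the library spectrum with the smallest fit residual.
import numpy as np

def lsq_residual(pixel, reference):
    """Fit pixel ~ a*reference + b; return residual sum of squares."""
    A = np.column_stack([reference, np.ones_like(reference)])
    coef, rss, *_ = np.linalg.lstsq(A, pixel, rcond=None)
    return rss[0] if rss.size else np.sum((A @ coef - pixel) ** 2)

bands = np.linspace(0.4, 2.5, 100)  # wavelengths in micrometres (assumed)
library = {
    "vegetation": np.exp(-((bands - 0.8) ** 2) / 0.02),
    "soil": 0.2 + 0.3 * bands,
}
pixel = 1.3 * library["vegetation"] + 0.05 + 0.01 * np.random.randn(100)

best = min(library, key=lambda k: lsq_residual(pixel, library[k]))
print("best match:", best)
```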
An ensemble method for extracting adverse drug events from social media.
Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi
2016-06-01
Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address the high dimensionality of the features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristic curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which avoid the feature sparsity issue, are well suited to the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
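The three combination strategies named above (majority voting, weighted averaging, and stacked generalization) can be sketched with generic scikit-learn classifiers standing in for the kernel methods; this illustrates the combination step only, not the paper's full pipeline.

```python
# Sketch of three classifier-combination strategies with stand-in models.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
base = [("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB())]

majority = VotingClassifier(base, voting="hard").fit(X, y)   # majority voting
weighted = VotingClassifier(base, voting="soft",
                            weights=[2, 1, 1]).fit(X, y)     # weighted averaging
stacked = StackingClassifier(base,
                             final_estimator=LogisticRegression()).fit(X, y)  # stacking
```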
Development of algorithms for detecting citrus canker based on hyperspectral reflectance imaging.
Li, Jiangbo; Rao, Xiuqin; Ying, Yibin
2012-01-15
Automated discrimination of fruits with canker from fruits with normal surfaces and other types of peel defects has become a helpful capability for enhancing the competitiveness and profitability of the citrus industry. Over the last several years, hyperspectral imaging technology has received increasing attention in the agricultural products inspection field. This paper studied the feasibility of classifying citrus canker against other peel conditions, including normal surface and nine peel defects, by hyperspectral imaging. A combination algorithm based on principal component analysis and the two-band ratio (Q(687/630)) method was proposed. Since fewer wavelengths were desired in order to develop a rapid multispectral imaging system, the canker classification performance of the two-band ratio (Q(687/630)) method alone was also evaluated. The proposed combination approach and the two-band ratio method alone resulted in overall classification accuracies for training and test set samples of 99.5% and 84.5%, and 98.2% and 82.9%, respectively. The proposed combination approach was more efficient for classifying canker against various conditions under reflectance hyperspectral imagery. However, the two-band ratio (Q(687/630)) method alone also demonstrated effectiveness in discriminating citrus canker from normal fruit and other peel diseases except for copper burn and anthracnose. Copyright © 2011 Society of Chemical Industry.
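The two-band ratio feature is straightforward to express in code. Here is a minimal sketch, assuming a (rows, cols, bands) reflectance cube; the band indices and threshold are placeholders, not values from the paper.

```python
# Sketch of the two-band ratio feature: divide reflectance near 687 nm by
# reflectance near 630 nm and threshold. Indices/threshold are placeholders.
import numpy as np

def band_ratio_mask(cube, i687, i630, threshold=1.1):
    """cube: hyperspectral array with shape (rows, cols, bands)."""
    ratio = cube[:, :, i687] / (cube[:, :, i630] + 1e-9)  # avoid divide-by-zero
    return ratio > threshold                               # candidate canker pixels

cube = np.random.rand(64, 64, 100)     # synthetic stand-in for an image cube
mask = band_ratio_mask(cube, i687=57, i630=43)
```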
Detecting bursts in the EEG of very and extremely premature infants using a multi-feature approach.
O'Toole, John M; Boylan, Geraldine B; Lloyd, Rhodri O; Goulding, Robert M; Vanhatalo, Sampsa; Stevenson, Nathan J
2017-07-01
To develop a method that segments preterm EEG into bursts and inter-bursts by extracting and combining multiple EEG features. Two EEG experts annotated bursts in individual EEG channels for 36 preterm infants with gestational age < 30 weeks. The feature set included spectral, amplitude, and frequency-weighted energy features. Using a consensus annotation, feature selection removed redundant features and a support vector machine combined features. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa (κ) evaluated performance within a cross-validation procedure. The proposed channel-independent method improves AUC by 4-5% over existing methods (p < 0.001, n=36), with median (95% confidence interval) AUC of 0.989 (0.973-0.997) and sensitivity-specificity of 95.8-94.4%. Agreement rates between the detector and experts' annotations, κ=0.72 (0.36-0.83) and κ=0.65 (0.32-0.81), are comparable to inter-rater agreement, κ=0.60 (0.21-0.74). Automating the visual identification of bursts in preterm EEG is achievable with a high level of accuracy. Multiple features, combined using a data-driven approach, improve on existing single-feature methods. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
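The final stage, an SVM combining the extracted features and scored by AUC, can be sketched as follows; the features and labels are random stand-ins for the spectral, amplitude, and energy features described above.

```python
# Sketch of the feature-combination stage: an SVM over multiple per-epoch
# features, evaluated by AUC. Data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 8))     # one row of features per EEG epoch
labels = rng.integers(0, 2, size=500)    # 1 = burst, 0 = inter-burst (toy)

clf = SVC(probability=True).fit(features, labels)
scores = clf.predict_proba(features)[:, 1]
print("AUC:", roc_auc_score(labels, scores))
```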
Quintana, Leonardo; Arias, Claudia; Cordoba, Jorge; Moroy, Magda; Pulido, Jean; Ramirez, Angela
2012-01-01
The aim of this study was to combine three different analytical methods from three different disciplines to diagnose the ergonomic conditions, manufacturing, and supply chain operation of a baking company. The study presents a comprehensive working method that combines ergonomics, automation, and logistics study methods in the diagnosis of working conditions and productivity. The participatory approach of this type of study, which draws on the feelings and first-hand knowledge of the workers involved in the operation, is a determining factor in defining points of action and ergonomic interventions, as well as opportunities for automation in manufacturing and logistics, to meet the needs of the company. The study identified an ergonomic problem (a high prevalence of wrist-hand pain), and the combination of interdisciplinary techniques applied allowed this condition to be improved in the company. This type of study provides an initial view of the opportunities presented by combining specialized methods from different disciplines to define comprehensive action plans for the company. Additionally, it outlines opportunities for improvement and recommendations to mitigate the burden associated with occupational diseases and ultimately improve the quality of life and productivity of workers.
Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen
2017-03-03
Mechanistic modeling has repeatedly been applied successfully in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches have to be repeated. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability for parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
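The inverse-mapping idea, training a network on simulated chromatograms so it can return model parameters from a measured one, can be illustrated with a toy simulator; the Gaussian "chromatogram" below is a stand-in for a real transport-dispersive model, and all sizes are illustrative.

```python
# Sketch of ANN-based calibration: simulate chromatograms over sampled
# parameter sets, then learn the chromatogram -> parameters mapping.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_chromatogram(params, t):
    # Toy Gaussian peak standing in for a mechanistic column model:
    # params = (retention time, peak width).
    rt, w = params
    return np.exp(-0.5 * ((t - rt) / w) ** 2)

t = np.linspace(0, 10, 200)
rng = np.random.default_rng(2)
thetas = rng.uniform([2, 0.2], [8, 1.0], size=(1000, 2))  # sampled parameters
curves = np.array([simulate_chromatogram(p, t) for p in thetas])

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(curves, thetas)
estimated = ann.predict(curves[:1])   # near-instant parameter estimation
```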
Hybrid approach for detection of dental caries based on the methods FCM and level sets
NASA Astrophysics Data System (ADS)
Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad
2017-03-01
This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each algorithm, to facilitate level set manipulation, and to lead to more robust segmentation. The sensitivity and specificity results confirm the effectiveness of the proposed method for caries detection.
DeepSynergy: predicting anti-cancer drug synergy with Deep Learning
Preuer, Kristina; Lewis, Richard P I; Hochreiter, Sepp; Bender, Andreas; Bulusu, Krishna C; Klambauer, Günter
2018-01-01
Motivation: While drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of combinatorial space. However, computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Recently, Deep Learning has had an impact in many research areas by achieving new state-of-the-art model performance. However, Deep Learning has not yet been applied to drug synergy prediction, which is the approach we present here, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input information, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies. Results: DeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods with an improvement of 7.2% over the second best method at the prediction of novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and the predicted values of DeepSynergy was 0.73. Applying DeepSynergy for classification of these novel drug combinations resulted in a high predictive performance of an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations. Availability and implementation: DeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy. Contact: klambauer@bioinf.jku.at. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253077
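A tapering ("conical") feed-forward network is easy to sketch; the layer sizes, input dimension, and training details below are illustrative assumptions, not the architecture reported in the paper.

```python
# Hedged sketch of a "conical" feed-forward regressor in the spirit of
# DeepSynergy: layer widths taper toward the output. Sizes are assumptions.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4096,)),      # concatenated chemical + genomic features
    keras.layers.Dense(2048, activation="relu"),
    keras.layers.Dense(1024, activation="relu"),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dense(1),                  # regression on the synergy score
])
model.compile(optimizer="adam", loss="mse")
```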
ERIC Educational Resources Information Center
Shannon, Kathleen
2018-01-01
This paper describes, as an alternative to the Moore Method or a purely flipped classroom, a student-driven, textbook-supported method for teaching that allows movement through the standard course material with differing depths, but the same pace. This method, which includes a combination of board work followed by class discussion, on-demand brief…
Sun, Shujuan; Li, Yan; Guo, Qiongjie; Shi, Changwen; Yu, Jinlong; Ma, Lin
2008-01-01
Combination therapy could be of use for the treatment of fungal infections, especially those caused by drug-resistant fungi. However, the methods and approaches used for data generation and result interpretation need further optimization. The fractional inhibitory concentration index (FICI) is the most commonly used method, but it has several drawbacks in characterizing antifungal drug interaction. Alternatively, some newer methods can be used, such as the ΔE model (difference between the predicted and measured fungal growth percentages) and the response surface approach, which uses the concentration-effect relationship over the whole concentration range instead of just the MIC. In the present study, in vitro interactions between tacrolimus (FK506) and three azoles, fluconazole (FLC), itraconazole (ITR), and voriconazole (VRC), against Candida albicans were evaluated by the checkerboard microdilution method and the time-kill test. The intensity of the interactions was determined by visual reading and the spectrophotometric method in a checkerboard assay, and the nature of the interactions was assessed by the nonparametric FICI and ΔE models. Colony counting and colorimetric viability detection methods (the 2,3-bis {2-methoxy-4-nitro-5-[(sulfenylamino) carbonyl]-2H-tetrazolium hydroxide} [XTT] reduction test) were used for evaluating the combined antifungal effects over time. Synergistic and indifferent effects were found for the combination of FK506 and azoles against azole-sensitive strains, while strong synergy was found against azole-resistant strains analyzed by FICI. The ΔE model gave results largely consistent with FICI. The positive interactions were also confirmed by the time-kill test. Our findings suggest a potential role for combination therapy with calcineurin pathway inhibitors and azoles to augment activity against resistant C. albicans. PMID:18056277
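The FICI itself has a simple closed form: FICI = (MIC of drug A in combination / MIC of A alone) + (MIC of drug B in combination / MIC of B alone). A minimal sketch follows, using commonly cited interpretive cut-offs (synergy at FICI ≤ 0.5, antagonism above 4); exact thresholds vary between authors.

```python
# FICI for a two-drug checkerboard: sum of the fractional MICs.
def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

value = fici(0.25, 2.0, 0.5, 8.0)   # illustrative MICs in ug/mL
label = ("synergy" if value <= 0.5
         else "antagonism" if value > 4
         else "indifference")        # common cut-offs; conventions vary
print(value, label)
```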
Assessment of the safety of foods derived from genetically modified (GM) crops.
König, A; Cockburn, A; Crevel, R W R; Debruyne, E; Grafstroem, R; Hammerling, U; Kimber, I; Knudsen, I; Kuiper, H A; Peijnenburg, A A C M; Penninks, A H; Poulsen, M; Schauzu, M; Wal, J M
2004-07-01
This paper provides guidance on how to assess the safety of foods derived from genetically modified crops (GM crops); it summarises conclusions and recommendations of Working Group 1 of the ENTRANSFOOD project. The paper provides an approach for adapting the test strategy to the characteristics of the modified crop and the introduced trait, and assessing potential unintended effects from the genetic modification. The proposed approach to safety assessment starts with the comparison of the new GM crop with a traditional counterpart that is generally accepted as safe based on a history of human food use (the concept of substantial equivalence). This case-focused approach ensures that foods derived from GM crops that have passed this extensive test-regime are as safe and nutritious as currently consumed plant-derived foods. The approach is suitable for current and future GM crops with more complex modifications. First, the paper reviews test methods developed for the risk assessment of chemicals, including food additives and pesticides, discussing which of these methods are suitable for the assessment of recombinant proteins and whole foods. Second, the paper presents a systematic approach to combine test methods for the safety assessment of foods derived from a specific GM crop. Third, the paper provides an overview on developments in this area that may prove of use in the safety assessment of GM crops, and recommendations for research priorities. It is concluded that the combination of existing test methods provides a sound test-regime to assess the safety of GM crops. Advances in our understanding of molecular biology, biochemistry, and nutrition may in future allow further improvement of test methods that will over time render the safety assessment of foods even more effective and informative. Copyright 2004 Elsevier Ltd.
Mixed Methods in Biomedical and Health Services Research
Curry, Leslie A.; Krumholz, Harlan M.; O’Cathain, Alicia; Plano Clark, Vicki L.; Cherlin, Emily; Bradley, Elizabeth H.
2013-01-01
Mixed methods studies, in which qualitative and quantitative methods are combined in a single program of inquiry, can be valuable in biomedical and health services research, where the complementary strengths of each approach can yield greater insight into complex phenomena than either approach alone. Although interest in mixed methods is growing among science funders and investigators, written guidance on how to conduct and assess rigorous mixed methods studies is not readily accessible to the general readership of peer-reviewed biomedical and health services journals. Furthermore, existing guidelines for publishing mixed methods studies are not well known or applied by researchers and journal editors. Accordingly, this paper is intended to serve as a concise, practical resource for readers interested in core principles and practices of mixed methods research. We briefly describe mixed methods approaches and present illustrations from published biomedical and health services literature, including in cardiovascular care, summarize standards for the design and reporting of these studies, and highlight four central considerations for investigators interested in using these methods. PMID:23322807
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedegård, Erik Donovan, E-mail: erik.hedegard@phys.chem.ethz.ch; Knecht, Stefan; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch
2015-06-14
We present a new hybrid multiconfigurational method based on the concept of range-separation that combines the density matrix renormalization group approach with density functional theory. This new method is designed for the simultaneous description of dynamical and static electron-correlation effects in multiconfigurational electronic structure problems.
Group Inquiry Techniques for Teaching Writing.
ERIC Educational Resources Information Center
Hawkins, Thom
The small size of college composition classes encourages exciting and meaningful interaction, especially when students are divided into smaller, autonomous groups for all or part of the hour. This booklet discusses the advantages of combining the inquiry method (sometimes called the discovery method) with a group approach and describes specific…
More on Chemical Reaction Balancing.
ERIC Educational Resources Information Center
Swinehart, D. F.
1985-01-01
A previous article stated that only the matrix method was powerful enough to balance a particular chemical equation. Shows how this equation can be balanced without using the matrix method. The approach taken involves writing partial mathematical reactions and redox half-reactions, and combining them to yield the final balanced reaction. (JN)
USDA-ARS?s Scientific Manuscript database
Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization integrated approaches combining different chemical, biological and in silico methods are recommended to r...
Virtualising the Quantitative Research Methods Course: An Island-Based Approach
ERIC Educational Resources Information Center
Baglin, James; Reece, John; Baker, Jenalle
2015-01-01
Many recent improvements in pedagogical practice have been enabled by the rapid development of innovative technologies, particularly for teaching quantitative research methods and statistics. This study describes the design, implementation, and evaluation of a series of specialised computer laboratory sessions. The sessions combined the use of an…
MOLECULAR TRACKING FECAL CONTAMINATION IN SURFACE WATERS: 16S RDNA VERSUS METAGENOMICS APPROACHES
Microbial source tracking methods need to be sensitive and exhibit temporal and geographic stability in order to provide meaningful data in field studies. The objective of this study was to use a combination of PCR-based methods to track cow fecal contamination in two watersheds....
Biosynthesis and genetic encoding of phosphothreonine through parallel selection and deep sequencing
Huguenin-Dezot, Nicolas; Liang, Alexandria D.; Schmied, Wolfgang H.; Rogerson, Daniel T.; Chin, Jason W.
2017-01-01
The phosphorylation of threonine residues in proteins regulates diverse processes in eukaryotic cells, and thousands of threonine phosphorylations have been identified. An understanding of how threonine phosphorylation regulates biological function will be accelerated by general methods to biosynthesize defined phospho-proteins. Here we address limitations in current methods for discovering aminoacyl-tRNA synthetase/tRNA pairs for incorporating non-natural amino acids into proteins, by combining parallel positive selections with deep sequencing and statistical analysis, to create a rapid approach for directly discovering aminoacyl-tRNA synthetase/tRNA pairs that selectively incorporate non-natural substrates. Our approach is scalable and enables the direct discovery of aminoacyl-tRNA synthetase/tRNA pairs with mutually orthogonal substrate specificity. We biosynthesize phosphothreonine in cells, and use our new selection approach to discover a phosphothreonyl-tRNA synthetase/tRNA(CUA) pair. By combining these advances we create an entirely biosynthetic route to incorporating phosphothreonine in proteins and biosynthesize several phosphoproteins, enabling phosphoprotein structure determination and synthetic protein kinase activation. PMID:28553966
Baudart, Julia; Coallier, Josée; Laurent, Patrick; Prévost, Michèle
2002-01-01
Water quality assessment involves the specific, sensitive, and rapid detection of bacterial indicators and pathogens in water samples, including viable but nonculturable (VBNC) cells. This work evaluates the specificity and sensitivity of a new method which combines a fluorescent in situ hybridization (FISH) approach with a physiological assay (direct viable count [DVC]) for the direct enumeration, at the single-cell level, of highly diluted viable cells of members of the family Enterobacteriaceae in freshwater and drinking water after membrane filtration. The approach (DVC-FISH) uses a new direct detection device, the laser scanning cytometer (Scan RDI). Combining the DVC-FISH method on a membrane with Scan RDI detection makes it possible to detect as few as one targeted cell in approximately 10⁸ nontargeted cells spread over the membrane. The ability of this new approach to detect and enumerate VBNC enterobacterial cells in freshwater and drinking water distribution systems was investigated and is discussed. PMID:12324357
Moon, Andrea F; Mueller, Geoffrey A; Zhong, Xuejun; Pedersen, Lars C
2010-01-01
Protein crystallographers are often confronted with recalcitrant proteins not readily crystallizable, or which crystallize in problematic forms. A variety of techniques have been used to surmount such obstacles: crystallization using carrier proteins or antibody complexes, chemical modification, surface entropy reduction, proteolytic digestion, and additive screening. Here we present a synergistic approach for successful crystallization of proteins that do not form diffraction quality crystals using conventional methods. This approach combines favorable aspects of carrier-driven crystallization with surface entropy reduction. We have generated a series of maltose binding protein (MBP) fusion constructs containing different surface mutations designed to reduce surface entropy and encourage crystal lattice formation. The MBP advantageously increases protein expression and solubility, and provides a streamlined purification protocol. Using this technique, we have successfully solved the structures of three unrelated proteins that were previously unattainable. This crystallization technique represents a valuable rescue strategy for protein structure solution when conventional methods fail. PMID:20196072
Image enhancement and color constancy for a vehicle-mounted change detection system
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Monnin, David
2016-10-01
Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, color and lightness inconsistencies caused by the different illumination conditions must be compensated. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared with either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
Groves, Ethan; Palenik, Skip; Palenik, Christopher S
2018-04-18
While color is arguably the most important optical property of evidential fibers, the actual dyestuffs responsible for its expression in them are, in forensic trace evidence examinations, rarely analyzed and still less often identified. This is due, primarily, to the exceedingly small quantities of dye present in a single fiber as well as to the fact that dye identification is a challenging analytical problem, even when large quantities are available for analysis. Among the practical reasons for this are the wide range of dyestuffs available (and the even larger number of trade names), the low total concentration of dyes in the finished product, the limited amount of sample typically available for analysis in forensic cases, and the complexity of the dye mixtures that may exist within a single fiber. Literature on the topic of dye analysis is often limited to a specific method, subset of dyestuffs, or an approach that is not applicable given the constraints of a forensic analysis. Here, we present a generalized approach to dye identification that (1) combines several robust analytical methods, (2) is broadly applicable to a wide range of dye chemistries, application classes, and fiber types, and (3) can be scaled down to forensic casework-sized samples. The approach is based on the development of a reference collection of 300 commercially relevant textile dyes that have been characterized by a variety of microanalytical methods (HPTLC, Raman microspectroscopy, infrared microspectroscopy, UV-Vis spectroscopy, and visible microspectrophotometry). Although there is no single approach that is applicable to all dyes on every type of fiber, a combination of these analytical methods has been applied using a reproducible approach that permits the use of reference libraries to constrain the identity of and, in many cases, identify the dye (or dyes) present in a textile fiber sample.
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
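The redefined metric, a weighted sum of centroid distance and a road-network term, can be sketched as follows; the weight, the road-network lookup, and the data layout are illustrative assumptions rather than the paper's definitions.

```python
# Sketch of a weighted-sum ZIP proximity metric: blend centroid distance
# with a road-network term. Data layout and weight are placeholders.
import math

def combined_distance(zip_a, zip_b, w=0.7):
    d_centroid = math.dist(zip_a["centroid"], zip_b["centroid"])
    # Hypothetical road-network term; fall back to centroid distance
    # when no shared road links are recorded.
    d_road = zip_a["road_links"].get(zip_b["code"], d_centroid)
    return w * d_centroid + (1 - w) * d_road

a = {"code": "98101", "centroid": (0.0, 0.0), "road_links": {"98102": 1.2}}
b = {"code": "98102", "centroid": (1.0, 1.0), "road_links": {}}
print(combined_distance(a, b))
```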
Urban Multisensory Laboratory, AN Approach to Model Urban Space Human Perception
NASA Astrophysics Data System (ADS)
González, T.; Sol, D.; Saenz, J.; Clavijo, D.; García, H.
2017-09-01
An urban sensory lab (USL, or LUS by its Spanish acronym) is a new, avant-garde approach to studying and analyzing a city. This approach enables the development of new methodologies to identify the emotional response of public space users. The laboratory combines qualitative analysis proposed by urbanists with quantitative measures managed by data analysis applications. USL is a new approach that pushes beyond the current borders of urban knowledge. The design thinking strategy allows us to implement methods to understand the results provided by our technique. In this first approach, the interpretation is done by hand; however, our goal is to combine design thinking and machine learning in order to analyze the qualitative and quantitative data automatically. The results are now being used by students in the Urbanism and Architecture courses to gain a better understanding of public spaces in Puebla, Mexico, and their interaction with people.
NASA Astrophysics Data System (ADS)
Jagadeesha, C. B.
2017-12-01
Even though friction stir welding (FSW) was invented as long ago as 1991 by TWI, England, no method, procedure, or approach has yet been developed that quickly yields the optimum or exact parameters for a good, sound weld. An approach has been developed in which an equation is derived that gives an approximate rpm; by setting a range of ±50-100 rpm around this approximate value and fixing the welding speed at 50 or 60 mm/min, one can conduct FSW experiments that converge quickly on the optimum parameters, i.e., the rpm and welding speed that yield a sound weld. This approach can be used effectively to obtain sound welds for all similar and dissimilar combinations of materials such as steel, Al, Mg, and Ti.
NASA Astrophysics Data System (ADS)
Grosenick, Dirk; Cantow, Kathleen; Arakelyan, Karen; Wabnitz, Heidrun; Flemming, Bert; Skalweit, Angela; Ladwig, Mechthild; Macdonald, Rainer; Niendorf, Thoralf; Seeliger, Erdmann
2015-07-01
We have developed a hybrid approach to investigate the dynamics of perfusion and oxygenation in the kidney of rats under pathophysiologically relevant conditions. Our approach combines near-infrared spectroscopy to quantify hemoglobin concentration and oxygen saturation in the renal cortex, and an invasive probe method for measuring total renal blood flow by an ultrasonic probe, perfusion by laser-Doppler fluxmetry, and tissue oxygen tension via fluorescence quenching. Hemoglobin concentration and oxygen saturation were determined from experimental data by a Monte Carlo model. The hybrid approach was applied to investigate and compare temporal changes during several types of interventions such as arterial and venous occlusions, as well as hyperoxia, hypoxia and hypercapnia induced by different mixtures of the inspired gas. The approach was also applied to study the effects of the x-ray contrast medium iodixanol on the kidney.
Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)
NASA Astrophysics Data System (ADS)
OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.
2016-02-01
Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high-skill predictions; however, even higher-skill predictions can be produced if forecast model ensembles are combined correctly. While many methods for weighting models exist, choosing the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km² region, and day given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach to produce extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models. The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach have the highest rank probability skill score most often.
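One simple way to realize hindcast-learned weights is a non-negative least-squares fit of observations onto the member forecasts; this collapses the per-condition, per-region, per-day weighting described above into a single global weight vector for brevity, and all data are synthetic.

```python
# Sketch: learn non-negative blend weights for three ensemble members
# from 30 years of monthly hindcasts, then apply them to new forecasts.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
hindcasts = rng.normal(size=(360, 3))              # 30 yrs x 12 mo, 3 models
observed = hindcasts @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.normal(size=360)

weights, _ = nnls(hindcasts, observed)             # non-negative least squares
blended = hindcasts @ weights                      # combined forecast
```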
[Introduction of active learning and student readership in teaching by the pharmaceutical faculty].
Sekiguchi, Masaki; Yamato, Ippei; Kato, Tetsuta; Torigoe, Kojyun
2005-07-01
We have introduced improvements and new approaches into our teaching methods by adopting four active learning methods for first-year pharmacy students. The four teaching methods, used in lessons or as take-home assignments, are as follows: 1) problem-based learning (clinical cases), including student presentations of the clinical case; 2) schematic drawings of the human organs, one drawing done in 15-20 min during the week following a lecture and a second drawing done with reference to a professional textbook; 3) learning of professional themes in take-home assignments; and 4) short paper-based or computer-based tests to confirm the understanding of technical terms. These improvements and new methods provide active approaches for pharmacy students (as opposed to passive memorization of words and image study). In combination, they have proven useful as a learning method for acquiring expert knowledge and for moving pharmacy students from a passive to an active learning approach in the classroom.
Gogate, Parag R; Patil, Pankaj N
2015-07-01
The present work highlights a novel approach combining hydrodynamic cavitation and advanced oxidation processes for wastewater treatment. The initial part of the work presents a critical analysis of the literature on combined approaches based on hydrodynamic cavitation, followed by a case study of triazophos degradation using different approaches. Combinations of hydrodynamic cavitation with the Fenton chemistry, advanced Fenton chemistry, ozonation, photocatalytic oxidation, and the use of hydrogen peroxide are analyzed, with recommendations for important design parameters. Subsequently, the degradation of triazophos pesticide in aqueous solution (a 20 ppm solution of commercially available triazophos pesticide) was investigated using hydrodynamic cavitation and ozonation operated individually and in combination for the first time. The effects of different operating parameters, such as inlet pressure (1-8 bar) and initial pH (2.5-8), were investigated first. The effect of adding Fenton's reagent at different loadings on the extent of degradation was also investigated. The combined method of hydrodynamic cavitation and ozone was studied using two approaches, injecting ozone in the solution tank and at the orifice (at flow rates of 0.576 g/h and 1.95 g/h). About 50% degradation of triazophos was achieved by hydrodynamic cavitation alone under optimized operating parameters. About 80% degradation was achieved by the combination of hydrodynamic cavitation and Fenton's reagent, whereas complete degradation was achieved using the combination of hydrodynamic cavitation and ozonation. TOC removal of 96% was also obtained for the combination of ozone and hydrodynamic cavitation, making it the best treatment strategy for removal of triazophos. Copyright © 2014 Elsevier B.V. All rights reserved.
Forecasting peaks of seasonal influenza epidemics.
Nsoesie, Elaine; Mararthe, Madhav; Brownstein, John
2013-06-21
We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.
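The "simple root finding" step can be illustrated directly: once the simulation has produced a fitted incidence curve f(t), the peak is where f'(t) crosses zero, which a bracketing root finder locates. The fitted curve below is a placeholder, not the paper's individual-based model.

```python
# Sketch: locate an epidemic peak as the zero of the fitted curve's
# derivative. The fitted incidence curve is a stand-in.
import numpy as np
from scipy.optimize import brentq

def fitted_incidence(t):            # placeholder fitted epidemic curve
    return np.exp(-0.5 * ((t - 20.0) / 4.0) ** 2)

def d_incidence(t, h=1e-4):         # central-difference derivative
    return (fitted_incidence(t + h) - fitted_incidence(t - h)) / (2 * h)

# The bracket [5, 35] spans the derivative's sign change at the peak.
peak_week = brentq(d_incidence, 5.0, 35.0)
print(peak_week)                    # ~20 for this toy curve
```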
Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni
2011-08-01
The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.
Hansen, Matthew; O’Brien, Kerth; Meckler, Garth; Chang, Anna Marie; Guise, Jeanne-Marie
2016-01-01
Mixed methods research has significant potential to broaden the scope of emergency care and specifically emergency medical services investigation. Mixed methods studies involve the coordinated use of qualitative and quantitative research approaches to gain a fuller understanding of practice. By combining what is learnt from multiple methods, these approaches can help to characterise complex healthcare systems, identify the mechanisms of complex problems such as medical errors and understand aspects of human interaction such as communication, behaviour and team performance. Mixed methods approaches may be particularly useful for out-of-hospital care researchers because care is provided in complex systems where equipment, interpersonal interactions, societal norms, environment and other factors influence patient outcomes. The overall objectives of this paper are to (1) introduce the fundamental concepts and approaches of mixed methods research and (2) describe the interrelation and complementary features of the quantitative and qualitative components of mixed methods studies using specific examples from the Children’s Safety Initiative-Emergency Medical Services (CSI-EMS), a large National Institutes of Health-funded research project conducted in the USA. PMID:26949970
Combining of different data pools for calculating a reliable POD for real defects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanzler, Daniel, E-mail: daniel.kanzler@bam.de, E-mail: christina.mueller@bam.de; Müller, Christina, E-mail: daniel.kanzler@bam.de, E-mail: christina.mueller@bam.de; Pitkänen, Jorma, E-mail: jorma.pitkanen@posiva.fi
2015-03-31
Real defects are essential for the evaluation of the reliability of non destructive testing (NDT) methods, especially in relation to the integrity of components. But in most of the cases the amount of available real defects is not sufficient to evaluate the system. Model-assisted and transfer functions are one way to handle that challenge. This study is focused on a combination of different data pools to create a sufficient amount of data for the reliability estimation. A widespread approach for calculating the Probability of Detection (POD) was used on a radiographic testing (RT) method. The highest contrast to noise ratio (CNR) of each indication is usually selected as the signal in the 'â vs. a' (signal-response) approach for RT. By combining real and artificial defects (flat bottom holes, side drill holes, flat bottom squares, notches, etc.) in RT the highest signals are close to each other, but the process of creating and evaluating real defects is much more complex. The solution is seen in the combination of real and artificial data using a weighted least squares approach. The weights for real or artificial data were based on the importance, the value, and the different detection behavior of the different data. For comparison, the alternative combination through Bayesian updating was also applied. As verification, a data pool with a large amount of real data was available. In an advanced approach for evaluating the digital RT data, the size of the indication (perpendicular to the X-ray beam) was introduced as additional information. The signal now consists of the CNR and the area of the indication. The detectability changes depending on the area of the indication, a fact that was ignored in the previous POD calculations for RT. This points out that a weighted least squares approach to pool the data might no longer be adequate. The Bayesian updating of the estimated parameters of the relationship between the signal field (the area of the indication) and the geometry of the defects is seen as the appropriate model to combine the different defect types in a useful and meaningful way. This work was carried out together with the Finnish company for spent nuclear fuel and waste management, Posiva Oy. The digital RT is one of the NDT methods that might be used for the inspection of the weld of the copper canister to be used for the spent nuclear fuel in the Scandinavian concept of final disposal.
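The weighted least-squares pooling of real and artificial defects can be sketched in the usual 'â vs. a' framework: regress log signal on log defect size with per-point weights, then derive a POD curve from the fit and a decision threshold. All numbers below are illustrative, including the down-weighting of artificial defects.

```python
# Sketch of a weighted 'â vs. a' POD fit pooling two data sources.
import numpy as np
from scipy.stats import norm

a = np.array([0.5, 1.0, 2.0, 4.0, 1.5, 3.0])       # defect sizes (mm)
a_hat = np.array([1.1, 2.3, 4.0, 8.5, 3.1, 6.2])   # CNR-based signals
w = np.array([1.0, 1.0, 1.0, 1.0, 0.3, 0.3])       # real = 1.0, artificial = 0.3

# Weighted linear fit of log(a_hat) on log(a); polyfit returns slope first.
slope, intercept = np.polyfit(np.log(a), np.log(a_hat), 1, w=w)
resid = np.log(a_hat) - (intercept + slope * np.log(a))
sigma = np.sqrt(np.average(resid**2, weights=w))    # residual scatter

threshold = 1.5                                     # detection threshold on â
pod = norm.cdf((intercept + slope * np.log(a) - np.log(threshold)) / sigma)
```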
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time-response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The most accurate method consists of (1) filtering flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
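Step (2), the least-squares derivative estimate, reduces to a linear regression of measured state rates on states and controls; a minimal sketch with synthetic data follows (the digital/Kalman filtering of step (1) and the Bayesian refinement of step (3) are omitted).

```python
# Sketch: least-squares estimate of stability/control derivatives from
# xdot = A x + B u, using stacked regressors. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=(500, 6))           # six filtered rigid-body states
u = rng.normal(size=(500, 4))           # four control inputs
true_theta = rng.normal(size=(10, 6))   # stacked transposed [A^T; B^T]
xdot = np.hstack([x, u]) @ true_theta + 0.01 * rng.normal(size=(500, 6))

theta, *_ = np.linalg.lstsq(np.hstack([x, u]), xdot, rcond=None)
A_est, B_est = theta[:6].T, theta[6:].T  # recovered derivative matrices
```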
NASA Astrophysics Data System (ADS)
Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele
2008-11-01
Microwave tomography is a non-invasive approach to the early diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach on synthetic near-field data produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one representing the fat tissue.
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
The Nucleon-Mission: A New Approach to Cosmic Rays Investigation
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Bashindzhagyan, G.; Bashindzhagyan, P.; Chilingarian, A.; Donnelly, J.; Drury, L.; Egorov, N.; Golubkov, S.; Grebenyuk, V.; Hasebe, N.;
2001-01-01
A new approach to cosmic ray investigation is proposed. The main idea is to combine two experimental methods (KLEM and UHIS) in the NUCLEON Project. KLEM (Kinematic Lightweight Energy Meter) is aimed at studying the chemical composition and elemental energy spectra of galactic CRs in the extremely wide energy range 10(exp 11) - 10(exp 16) eV. UHIS (Ultra Heavy Isotope Spectrometer) is intended for registering fluxes of ultra-heavy CR nuclei beyond the iron peak. The combination of the two techniques would give a unique instrument with a number of advantages.
Reduction method with system analysis for multiobjective optimization-based design
NASA Technical Reports Server (NTRS)
Azarm, S.; Sobieszczanski-Sobieski, J.
1993-01-01
An approach for reducing the number of variables and constraints, which is combined with System Analysis Equations (SAE), for multiobjective optimization-based design is presented. In order to develop a simplified analysis model, the SAE is computed outside an optimization loop and then approximated for use by an operator. Two examples are presented to demonstrate the approach.
Pupils' Views of Religious Education in a Pluralistic Educational Context
ERIC Educational Resources Information Center
Kuusisto, Arniika; Kallioniemi, Arto
2014-01-01
This article examines Finnish pupils' views of religious education (RE) in a pluralistic educational context. The focus is on pupils' views of the aims and different approaches to RE in a multi-faith school. The study utilised a mixed method approach, combining quantitative and qualitative data. It employed a survey (n = 1301) and interviews (n =…
The Physics of Music with Interdisciplinary Approach: A Case of Prospective Music Teachers
ERIC Educational Resources Information Center
Turna, Özge; Bolat, Mualla
2016-01-01
Physics of music is an area that is covered by interdisciplinary approach. In this study it is aimed to determine prospective music teachers' level of association with physics concepts which are related to music. The research is a case study which combines qualitative and quantitative methods. Eighty-four students who were studying at the…
Iliyasu, Abdullah M; Fatichah, Chastine
2017-12-19
A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smear (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles), a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria in terms of classification accuracy based on the choice of best features and in terms of the different categories of the cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared the classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between the QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifested in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.
NASA Astrophysics Data System (ADS)
Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.
2017-12-01
This paper presents a newly proposed snow depth retrieval approach for mountainous deep snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the newly proposed method utilizes single-flight observations and deploys snow hydrologic models. This method is promising because satellite-based retrieval methods have difficulty estimating snow depth due to their coarse resolution and computational effort. The approach consists of a particle filter using combinations of multiple PM frequencies and a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and the capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm using a combination of four PM frequencies (10.7, 18.7, 37.0, and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method displayed sensitivity to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model), and estimation methods (particle filter and Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
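The particle-filter update at the core of such a retrieval can be illustrated in a few lines: each particle is a candidate snow depth, a placeholder observation operator maps depth to brightness temperature (standing in for a real emission model), and Gaussian likelihood weights precede resampling.

```python
# Toy particle-filter update for a snow-depth retrieval.
import numpy as np

rng = np.random.default_rng(5)
particles = rng.uniform(0.2, 3.0, size=1000)       # prior snow depths (m)

def tb_model(depth):                                # placeholder radiative model
    return 260.0 - 15.0 * depth                     # brightness temperature (K)

tb_obs, tb_sigma = 235.0, 2.0                       # observed Tb and its noise
w = np.exp(-0.5 * ((tb_model(particles) - tb_obs) / tb_sigma) ** 2)
w /= w.sum()                                        # normalized likelihood weights

posterior_mean = np.sum(w * particles)              # updated depth estimate
resampled = rng.choice(particles, size=1000, p=w)   # simple multinomial resampling
```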
The most effective strategy for controlling pests in your lawn and garden may be to combine methods in an approach known as Integrated Pest Management. See videos and find tips for implementing IPM at your residence.
Multiple-Symbol Noncoherent Decoding of Uncoded and Convolutionally Coded Continuous Phase Modulation
NASA Technical Reports Server (NTRS)
Divsalar, D.; Raphaeli, D.
2000-01-01
Recently, a method for combined noncoherent detection and decoding of trellis-codes (noncoherent coded modulation) has been proposed, which can practically approach the performance of coherent detection.
ERIC Educational Resources Information Center
Quennerstedt, Mikael; Annerstedt, Claes; Barker, Dean; Karlefors, Inger; Larsson, Håkan; Redelius, Karin; Öhman, Marie
2014-01-01
This paper outlines a method for exploring learning in educational practice. The suggested method combines an explicit learning theory with robust methodological steps in order to explore aspects of learning in school physical education. The design of the study is based on sociocultural learning theory, and the approach adds to previous research…
A new diagnostic approach to popliteal artery entrapment syndrome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Charles; Kennedy, Dominic; Bastian-Jordan, Matthew
A new method of diagnosing and defining functional popliteal artery entrapment syndrome is described. By combining ultrasonography and magnetic resonance imaging techniques with dynamic plantarflexion of the ankle against resistance, functional entrapment can be demonstrated and the location of the arterial occlusion identified. This combination of imaging modalities will also define muscular anatomy for guiding intervention such as surgery or Botox injection.
ERIC Educational Resources Information Center
Alorda, B.; Suenaga, K.; Pons, P.
2011-01-01
This paper reports on the design, implementation, and assessment of a new course structure based on the combination of three cooperative methodologies. The main goal is to reduce the percentage of non-passing students by focusing the learning process on students, offering different alternatives and motivational activities based on working in…
NASA Astrophysics Data System (ADS)
Jones, Lisa M.; Zhang, Hao; Cui, Weidong; Kumar, Sandeep; Sperry, Justin B.; Carroll, James A.; Gross, Michael L.
2013-06-01
As therapeutic monoclonal antibodies (mAbs) become a major focus in biotechnology and a source of the next-generation drugs, new analytical methods or combination methods are needed for monitoring changes in higher order structure and effects of post-translational modifications. The complexity of these molecules and their vulnerability to structural change provide a serious challenge. We describe here the use of complementary mass spectrometry methods that not only characterize mutant mAbs but also may provide a general framework for characterizing higher order structure of other protein therapeutics and biosimilars. To frame the challenge, we selected members of the IgG2 subclass that have distinct disulfide isomeric structures as a model to evaluate an overall approach that uses ion mobility, top-down MS sequencing, and protein footprinting in the form of fast photochemical oxidation of proteins (FPOP). These three methods are rapid, sensitive, respond to subtle changes in conformation of Cys → Ser mutants of an IgG2, each representing a single disulfide isoform, and may be used in series to probe higher order structure. The outcome suggests that this approach of using various methods in combination can assist the development and quality control of protein therapeutics.
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
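To make the combination step concrete, the following is a minimal sketch of the linear-combination idea described above, using synthetic stand-ins for the per-method connectivity scores and ground truth (both hypothetical; the paper's actual inference methods and data are not reproduced here):

```python
import numpy as np

# Rows: candidate synapses; columns: scores from individual inference
# methods (e.g., signed mutual information, frequency-based statistics).
rng = np.random.default_rng(0)
scores = rng.random((1000, 3))                   # hypothetical per-method scores
truth = (rng.random(1000) < 0.1).astype(float)   # hypothetical ground truth

# Fit linear-combination weights on a training split by least squares.
train = slice(0, 800)
w, *_ = np.linalg.lstsq(scores[train], truth[train], rcond=None)

# Ensemble prediction = weighted sum of the individual method scores.
ensemble = scores[800:] @ w
print("learned weights:", w)
```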
Thompson, E.M.; Wald, D.J.
2012-01-01
Despite obvious limitations as a proxy for site amplification, the use of time-averaged shear-wave velocity over the top 30 m (VS30) remains widely practiced, most notably through its use as an explanatory variable in ground motion prediction equations (and thus hazard maps and ShakeMaps, among other applications). As such, we are developing an improved strategy for producing VS30 maps given the common observational constraints. Using the abundant VS30 measurements in Taiwan, we compare alternative mapping methods that combine topographic slope, surface geology, and spatial correlation structure. The different VS30 mapping algorithms are distinguished by the way that slope and geology are combined to define a spatial model of VS30. We consider the globally applicable slope-only model as a baseline to which we compare two methods of combining both slope and geology. For both hybrid approaches, we model spatial correlation structure of the residuals using the kriging-with-a-trend technique, which brings the map into closer agreement with the observations. Cross validation indicates that we can reduce the uncertainty of the VS30 map by up to 16% relative to the slope-only approach.
Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo
2017-01-01
We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, palmprint features are extracted using the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
Dubourg, Grégory; Chaudet, Hervé; Lagier, Jean-Christophe; Raoult, Didier
2018-03-01
Describing the human gut microbiota is one of the most exciting challenges of the 21st century. Currently, high-throughput sequencing methods are considered the gold standard for this purpose; however, they suffer from several drawbacks, including their inability to detect minority populations. The advent of mass-spectrometric (MS) approaches to identify cultured bacteria in clinical microbiology enabled the creation of the culturomics approach, which aims to establish a comprehensive repertoire of cultured prokaryotes from human specimens using extensive culture conditions. Areas covered: This review first underlines how mass spectrometric approaches have revolutionized clinical microbiology. It then highlights the contribution of MS-based methods to culturomics studies, paying particular attention to the extension of the human gut microbiota repertoire through the discovery of new bacterial species. Expert commentary: MS-based approaches have enabled cultivation methods to be resuscitated to study the human gut microbiota and thus to fill in the blanks left by high-throughput sequencing methods in terms of culturing minority populations. Continued efforts to recover new taxa using culture methods, combined with their rapid implementation in genomic databases, would allow for an exhaustive analysis of the gut microbiota through the use of a comprehensive approach.
A Cognitive Computing Approach for Classification of Complaints in the Insurance Industry
NASA Astrophysics Data System (ADS)
Forster, J.; Entrup, B.
2017-10-01
In this paper we present and evaluate a cognitive computing approach for the classification of dissatisfaction and four complaint-specific classes in correspondence documents between insurance clients and an insurance company. A cognitive computing approach combines classical natural language processing methods, machine learning algorithms and the evaluation of hypotheses. The approach combines a MaxEnt machine learning algorithm with language modelling, tf-idf and sentiment analytics to create a multi-label text classification model. The model is trained and tested with a set of 2500 original insurance communication documents written in German, which have been manually annotated by the partnering insurance company. With an F1-score of 0.9, a reliable text classification component has been implemented and evaluated. A final outlook towards a cognitive computing insurance assistant is given at the end.
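A minimal sketch of the tf-idf plus MaxEnt combination for multi-label text classification follows. MaxEnt is equivalent to (multinomial) logistic regression, so scikit-learn's LogisticRegression stands in for it here; the toy English documents and label names are hypothetical, not the paper's German corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical toy corpus standing in for the insurance correspondence.
docs = ["delayed claim payment", "rude agent on the phone",
        "premium increased without notice", "claim denied twice and late"]
labels = [["delay"], ["service"], ["pricing"], ["delay", "service"]]

y = MultiLabelBinarizer().fit_transform(labels)
# One-vs-rest turns the MaxEnt classifier into a multi-label model
# over tf-idf features.
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(docs, y)
print(clf.predict(["my claim payment is late"]))
```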
Numerical integration of discontinuous functions: moment fitting and smart octree
NASA Astrophysics Data System (ADS)
Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander
2017-11-01
A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in less integration points and in high accuracy.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
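As a small illustration of one ingredient of the experimental design named above, the sketch below draws supporting points from a Sobol' sequence with SciPy's quasi-Monte Carlo module (the design-variable bounds are hypothetical, and Bucher's design, which the paper combines with the Sobol' points, is not reproduced):

```python
from scipy.stats import qmc

# Generate 2**5 = 32 Sobol' points in the unit square.
sampler = qmc.Sobol(d=2, scramble=True, seed=42)
unit_points = sampler.random_base2(m=5)

# Scale to the design-variable ranges (hypothetical bounds).
points = qmc.scale(unit_points, l_bounds=[0.5, 10.0], u_bounds=[2.0, 50.0])
print(points[:3])
```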
Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation.
Roy, Snehashis; He, Qing; Sweeney, Elizabeth; Carass, Aaron; Reich, Daniel S; Prince, Jerry L; Pham, Dzung L
2015-09-01
Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consists of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on the application of whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the art approaches.
Gabb, Henry A; Blake, Catherine
2016-08-01
Simultaneous or sequential exposure to multiple environmental stressors can affect chemical toxicity. Cumulative risk assessments consider multiple stressors but it is impractical to test every chemical combination to which people are exposed. New methods are needed to prioritize chemical combinations based on their prevalence and possible health impacts. We introduce an informatics approach that uses publicly available data to identify chemicals that co-occur in consumer products, which account for a significant proportion of overall chemical load. Fifty-five asthma-associated and endocrine disrupting chemicals (target chemicals) were selected. A database of 38,975 distinct consumer products and 32,231 distinct ingredient names was created from online sources, and PubChem and the Unified Medical Language System were used to resolve synonymous ingredient names. Synonymous ingredient names are different names for the same chemical (e.g., vitamin E and tocopherol). Nearly one-third of the products (11,688 products, 30%) contained ≥ 1 target chemical and 5,229 products (13%) contained > 1. Of the 55 target chemicals, 31 (56%) appear in ≥ 1 product and 19 (35%) appear under more than one name. The most frequent three-way chemical combination (2-phenoxyethanol, methyl paraben, and ethyl paraben) appears in 1,059 products. Further work is needed to assess combined chemical exposures related to the use of multiple products. The informatics approach increased the number of products considered in a traditional analysis by two orders of magnitude, but missing/incomplete product labels can limit the effectiveness of this approach. Such an approach must resolve synonymy to ensure that chemicals of interest are not missed. Commonly occurring chemical combinations can be used to prioritize cumulative toxicology risk assessments. Gabb HA, Blake C. 2016. An informatics approach to evaluating combined chemical exposures from consumer products: a case study of asthma-associated chemicals and potential endocrine disruptors. Environ Health Perspect 124:1155-1165; http://dx.doi.org/10.1289/ehp.1510529.
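The core counting step of such an informatics approach can be sketched in a few lines, assuming synonym resolution has already mapped each ingredient name to one canonical chemical name (the product records below are hypothetical):

```python
from collections import Counter
from itertools import combinations

# Each product is a set of canonical ingredient names.
products = [
    {"2-phenoxyethanol", "methyl paraben", "ethyl paraben"},
    {"2-phenoxyethanol", "methyl paraben"},
    {"tocopherol", "methyl paraben", "ethyl paraben", "2-phenoxyethanol"},
]
targets = {"2-phenoxyethanol", "methyl paraben", "ethyl paraben"}

# Count three-way co-occurrences of target chemicals across products.
triples = Counter()
for ingredients in products:
    hits = sorted(ingredients & targets)
    triples.update(combinations(hits, 3))

print(triples.most_common(1))
```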
An Ensemble Approach for Improved Short-to-Intermediate-Term Seismic Potential Evaluation
NASA Astrophysics Data System (ADS)
Yu, Huaizhong; Zhu, Qingyong; Zhou, Faren; Tian, Lei; Zhang, Yongxian
2017-06-01
Pattern informatics (PI), load/unload response ratio (LURR), state vector (SV), and accelerating moment release (AMR) are four previously unrelated subjects, which are sensitive, in varying ways, to the earthquake's source. Previous studies have indicated that the spatial extent of the stress perturbation caused by an earthquake scales with the moment of the event, allowing us to combine these methods for seismic hazard evaluation. The long-range earthquake forecasting method PI is applied to search for seismic hotspots and identify the areas where large earthquakes could be expected. The LURR and SV methods are then adopted to assess short-to-intermediate-term seismic potential in each of the critical regions derived from the PI hotspots, while the AMR method is used to provide asymptotic estimates of the time and magnitude of the potential earthquakes. This new approach, combining the LURR, SV and AMR methods with the identified areas of PI hotspots, is devised to augment current techniques for seismic hazard estimation. Using the approach, we tested the strong earthquakes that occurred in the Yunnan-Sichuan region of China between January 1, 2013 and December 31, 2014. We found that most of the large earthquakes, especially those with magnitude greater than 6.0, occurred in the predicted seismic hazard regions. Similar results have been obtained in the prediction of annual earthquake tendency in the Chinese mainland in 2014 and 2015. These studies show that the ensemble approach could be a useful tool for detecting short-to-intermediate-term precursory information about future large earthquakes.
Evaluation of hydrate-screening methods.
Cui, Yong; Yao, Erica
2008-07-01
The purpose of this work is to evaluate the effectiveness and reliability of several common hydrate-screening techniques, and to provide guidelines for designing hydrate-screening programs for new drug candidates. Ten hydrate-forming compounds were selected as model compounds and six hydrate-screening approaches were applied to these compounds in an effort to generate their hydrate forms. The results prove that no screening approach is universally effective in finding hydrates for small organic compounds. Rather, a combination of different methods should be used to improve screening reliability. Among the approaches tested, the dynamic water vapor sorption/desorption isotherm (DVI) method and storage under high humidity (HH) yielded 60-70% success ratios, the lowest among all techniques studied. The risk of false negatives arises in particular for nonhygroscopic compounds. On the other hand, both slurry in water (Slurry) and temperature cycling of aqueous suspension (TCS) showed high success rates (90%) with some exceptions. The mixed solvent systems (MSS) procedure also achieved high success rates (90%), and was found to be more suitable for water-insoluble compounds. For water-soluble compounds, MSS may not be the best approach because recrystallization is difficult in solutions with high water activity. Finally, vapor diffusion (VD) yielded a reasonably high success ratio in finding hydrates (80%). However, this method suffers from experimental difficulty and unreliable results for either highly water-soluble or water-insoluble compounds. This study indicates that a reliable hydrate-screening strategy should take into consideration the solubility and hygroscopicity of the compounds studied. A combination of the Slurry or TCS method with the MSS procedure could provide a screening strategy with reasonable reliability.
NASA Astrophysics Data System (ADS)
Mesbah, Mostefa; Balakrishnan, Malarvili; Colditz, Paul B.; Boashash, Boualem
2012-12-01
This article proposes a new method for newborn seizure detection that uses information extracted from both multi-channel electroencephalogram (EEG) and a single channel electrocardiogram (ECG). The aim of the study is to assess whether additional information extracted from ECG can improve the performance of seizure detectors based solely on EEG. Two different approaches were used to combine this extracted information. The first approach, known as feature fusion, involves combining features extracted from EEG and heart rate variability (HRV) into a single feature vector prior to feeding it to a classifier. The second approach, called classifier or decision fusion, is achieved by combining the independent decisions of the EEG and the HRV-based classifiers. Tested on recordings obtained from eight newborns with identified EEG seizures, the proposed neonatal seizure detection algorithms achieved 95.20% sensitivity and 88.60% specificity for the feature fusion case and 95.20% sensitivity and 94.30% specificity for the classifier fusion case. These results are considerably better than those involving classifiers using EEG only (80.90%, 86.50%) or HRV only (85.70%, 84.60%).
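The two fusion strategies described above can be sketched generically: feature fusion concatenates the modality features before training one classifier, while decision fusion trains one classifier per modality and combines their outputs. This is a minimal sketch with random stand-in features and a generic classifier (the paper's actual EEG/HRV features and classifier are not reproduced; averaging the posterior probabilities is one simple way to combine decisions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_eeg, X_hrv = rng.random((200, 12)), rng.random((200, 4))  # hypothetical features
y = rng.integers(0, 2, 200)                                  # seizure / no seizure

# Feature fusion: concatenate EEG and HRV features, train a single classifier.
fused = LogisticRegression(max_iter=1000).fit(np.hstack([X_eeg, X_hrv]), y)

# Classifier (decision) fusion: train per-modality classifiers and combine
# their independent decisions, here by averaging posterior probabilities.
clf_eeg = LogisticRegression(max_iter=1000).fit(X_eeg, y)
clf_hrv = LogisticRegression(max_iter=1000).fit(X_hrv, y)
p = 0.5 * (clf_eeg.predict_proba(X_eeg)[:, 1] + clf_hrv.predict_proba(X_hrv)[:, 1])
decision = (p > 0.5).astype(int)
```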
Mixed Methods Designs for Sports Medicine Research.
Kay, Melissa C; Kucera, Kristen L
2018-07-01
Mixed methods research is a relatively new approach in the field of sports medicine, in which the benefits of qualitative and quantitative research are combined, each offsetting the other's flaws. Despite its known and successful use in other populations, it has been used minimally in sports medicine, including studies of the clinician perspective, concussion, and patient outcomes. Therefore, there is a need for this approach to be applied in other topic areas not easily addressed by one type of research approach in isolation, such as retirement from sport, the effects of and return from injury, and catastrophic injury. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narashimha S.
2013-01-01
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles and subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless-compression Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modifications, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. One additional benefit is that multiple levels of averaging are provided, since map tiles must be generated at varying resolutions for the various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
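The storage idea can be illustrated with a short sketch: pack 16-bit data samples into the colour channels of a lossless PNG tile so a browser can decode the numbers client-side and render colours on the fly. The byte layout below is one plausible choice, not necessarily the paper's, and the elevation data are synthetic:

```python
import numpy as np
from PIL import Image

# Synthetic 16-bit elevation samples for one 256 x 256 tile.
elev = np.random.randint(0, 9000, size=(256, 256)).astype(np.uint16)

# Encode: high byte in the red channel, low byte in the green channel.
tile = np.zeros((256, 256, 3), dtype=np.uint8)
tile[..., 0] = elev >> 8
tile[..., 1] = elev & 0xFF
Image.fromarray(tile).save("tile.png")

# Decode (what the client-side JavaScript would do, shown here in Python).
rgb = np.asarray(Image.open("tile.png"), dtype=np.uint16)
decoded = (rgb[..., 0] << 8) | rgb[..., 1]
assert np.array_equal(decoded, elev)   # lossless round trip
```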
D'Abramo, Marco; Aschi, Massimiliano; Amadei, Andrea
2014-04-28
Here, we extend a recently introduced theoretical-computational procedure [M. D'Alessandro, M. Aschi, C. Mazzuca, A. Palleschi, and A. Amadei, J. Chem. Phys. 139, 114102 (2013)] to include quantum vibrational transitions in modelling electronic spectra of atomic molecular systems in condensed phase. The method is based on the combination of Molecular Dynamics simulations and quantum chemical calculations within the Perturbed Matrix Method approach. The main aim of the presented approach is to reproduce as much as possible the spectral line shape which results from a subtle combination of environmental and intrinsic (chromophore) mechanical-dynamical features. As a case study, we were able to model the low energy UV-vis transitions of pyrene in liquid acetonitrile in good agreement with the experimental data.
Factorization-based texture segmentation
Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.
2015-06-17
This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
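The factorization step can be sketched with scikit-learn's NMF (only the nonnegative-factorization half of the paper's SVD-plus-NMF scheme, and with a random matrix standing in for real spectral-histogram features):

```python
import numpy as np
from sklearn.decomposition import NMF

# An M x N feature matrix: M-dimensional local spectral histograms at
# N pixels. Factor it as Y ~ Z @ W, where Z (M x r) holds representative
# features and W (r x N) holds per-pixel combination weights.
M, N, r = 30, 5000, 4
Y = np.abs(np.random.default_rng(2).random((M, N)))   # hypothetical features

model = NMF(n_components=r, init="nndsvd", max_iter=500)
Z = model.fit_transform(Y)      # representative features (M x r)
W = model.components_           # combination weights    (r x N)

# Each pixel is assigned to the representative feature with largest weight.
labels = W.argmax(axis=0)
```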
Combining Radiography and Passive Measurements for Radiological Threat Detection in Cargo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.
Radiography is widely understood to provide information complementary to passive detection: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions which may mask a passive radiological signal. We present a method for combining radiographic and passive data which uses the radiograph to provide an estimate of scatter and attenuation for possible sources. This approach allows quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present first results for this method for a simple modeled test case of a cargo container driving through a PVT portal. With this inversion approach, we address criteria for an integrated passive and radiographic screening system and how detection of SNM threats might be improved in such a system.
Sardo, Mariana; Siegel, Renée; Santos, Sérgio M; Rocha, João; Gomes, José R B; Mafra, Luis
2012-06-28
We present a complete set of experimental approaches for the NMR assignment of powdered tripeptide glutathione at natural isotopic abundance, based on J-coupling and dipolar NMR techniques combined with (1)H CRAMPS decoupling. To fully assign the spectra, two-dimensional (2D) high-resolution methods, such as (1)H-(13)C INEPT-HSQC/PRESTO heteronuclear correlations (HETCOR), (1)H-(1)H double-quantum (DQ), and (1)H-(14)N D-HMQC correlation experiments, have been used. To support the interpretation of the experimental data, periodic density functional theory calculations together with the GIPAW approach have been used to calculate the (1)H and (13)C chemical shifts. It is found that the shifts calculated with two popular plane wave codes (CASTEP and Quantum ESPRESSO) are in excellent agreement with the experimental results.
Modern methods for the quality management of high-rate melt solidification
NASA Astrophysics Data System (ADS)
Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.
2016-12-01
The quality management of high-rate melt solidification requires a combined solution obtained by methods and approaches adapted to the specific situation. A technological audit is recommended to estimate the capabilities of the process. Statistical methods are proposed, with the choice of key parameters. Numerical methods, which can be used to perform simulation under multifactor technological conditions and to increase the quality of decisions, are of particular importance.
McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.
2013-01-01
Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species specific and assemblage based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched well to those of collection sites. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or predictions were displaced by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species. This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which is one we had previously proposed, improved on the original method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas in the color space. The approach we propose in this paper combines our previous method and one using high chroma and low chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High chroma gamuts are used for adding appropriate colors to the original image and low chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area in the color space, the illuminant colors are accurately estimated, with smaller estimation error average than that generated in the conventional method.
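The gray-world baseline that this combined approach builds on is easy to state in code. This is a minimal sketch of the original assumption only (averaging all pixels), not of the opponent-colour selection or gamut refinement described above:

```python
import numpy as np

def gray_world_illuminant(image):
    """Estimate the illuminant colour of an RGB image (H x W x 3 floats)
    under the gray-world assumption: the average of all object colours in
    a scene is achromatic, so the per-channel means reveal the illuminant.
    The combined method above refines this by selecting opponent-colour
    pairs and gamut-based candidate colours instead of using all pixels."""
    means = image.reshape(-1, 3).mean(axis=0)
    return means / np.linalg.norm(means)   # illuminant chromaticity

# Usage: divide each channel by its estimate to white-balance the image.
img = np.random.default_rng(3).random((64, 64, 3))
print(gray_world_illuminant(img))
```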
NASA Astrophysics Data System (ADS)
Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.
2015-01-01
This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually related to tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification, which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform bank of filters (WPT) and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as validation set a couple of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, C-weighted SVM and MFCC, achieve 82.1% balanced accuracy, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem even using the same feature extraction methods.
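The winning MFCC-plus-class-weighted-SVM combination can be sketched as follows. This assumes the librosa library for MFCC extraction, uses random noise as a stand-in for real lung-sound segments, and uses scikit-learn's `class_weight="balanced"` option as a generic substitute for the paper's specific C-weighting:

```python
import numpy as np
import librosa                      # assumed available for MFCC extraction
from sklearn.svm import SVC

def mfcc_features(signal, sr=8000, n_mfcc=13):
    """Summarise a lung-sound segment by the mean of its MFCC frames."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical training set: rows of MFCC features, 1 = wheeze, 0 = normal.
rng = np.random.default_rng(4)
X = np.vstack([mfcc_features(rng.standard_normal(8000)) for _ in range(40)])
y = rng.integers(0, 2, 40)

# Class-weighted SVM: errors on the minority class are penalised more.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
```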
FY16 Status Report on Development of Integrated EPP and SMT Design Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Sham, T. -L.; Wang, Y.
2016-08-01
The goal of the Elastic-Perfectly Plastic (EPP) combined integrated creep-fatigue damage evaluation approach is to incorporate a Simplified Model Test (SMT) data based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying evaluation of elevated temperature cyclic service. The EPP methodology is based on the idea that creep damage and strain accumulation can be bounded by a properly chosen "pseudo" yield strength used in an elastic-perfectly plastic analysis, thus avoiding the need for stress classification. The original SMT approach is based on the use of elastic analysis. The experimental data, cycles to failure, are correlated using the elastically calculated strain range in the test specimen, and the corresponding component strain is also calculated elastically. The advantage of this approach is that it is no longer necessary to use the damage interaction, or D-diagram, because the damage due to the combined effects of creep and fatigue is accounted for in the test data by means of a specimen that is designed to replicate or bound the stress and strain redistribution that occurs in actual components when loaded in the creep regime. The reference approach to combining the two methodologies and the corresponding uncertainties and validation plans are presented. Results from recent key feature tests are discussed to illustrate the applicability of the EPP methodology and the behavior of materials at elevated temperature when undergoing stress and strain redistribution due to plasticity and creep.
Andrić, Filip; Héberger, Károly
2015-02-06
Lipophilicity (logP) represents one of the most studied and most frequently used fundamental physicochemical properties. At present there are several possibilities for its quantitative expression, and many of them stem from chromatographic experiments. Numerous attempts have been made to compare different computational methods, chromatographic methods vs. computational approaches, as well as chromatographic methods and the direct shake-flask procedure, without definitive results, or with findings that are not generally accepted. In the present work numerous chromatographically derived lipophilicity measures in combination with diverse computational methods were ranked and clustered using the novel variable discrimination and ranking approaches based on the sum of ranking differences and the generalized pair correlation method. Available literature logP data measured on HILIC and classical reversed-phase systems, combining different classes of compounds, have been compared with the most frequently used multivariate data analysis techniques (principal component and hierarchical cluster analysis) as well as with the conclusions in the original sources. Chromatographic lipophilicity measures obtained under typical reversed-phase conditions outperform the majority of computationally estimated logPs. In contrast, in the case of HILIC, none of the many proposed chromatographic indices overcomes any of the computationally assessed logPs. Only two of them (logkmin and kmin) may be selected as recommended chromatographic lipophilicity measures. Both ranking approaches, sum of ranking differences and generalized pair correlation method, although based on different backgrounds, provide highly similar variable ordering and grouping, leading to the same conclusions. Copyright © 2015. Published by Elsevier B.V.
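The sum-of-ranking-differences (SRD) idea used above is simple to compute: rank the objects by each variable and by a reference ordering, then sum the absolute rank differences per variable. This is a minimal sketch with a random matrix standing in for the compound-by-measure lipophilicity data, using the row average as the reference (one common SRD convention):

```python
import numpy as np
from scipy.stats import rankdata

def sum_of_ranking_differences(X, reference):
    """SRD per variable: rank objects by each column of X and by the
    reference, then sum absolute rank differences. Smaller SRD means
    the variable orders the objects more like the reference does."""
    ref_ranks = rankdata(reference)
    return np.array([np.abs(rankdata(col) - ref_ranks).sum() for col in X.T])

# Hypothetical data: rows = compounds, columns = candidate logP measures.
X = np.random.default_rng(5).random((20, 6))
print(sum_of_ranking_differences(X, X.mean(axis=1)))
```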
NASA Astrophysics Data System (ADS)
Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar
2018-04-01
Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach based on the Hilbert envelope and a zero frequency resonator for the detection of weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal, which also helps to better localise the impulses on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for exact localisation of the periodic impulses. The spectrum of the resonator output gives a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm. The proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish the effectiveness of the proposed method.
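The first step, the Hilbert envelope, is standard signal processing and can be sketched directly. The snippet below simulates impulses buried in noise, takes the normalised envelope, and then inspects the envelope spectrum as a simplified stand-in for the paper's zero frequency resonator (the sampling rate and fault frequency are hypothetical):

```python
import numpy as np
from scipy.signal import hilbert

fs = 12_000                               # sampling rate (assumed)
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 87.0                         # hypothetical bearing fault frequency

# Simulated noisy signal: weak periodic impulses buried in broadband vibration.
x = 0.3 * np.random.default_rng(6).standard_normal(t.size)
x += (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)

# Step 1: the normalised Hilbert envelope strengthens the impulses and
# localises them on the time axis.
env = np.abs(hilbert(x))
env /= env.max()

# Step 2 in the paper feeds the envelope to a zero frequency resonator;
# here we simply inspect the envelope spectrum, whose peak appears at
# the fault frequency.
spectrum = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
band = (freqs > 10) & (freqs < 150)       # search around the fundamental
print("peak at %.1f Hz" % freqs[band][spectrum[band].argmax()])
```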
Brandt, Marc; Becker, Eva; Jöhncke, Ulrich; Sättler, Daniel; Schulte, Christoph
2016-01-01
One important purpose of the European REACH Regulation (EC No. 1907/2006) is to promote the use of alternative methods for the assessment of hazards of substances in order to avoid animal testing. Experience with environmental hazard assessment under REACH shows that efficient alternative methods are needed in order to assess chemicals when standard test data are missing. One such assessment method is the weight-of-evidence (WoE) approach. In this study, the WoE approach was used to assess the persistence of certain phenolic benzotriazoles, a group that also includes substances of very high concern (SVHC). For phenolic benzotriazoles, assessment of environmental persistence is challenging, as standard information, i.e., simulation tests on biodegradation, is not available. Thus, the WoE approach was used: overall information resulting from many sources was considered, and the individual uncertainties of each source were analysed separately. In a second step, all information was aggregated, giving an overall picture of persistence, to assess the degradability of the phenolic benzotriazoles under consideration although the reliability of individual sources was incomplete. Overall, the evidence suggesting that phenolic benzotriazoles are very persistent in the environment is unambiguous. This was demonstrated by a WoE approach that satisfies the prerequisites of REACH by combining several limited information sources. The combination enabled a clear overall assessment which can be reliably used for SVHC identification. Finally, it is recommended to include WoE approaches as an important tool in future environmental risk assessments.
Multiscale Multilevel Approach to Solution of Nanotechnology Problems
NASA Astrophysics Data System (ADS)
Polyakov, Sergey; Podryga, Viktoriia
2018-02-01
The paper is devoted to a multiscale multilevel approach for the solution of nanotechnology problems on supercomputer systems. The approach uses the combination of continuum mechanics models and the Newton dynamics for individual particles. This combination includes three scale levels: macroscopic, mesoscopic and microscopic. For gas-metal technical systems the following models are used. The quasihydrodynamic system of equations is used as a mathematical model at the macrolevel for gas and solid states. The system of Newton equations is used as a mathematical model at the meso- and microlevels; it is written for nanoparticles of the medium and larger particles moving in the medium. The numerical implementation of the approach is based on the method of splitting into physical processes. The quasihydrodynamic equations are solved by the finite volume method on grids of different types. The Newton equations of motion are solved by Verlet integration in each cell of the grid independently or in groups of connected cells. In the framework of the general methodology, four classes of algorithms and methods of their parallelization are provided. The parallelization uses the principles of geometric parallelism and the efficient partitioning of the computational domain. A special dynamic algorithm is used for load balancing the solvers. The testing of the developed approach was made by the example of the nitrogen outflow from a balloon with high pressure to a vacuum chamber through a micronozzle and a microchannel. The obtained results confirm the high efficiency of the developed methodology.
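The particle-level building block named above, Verlet integration of Newton's equations, can be sketched generically. This is a minimal velocity-Verlet integrator with a toy harmonic force, not the paper's gas-metal interaction model:

```python
import numpy as np

def velocity_verlet(pos, vel, force, mass, dt, steps):
    """Integrate Newton's equations with the velocity-Verlet scheme.
    `force(pos)` returns the force on each particle; the sketch assumes
    forces depend on positions only."""
    f = force(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
        f_new = force(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Usage: two particles in a harmonic trap (toy force, for illustration).
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, lambda r: -r, mass=1.0, dt=1e-3, steps=1000)
```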
ERIC Educational Resources Information Center
Lou, Shi-Jer; Chen, Nai-Ci; Tsai, Huei-Yin; Tseng, Kuo-Hung; Shih, Ru-Chu
2012-01-01
This study combined traditional classroom teaching methods and blogs with blended creative teaching as a new teaching method for the course "Design and Applications of Teaching Aids for Young Children." It aimed to improve the shortcomings of the traditional teaching approach by incorporating the "Asking, Thinking, Doing, and…
A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets
NASA Astrophysics Data System (ADS)
JafarGandomi, Arash; Binley, Andrew
2013-09-01
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth-models. Accordingly, the ERT data that are collected using two-dimensional acquisition geometry are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of the geophysical tools and show whether application of an individual method for further investigation of a specific site is beneficial.
McDermott, Jason E.; Wang, Jing; Mitchell, Hugh; Webb-Robertson, Bobbie-Jo; Hafen, Ryan; Ramey, John; Rodland, Karin D.
2012-01-01
Introduction: The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful molecular signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities for more sophisticated approaches to integrating purely statistical and expert knowledge-based approaches. Areas covered: In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered in deriving valid and useful signatures of disease. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion: Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to identify predictive signatures of disease are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem including metrics for the evaluation of biomarkers. PMID:23335946
Computational intelligence approaches for pattern discovery in biological systems.
Fogel, Gary B
2008-07-01
Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.
Multimodal Neuroimaging: Basic Concepts and Classification of Neuropsychiatric Diseases.
Tulay, Emine Elif; Metin, Barış; Tarhan, Nevzat; Arıkan, Mehmet Kemal
2018-06-01
Neuroimaging techniques are widely used in neuroscience to visualize neural activity, to improve our understanding of brain mechanisms, and to identify biomarkers-especially for psychiatric diseases; however, each neuroimaging technique has several limitations. These limitations led to the development of multimodal neuroimaging (MN), which combines data obtained from multiple neuroimaging techniques, such as electroencephalography and functional magnetic resonance imaging, and yields more detailed information about brain dynamics. There are several types of MN, including visual inspection, data integration, and data fusion. This literature review aimed to provide a brief summary and basic information about MN techniques (data fusion approaches in particular) and classification approaches. Data fusion approaches are generally categorized as asymmetric and symmetric. The present review focused exclusively on studies based on symmetric data fusion methods (data-driven methods), such as independent component analysis and principal component analysis. Machine learning techniques have recently been introduced for use in identifying diseases and biomarkers of disease. The machine learning technique most widely used by neuroscientists is classification-especially support vector machine classification. Several studies differentiated patients with psychiatric diseases from healthy controls using combined datasets. The common conclusion among these studies is that the prediction of diseases improves when combining data via MN techniques; however, there remain a few challenges associated with MN, such as sample size. Perhaps in the future N-way fusion can be used to combine multiple neuroimaging techniques or nonimaging predictors (e.g., cognitive ability) to overcome the limitations of MN.
Boersema, Paul J.; Foong, Leong Yan; Ding, Vanessa M. Y.; Lemeer, Simone; van Breukelen, Bas; Philp, Robin; Boekhorst, Jos; Snel, Berend; den Hertog, Jeroen; Choo, Andre B. H.; Heck, Albert J. R.
2010-01-01
Several mass spectrometry-based assays have emerged for the quantitative profiling of cellular tyrosine phosphorylation. Ideally, these methods should reveal the exact sites of tyrosine phosphorylation, be quantitative, and not be cost-prohibitive. The latter is often an issue as typically several milligrams of (stable isotope-labeled) starting protein material are required to enable the detection of low abundance phosphotyrosine peptides. Here, we adopted and refined a peptide-centric immunoaffinity purification approach for the quantitative analysis of tyrosine phosphorylation by combining it with a cost-effective stable isotope dimethyl labeling method. We were able to identify by mass spectrometry, using just two LC-MS/MS runs, more than 1100 unique non-redundant phosphopeptides in HeLa cells from about 4 mg of starting material without requiring any further affinity enrichment, as close to 80% of the identified peptides were tyrosine phosphorylated peptides. Stable isotope dimethyl labeling could be incorporated prior to the immunoaffinity purification, even for the large quantities (mg) of peptide material used, enabling the quantification of differences in tyrosine phosphorylation upon pervanadate treatment or epidermal growth factor stimulation. Analysis of the epidermal growth factor-stimulated HeLa cells, a frequently used model system for tyrosine phosphorylation, resulted in the quantification of 73 regulated unique phosphotyrosine peptides. The quantitative data were found to be exceptionally consistent with the literature, evidencing that such a targeted quantitative phosphoproteomics approach can provide reproducible results. In general, the combination of immunoaffinity purification of tyrosine phosphorylated peptides with large scale stable isotope dimethyl labeling provides a cost-effective approach that can alleviate variation in sample preparation and analysis as samples can be combined early on. Using this approach, a rather complete qualitative and quantitative picture of tyrosine phosphorylation signaling events can be generated. PMID:19770167
NASA Astrophysics Data System (ADS)
Hanan, E. J.; Tague, C.; Choate, J.; Liu, M.; Adam, J. C.
2016-12-01
Disturbance is a major force regulating C dynamics in terrestrial ecosystems. Evaluating future C balance in disturbance-prone systems requires understanding the underlying mechanisms that drive ecosystem processes over multiple scales of space and time. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by large uncertainties in the initial state of vegetation C and N stores. Watershed models typically use one of two methods to initialize these stores. Spin up involves running a model until vegetation reaches steady state based on climate. This "potential" state, however, assumes the vegetation across the entire watershed has reached maturity and has a homogeneous age distribution. Yet to reliably represent C and N dynamics in disturbance-prone systems, models should be initialized to reflect their non-equilibrium conditions. Alternatively, remote sensing of a single vegetation parameter (typically leaf area index; LAI) can be combined with allometric relationships to allocate C and N to model stores and can reflect non-steady-state conditions. However, allometric relationships are species and region specific and do not account for environmental variation, thus resulting in C and N stores that may be unstable. To address this problem, we developed a new approach for initializing C and N pools using the watershed-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin up with the spatial fidelity of remote sensing. Unlike traditional spin up, this approach supports non-homogeneous stand ages. We tested our approach in a pine-dominated watershed in central Idaho, which partially burned in July of 2000. We used LANDSAT and MODIS data to calculate LAI across the watershed following the 2000 fire. We then ran three sets of simulations using spin up, direct measurements, and the combined approach to initialize vegetation C and N stores, and compared our results to remotely sensed LAI following the simulation period. Model estimates of C, N, and water fluxes varied depending on which approach was used. The combined approach provided the best LAI estimates after 10 years of simulation. This method shows promise for improving projections of C, N, and water fluxes in disturbance-prone watersheds.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
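A minimal sketch of the combine-across-quantiles idea follows, using statsmodels' quantile regression. For simplicity it averages the slope estimates from several quantile levels with equal weights; the paper derives optimal weights whose efficiency approaches the Cramér-Rao bound, which this sketch does not reproduce:

```python
import numpy as np
import statsmodels.api as sm

# Simulated regression with heavy-tailed noise, where combining quantiles
# can beat ordinary least squares.
rng = np.random.default_rng(7)
x = rng.random(500)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=500)

X = sm.add_constant(x)
taus = [0.25, 0.5, 0.75]
slopes = [sm.QuantReg(y, X).fit(q=tau).params[1] for tau in taus]

# Equal-weight combination of the per-quantile slope estimates.
print("combined slope estimate:", np.mean(slopes))
```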
A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans
2014-01-01
An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results. PMID:25276219
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
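The core of an angular spectrum calculation is a single FFT-based propagation step. The sketch below propagates a 2D input pressure plane; the grid, frequency, and circular piston source are arbitrary assumptions for the example, not the array configurations studied above.

```python
# Sketch: angular spectrum propagation of a 2D input pressure plane.
# Minimal illustration of the method's core FFT step; geometry, grid,
# and frequency are invented, not the paper's setup.
import numpy as np

def angular_spectrum_propagate(p0, dx, k, z):
    """Propagate pressure plane p0 (sampled at spacing dx) a distance z.

    k is the acoustic wavenumber 2*pi*f/c; evanescent components decay.
    """
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    # propagating part: real kz; evanescent part: imaginary kz (decay)
    kz = np.where(kz2 >= 0, np.sqrt(np.abs(kz2)), 1j * np.sqrt(np.abs(kz2)))
    P = np.fft.fft2(p0)
    return np.fft.ifft2(P * np.exp(1j * kz * z))

# toy example: uniform circular piston sampled on the input plane
c, f = 1500.0, 1e6                 # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c
dx = (c / f) / 4                   # quarter-wavelength sampling
x = (np.arange(128) - 64) * dx
X, Y = np.meshgrid(x, x)
p0 = (X**2 + Y**2 <= (2e-3) ** 2).astype(complex)
p_z = angular_spectrum_propagate(p0, dx, k, z=20e-3)
print(np.abs(p_z).max())
```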
Industrial ecology: Quantitative methods for exploring a lower carbon future
NASA Astrophysics Data System (ADS)
Thomas, Valerie M.
2015-03-01
Quantitative methods for environmental and cost analyses of energy, industrial, and infrastructure systems are briefly introduced and surveyed, with the aim of encouraging broader utilization and development of quantitative methods in sustainable energy research. Material and energy flow analyses can provide an overall system overview. The methods of engineering economics and cost-benefit analysis, such as net present value, are the most straightforward approach for evaluating investment options, with the levelized cost of energy being a widely used metric in electricity analyses. Environmental lifecycle assessment has been extensively developed, with both detailed process-based and comprehensive input-output approaches available. Optimization methods provide an opportunity to go beyond engineering economics to develop detailed least-cost or least-impact combinations of many different choices.
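Two of the engineering-economics metrics named above reduce to short formulas. The sketch below computes net present value and the levelized cost of energy for an invented 20-year plant; all cash flows and the discount rate are assumptions made for the example.

```python
# Sketch: net present value and levelized cost of energy (LCOE).
# Cash flows are invented for illustration.
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] occurring at year t = 0, 1, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def lcoe(rate, costs, energy):
    """Levelized cost of energy: discounted costs / discounted energy."""
    return npv(rate, costs) / npv(rate, energy)

# 20-year plant: 1000 upfront, 50/yr O&M, 400 MWh/yr, 5% discount rate
years = 20
costs = [1000.0] + [50.0] * years
energy = [0.0] + [400.0] * years
print("LCOE:", round(lcoe(0.05, costs, energy), 3), "per MWh")
```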
NASA Astrophysics Data System (ADS)
Kim, M. G.; Lin, J. C.; Huang, L.; Edwards, T. W.; Jones, J. P.; Polavarapu, S.; Nassar, R.
2012-12-01
Reducing uncertainties in projections of atmospheric CO2 concentration levels relies on increasing our scientific understanding of the exchange processes between the atmosphere and land at regional scales, which depend strongly on climate, ecosystem processes, and anthropogenic disturbances. For researchers to reduce these uncertainties, a combined framework that jointly addresses these factors and accounts for each process is invaluable. In this research, an example of a top-down inversion modeling approach combined with stable isotope measurement data is presented. The potential of the proposed analysis framework is demonstrated using Stochastic Time-Inverted Lagrangian Transport (STILT) model runs combined with high-precision CO2 concentration data measured at a Canadian greenhouse gas monitoring site, as well as multiple tracers: stable isotopes and combustion-related species. This framework yields a unique regional-scale constraint that can be used to relate the measured changes in tracer concentrations to processes in their upwind source regions. The inversion approach both reproduces source areas in a spatially explicit way through sophisticated Lagrangian transport modeling and infers emission processes that leave imprints on atmospheric tracers. The understanding gained through the combined approach can also be used to verify reported emissions as part of regulatory regimes. The results indicate that changes in CO2 concentration are strongly influenced by regional sources, including significant fossil fuel emissions, and that the combined approach can be used to test reported emissions of the greenhouse gas from oil sands developments. Also, methods to further reduce uncertainties in the retrieved emissions by incorporating additional constraints, including tracer-to-tracer correlations and satellite measurements, are discussed briefly.
A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods
Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian
2014-01-01
One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only a few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515
Bi-objective integer programming for RNA secondary structure prediction with pseudoknots.
Legendre, Audrey; Angel, Eric; Tahi, Fariza
2018-01-15
RNA structure prediction is an important field in bioinformatics, and numerous methods and tools have been proposed. Pseudoknots are specific motifs of RNA secondary structures that are difficult to predict. Almost all existing methods are based on a single model and return one solution, often missing the real structure. An alternative approach would be to combine different models and return a (small) set of solutions, maximizing its quality and diversity in order to increase the probability that it contains the real structure. We propose here an original method for predicting RNA secondary structures with pseudoknots, based on integer programming. We developed a generic bi-objective integer programming algorithm that returns optimal and sub-optimal solutions while optimizing two models simultaneously. This algorithm was then applied to the combination of two known models of RNA secondary structure prediction, namely MEA and MFE. The resulting tool, called BiokoP, is compared with the other methods in the literature. The results show that the best solution (the structure with the highest F1-score) is, in most cases, given by BiokoP. Moreover, the results of BiokoP are homogeneous, regardless of the pseudoknot type or the presence or absence of pseudoknots. Indeed, the F1-scores are always higher than 70% for any number of solutions returned. The results obtained by BiokoP show that combining the MEA and MFE models, as well as returning several optimal and several sub-optimal solutions, improves the prediction of secondary structures. One perspective of our work is to combine other mono-criterion models, in particular a model based on the comparative approach, with the MEA and MFE models. This will require developing a new multi-objective algorithm able to combine more than two models. BiokoP is available on the EvryRNA platform: https://EvryRNA.ibisc.univ-evry.fr.
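For readers unfamiliar with bi-objective integer programming, the sketch below shows a standard epsilon-constraint scheme enumerating Pareto-optimal solutions of a toy 0/1 selection problem. It illustrates the general concept only, not BiokoP's algorithm; the scores, budget, and PuLP formulation are invented.

```python
# Sketch: generic epsilon-constraint scheme for bi-objective integer
# programming on a toy 0/1 selection problem. Not BiokoP's algorithm;
# all scores are made up.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value, PULP_CBC_CMD

obj1 = [4, 2, 7, 1, 5]   # objective 1 score per candidate (invented)
obj2 = [3, 6, 1, 5, 2]   # objective 2 score per candidate (invented)
n, budget = len(obj1), 2  # select at most `budget` candidates

pareto = []
eps = 0
while True:
    prob = LpProblem("eps_constraint", LpMaximize)
    x = [LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    prob += lpSum(obj1[i] * x[i] for i in range(n))          # objective 1
    prob += lpSum(x) <= budget
    prob += lpSum(obj2[i] * x[i] for i in range(n)) >= eps   # objective 2
    status = prob.solve(PULP_CBC_CMD(msg=False))
    if status != 1:  # infeasible: no solution with f2 >= eps remains
        break
    f1 = value(prob.objective)
    f2 = sum(obj2[i] * x[i].value() for i in range(n))
    pareto.append((f1, f2, [int(x[i].value()) for i in range(n)]))
    eps = int(f2) + 1  # tighten the constraint and re-solve

print(pareto)  # list of (f1, f2, selection) Pareto points
```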
Experimental modeling of swirl flows in power plants
NASA Astrophysics Data System (ADS)
Shtork, S. I.; Litvinov, I. V.; Gesheva, E. S.; Tsoy, M. A.; Skripkin, S. G.
2018-03-01
The article presents an overview of the methods and approaches to experimental modeling of various thermal and hydropower units - furnaces of pulverized coal boilers and flow-through elements of hydro turbines. The presented modeling approaches based on a combination of experimentation and rapid prototyping of working parts may be useful in optimizing energy equipment to improve safety and efficiency of industrial energy systems.
ERIC Educational Resources Information Center
Koehlinger, Keegan M.
2015-01-01
Clinical Question: Would a preschool-aged child with childhood apraxia of speech (CAS) benefit from a singular approach--such as motor planning, sensory cueing, linguistic and rhythmic--or a combined approach in order to increase intelligibility of spoken language? Method: Systematic Review. Study Sources: ASHA Wire, Google Scholar, Speech Bite.…
The Capability Approach: A Critical Review of Its Application in Health Economics.
Karimi, Milad; Brazier, John; Basarir, Hasan
The capability approach is an approach to assessing well-being developed by Amartya Sen. Interest in this approach has resulted in several attempts to develop questionnaires to measure and value capability at an individual level in health economics. This commentary critically reviews the ability of these questionnaires to measure and value capability. It is argued that the method used in the questionnaires to measure capability will result in a capability set that is an inaccurate description of the individual's true capability set. The measured capability set will either represent only one combination and ignore the value of choice in the capability set, or represent one combination that is not actually achievable by the individual. In addition, existing methods of valuing capability may be inadequate because they do not consider that capability is a set. It may be practically more feasible to measure and value capability approximately rather than directly. Suggestions are made on how to measure and value an approximation to capability, but further research is required to implement the suggestions. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improving orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency at the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not account for interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
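The network idea, mutually controlling pairwise error estimates and resolving them into quasi-absolute per-image values through a minimum-norm least-squares solution, can be sketched in a few lines. The synthetic numbers below stand in for per-interferogram estimates; the two-parameter error representation and the phase processing are omitted.

```python
# Sketch: minimum-norm least-squares adjustment of per-image errors from
# pairwise (interferometric) estimates. Synthetic stand-in numbers.
import numpy as np

rng = np.random.default_rng(1)
n_images = 6
true = rng.normal(0, 1, n_images)           # per-image error (unknown)
pairs = [(i, j) for i in range(n_images) for j in range(i + 1, n_images)]

# each interferogram observes the difference of two image errors
A = np.zeros((len(pairs), n_images))
d = np.empty(len(pairs))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = 1.0, -1.0
    d[row] = true[i] - true[j] + rng.normal(0, 0.05)  # noisy observation

# A is rank-deficient (differences are invariant to a common offset);
# lstsq returns the minimum-norm solution, i.e. quasi-absolute errors
est, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.round(est - est.mean(), 3))
print(np.round(true - true.mean(), 3))
```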
NASA Astrophysics Data System (ADS)
Schuetze, C.; Barth, M.; Hehn, M.; Ziemann, A.
2016-12-01
The eddy-covariance (EC) method can provide accurate information about turbulent fluxes of energy and greenhouse gases (GHG) if all necessary corrections and conversions are applied to the measured raw data and all boundary conditions for the method are satisfied. Nevertheless, even in flat terrain, advection can occur, leading to a gap in closing the energy and matter balances. Without accounting for advection, annual estimates of CO2 sink strength are overestimated, because advection usually results in underestimation of the nocturnal CO2 flux. Advection is produced by low-frequency exchange processes, which can occur due to surface heterogeneity. To measure advective fluxes, there is still a strong need for ground-based remote sensing techniques that provide the relevant GHG concentration together with wind components, spatially resolved within the same voxel structure. The SQuAd approach applies an integrated method combination of acoustic tomography and open-path optical remote sensing based on infrared spectroscopy, with the aim of obtaining spatially and temporally resolved information about wind components and GHG concentration. The monitoring approach focuses on the validation of the joint application of the two independent, non-intrusive methods concerning the ability to close the existing gap in the GHG balance. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier transform infrared spectroscopy (OP-FTIR), together with atmospheric modelling, will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation has the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other GHG with high precision. A-TOM is a scalable method to remotely resolve 3D wind and temperature fields. The presentation will give an overview of the proposed method combination and results of experimental validation tests at an ICOS site (flat grassland) in eastern Germany.
A physics based method for combining multiple anatomy models with application to medical simulation.
Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David
2009-01-01
We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
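A minimal sketch of the mass-spring mechanics underlying such a model follows: Hookean spring forces plus an external force term, integrated explicitly. The node layout, constants, and placeholder external-force array are assumptions for illustration; the estimation of forces from Gradient Vector Flow and distance-transform maps is not reproduced.

```python
# Sketch: one explicit integration step for a mass-spring model.
# External forces (e.g. from GVF or distance-transform maps) are
# represented by a placeholder array.
import numpy as np

def step(pos, vel, springs, rest, k, ext_force, mass=1.0, damp=0.5, dt=1e-3):
    """Advance node positions one step. springs: list of (i, j) pairs."""
    force = ext_force.copy()
    for (i, j), L0 in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - L0) * d / max(length, 1e-12)  # Hooke's law
        force[i] += f
        force[j] -= f
    force -= damp * vel                    # simple velocity damping
    vel = vel + dt * force / mass
    return pos + dt * vel, vel

pos = np.array([[0.0, 0.0], [1.2, 0.0]])   # two nodes, one spring
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, springs=[(0, 1)], rest=[1.0], k=10.0,
                ext_force=np.zeros_like(pos))
print(pos)
```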
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of output from Tasmania's South Esk Hydrology model developed by CSIRO. Root-mean-squared-error statistics were computed for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
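The baseline IDW interpolator discussed above is compact enough to state directly; under the reduction approach it would be applied per time slice, with the temporal dimension interpolated separately. The sample points below are invented.

```python
# Sketch: inverse distance weighting (IDW), the baseline spatial
# interpolator discussed above. Sample points are invented.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered points xy_known."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power     # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(idw(pts, vals, np.array([[0.5, 0.5], [0.1, 0.1]])))
```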
Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells
NASA Astrophysics Data System (ADS)
Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.
2017-03-01
Two approaches to studying the free and forced axisymmetric vibrations of a cylindrical shell are proposed. They are based on the three-dimensional theory of elasticity and division of the original cylindrical shell with concentric cross-sectional circles into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide-and-conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.
NASA Astrophysics Data System (ADS)
Fatrias, D.; Kamil, I.; Meilani, D.
2018-03-01
Coordinating business operations with suppliers is increasingly important for surviving and prospering in a dynamic business environment. A good partnership with suppliers not only increases efficiency, but also strengthens corporate competitiveness. With this concern in mind, this study aims to develop a practical approach to multi-criteria supplier evaluation using the combined methods of the Taguchi loss function (TLF), the best-worst method (BWM) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR). A new framework integrating these methods is our main contribution to the supplier-evaluation literature. In this integrated approach, a compromised supplier ranking list based on the loss scores of suppliers is obtained through the efficient steps of a pairwise-comparison-based decision-making process. Implementation on a case problem with real data from the crumb rubber industry shows the usefulness of the proposed approach. Finally, a suitable managerial implication is presented.
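The final VIKOR ranking step can be sketched as follows, computing the group utility S, individual regret R, and compromise index Q from a small decision matrix. The scores, weights, and compromise parameter are invented; the TLF loss scoring and BWM weight derivation used in the paper are not reproduced.

```python
# Sketch: core VIKOR ranking on a small decision matrix.
# Scores and weights are invented; TLF/BWM stages are not reproduced.
import numpy as np

def vikor(F, w, v=0.5):
    """F: (alternatives x criteria) benefit scores; w: criterion weights."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = np.where(f_best - f_worst == 0, 1.0, f_best - f_worst)
    D = w * (f_best - F) / norm          # weighted normalized distances
    S, R = D.sum(axis=1), D.max(axis=1)  # group utility / individual regret
    Q = (v * (S - S.min()) / max(S.max() - S.min(), 1e-12)
         + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12))
    return S, R, Q  # lower Q = better compromise ranking

F = np.array([[7.0, 5.0, 8.0],
              [6.0, 9.0, 5.0],
              [8.0, 6.0, 6.0]])
w = np.array([0.5, 0.3, 0.2])
S, R, Q = vikor(F, w)
print("ranking (best first):", np.argsort(Q))
```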
Henderson, Sarah B; Gauld, Jillian S; Rauch, Stephen A; McLean, Kathleen E; Krstic, Nikolas; Hondula, David M; Kosatsky, Tom
2016-11-15
Most excess deaths that occur during extreme hot weather events do not have natural heat recorded as an underlying or contributing cause. This study aims to identify the specific individuals who died because of hot weather using only secondary data. A novel approach was developed in which the expected number of deaths was repeatedly sampled from all deaths that occurred during a hot weather event, and compared with deaths during a control period. The deaths were compared with respect to five factors known to be associated with hot weather mortality. Individuals were ranked by their presence in significant models over 100 trials of 10,000 repetitions. Those with the highest rankings were identified as probable excess deaths. Sensitivity analyses were performed on a range of model combinations. These methods were applied to a 2009 hot weather event in greater Vancouver, Canada. The excess deaths identified were sensitive to differences in model combinations, particularly between univariate and multivariate approaches. One multivariate and one univariate combination were chosen as the best models for further analyses. The individuals identified by multiple combinations suggest that marginalized populations in greater Vancouver are at higher risk of death during hot weather. This study proposes novel methods for classifying specific deaths as expected or excess during a hot weather event. Further work is needed to evaluate performance of the methods in simulation studies and against clinically identified cases. If confirmed, these methods could be applied to a wide range of populations and events of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Passel, Steven, E-mail: Steven.vanpassel@uhasselt.be; University of Antwerp, Department Bioscience Engineering, Groenenborgerlaan 171, 2020 Antwerp; Meul, Marijke
Sustainability assessment is needed to build sustainable farming systems. A broad range of sustainability concepts, methodologies and applications already exists. They differ in level, focus, orientation, measurement, scale, presentation and intended end-users. In this paper we illustrate that a smart combination of existing methods with different levels of application can make sustainability assessment more profound, and that it can broaden the insights of different end-user groups. An overview of sustainability assessment tools on different levels and for different end-users shows the complementarities and the opportunities of using different methods. In a case-study, a combination of the sustainable value approach (SVA) and MOTIFS is used to perform a sustainability evaluation of farming systems in Flanders. SVA is used to evaluate sustainability at sector level, and is especially useful to support policy makers, while MOTIFS is used to support and guide farmers towards sustainability at farm level. The combined use of the two methods with complementary goals can widen the insights of both farmers and policy makers, without losing the particularities of the different approaches. To stimulate and support further research and applications, we propose guidelines for multilevel and multi-user sustainability assessments. Highlights: We give an overview of sustainability assessment tools for agricultural systems. SVA and MOTIFS are used to evaluate the sustainability of dairy farming in Flanders. Combining methods with different levels broadens the insights of different end-user groups. We propose guidelines for multilevel and multi-user sustainability assessments.
A New Method for Studying the Periodic System Based on a Kohonen Neural Network
ERIC Educational Resources Information Center
Chen, David Zhekai
2010-01-01
A new method for studying the periodic system is described based on the combination of a Kohonen neural network and a set of chemical and physical properties. The classification results are directly shown in a two-dimensional map and easy to interpret. This is one of the major advantages of this approach over other methods reported in the…
Berente, Imre; Czinki, Eszter; Náray-Szabó, Gábor
2007-09-01
We report an approach for the determination of atomic monopoles in macromolecular systems using connectivity and geometry parameters alone. The method is also appropriate for the calculation of charge distributions based on the quantum mechanically determined wave function and does not suffer from the mathematical instability of other electrostatic potential fit methods. Copyright 2007 Wiley Periodicals, Inc.
Hybridized Multiscale Discontinuous Galerkin Methods for Multiphysics
2015-09-14
The approach yields accurate approximations of the Helmholtz equation for a very wide range of wave frequencies by combining the hybridizable discontinuous Galerkin methodology with local approximation spaces enriched with precomputed phases, which are solutions of the eikonal equation.
Computational Prediction of Metabolism: Sites, Products, SAR, P450 Enzyme Dynamics, and Mechanisms
2012-01-01
Metabolism of xenobiotics remains a central challenge for the discovery and development of drugs, cosmetics, nutritional supplements, and agrochemicals. Metabolic transformations are frequently related to the incidence of toxic effects that may result from the emergence of reactive species, the systemic accumulation of metabolites, or by induction of metabolic pathways. Experimental investigation of the metabolism of small organic molecules is particularly resource demanding; hence, computational methods are of considerable interest to complement experimental approaches. This review provides a broad overview of structure- and ligand-based computational methods for the prediction of xenobiotic metabolism. Current computational approaches to address xenobiotic metabolism are discussed from three major perspectives: (i) prediction of sites of metabolism (SOMs), (ii) elucidation of potential metabolites and their chemical structures, and (iii) prediction of direct and indirect effects of xenobiotics on metabolizing enzymes, where the focus is on the cytochrome P450 (CYP) superfamily of enzymes, the cardinal xenobiotics metabolizing enzymes. For each of these domains, a variety of approaches and their applications are systematically reviewed, including expert systems, data mining approaches, quantitative structure–activity relationships (QSARs), and machine learning-based methods, pharmacophore-based algorithms, shape-focused techniques, molecular interaction fields (MIFs), reactivity-focused techniques, protein–ligand docking, molecular dynamics (MD) simulations, and combinations of methods. Predictive metabolism is a developing area, and there is still enormous potential for improvement. However, it is clear that the combination of rapidly increasing amounts of available ligand- and structure-related experimental data (in particular, quantitative data) with novel and diverse simulation and modeling approaches is accelerating the development of effective tools for prediction of in vivo metabolism, which is reflected by the diverse and comprehensive data sources and methods for metabolism prediction reviewed here. This review attempts to survey the range and scope of computational methods applied to metabolism prediction and also to compare and contrast their applicability and performance. PMID:22339582
NASA Astrophysics Data System (ADS)
Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi
2014-09-01
Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it has also become increasingly important to be able to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach requires a considerable, if not prohibitive, computational cost for real-time application. In the present paper we propose two simplified methods for estimating the uncertainty affecting storm surge prediction with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this led to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances, directly extracted from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances nicely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations in the use of a statistical tidal model.
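The second approach amounts to regressing the observed forecast-error variance on variances extracted from the meteorological fields. A minimal sketch with synthetic stand-in data follows; the use of three predictor variances is an assumption for illustration.

```python
# Sketch: fitting weights for a linear combination of meteorological
# variances that approximates surge-forecast uncertainty. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_events = 200
met_var = rng.uniform(0.1, 2.0, size=(n_events, 3))  # e.g. wind, pressure...
true_a = np.array([0.6, 0.3, 0.1])
surge_var = met_var @ true_a + rng.normal(0, 0.02, n_events)

# least-squares fit of the combination coefficients
a_hat, *_ = np.linalg.lstsq(met_var, surge_var, rcond=None)
print(np.round(a_hat, 3))  # recovered weights, usable in real time
```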
1992-10-07
...approach, method, and means of fully and effectively bringing into play the combination of these two mechanisms in the market economy... at the time, they over-simplistically viewed the socialist economy... production must, while upholding the planned nature of the macroeconomy, fully bring into play the combined role of different economic sectors... the economic behaviors of different sectors have become more complicated and interlinked...
Application of p-Multigrid to Discontinuous Galerkin Formulations of the Poisson Equation
NASA Technical Reports Server (NTRS)
Helenbrook, B. T.; Atkins, H. L.
2006-01-01
We investigate p-multigrid as a solution method for several different discontinuous Galerkin (DG) formulations of the Poisson equation. Different combinations of relaxation schemes and basis sets have been combined with the DG formulations to find the best-performing combination. The damping factors of the schemes have been determined using Fourier analysis for both one- and two-dimensional problems. One important finding is that when using DG formulations, the standard approach of forming the coarse p matrices separately for each level of multigrid is often unstable. To ensure stability, the coarse p matrices must be constructed from the fine grid matrices using algebraic multigrid techniques. Of the relaxation schemes, we find that the combination of Jacobi relaxation with the spectral element basis is fairly effective. The results using this combination are p-sensitive in both one and two dimensions, but reasonable convergence rates can still be achieved for moderate values of p and isotropic meshes. A competitive alternative is block Gauss-Seidel relaxation, which actually outperforms a more expensive line relaxation when the mesh is isotropic. When the mesh becomes highly anisotropic, the implicit line method and the Gauss-Seidel implicit line method are the only effective schemes. Adding the Gauss-Seidel terms to the implicit line method gives a significant improvement over the line relaxation method.
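The stability point about constructing coarse operators from the fine-grid matrices can be illustrated with a generic two-level correction scheme, sketched below on a 1D Poisson matrix standing in for a DG discretization. The Galerkin product P^T A P plays the role of the algebraically constructed coarse matrix; the grid size, smoother parameters, and interpolation operator are assumptions for the example.

```python
# Sketch: two-level correction with damped-Jacobi smoothing and a
# Galerkin coarse operator P^T A P. 1D Poisson stands in for DG.
import numpy as np

def jacobi(A, b, x, sweeps=3, omega=0.7):
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson
b = np.ones(n)

# linear-interpolation prolongation from n//2 coarse points
P = np.zeros((n, n // 2))
for j in range(n // 2):
    i = 2 * j + 1
    P[i, j] = 1.0
    P[i - 1, j] += 0.5
    P[i + 1, j] += 0.5
Ac = P.T @ A @ P                      # Galerkin coarse operator

x = np.zeros(n)
for cycle in range(20):
    x = jacobi(A, b, x)               # pre-smooth
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, P.T @ r)   # coarse correction
    x = jacobi(A, b, x)               # post-smooth
print("residual:", np.linalg.norm(b - A @ x))
```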
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background: Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods: Active learning is a popular machine learning method that enables a learning algorithm such as an artificial neural network to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results: In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion: Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
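A minimal sketch of such an active-learning calibration loop follows, with a toy function standing in for the expensive simulator and a neural-network surrogate screening candidate parameter combinations. The target, tolerance, and batch sizes are invented; this shows the general pattern, not the UWBCS calibration itself.

```python
# Sketch: active-learning calibration loop. A surrogate model screens
# parameter combinations so only promising ones are simulated.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
target = 0.25

def simulator(theta):                 # toy stand-in for an expensive model
    return (theta[:, 0] - 0.4) ** 2 + 0.5 * (theta[:, 1] - 0.7) ** 2

candidates = rng.uniform(0, 1, size=(5000, 2))   # parameter combinations
idx = rng.choice(len(candidates), 50, replace=False)
evaluated = set(idx.tolist())
X, y = candidates[idx], simulator(candidates[idx])

for it in range(5):
    surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0).fit(X, y)
    pred = surrogate.predict(candidates)
    # rank unevaluated combinations by predicted closeness to the target
    order = np.argsort(np.abs(pred - target))
    batch = [i for i in order if i not in evaluated][:25]
    evaluated.update(batch)
    X = np.vstack([X, candidates[batch]])
    y = np.concatenate([y, simulator(candidates[batch])])

hits = np.abs(y - target) < 0.01
print(f"evaluated {len(y)} of {len(candidates)} combinations, {hits.sum()} hits")
```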
Makarova, N; Reiss, K; Zeeb, H; Razum, O; Spallek, J
2013-06-01
19.6% of Germany's population has a "migrant" background. Comprehensive epidemiological research on health and health development of this large, heterogeneous and increasingly important population group in Germany is still deficient. There is a lack of results on mortality and morbidity, particularly concerning chronic diseases and disease processes. The aim of this paper is to combine and to compare already applied methods with new methodological approaches for determining the vital status and the mortality of immigrants from Turkey and the former Soviet Union. For this purpose we used data from the state of Bremen (666 709 residents, last update 2010). We examined 2 methodological aspects: (i) possibilities for identifying immigrant background in the data of residents' registration office with different methods (onomastic, toponomastic, etc.) and (ii) opportunities for record linkage of the obtained data with the Bremen mortality index. Immigrants from Turkey and the former Soviet Union were successfully identified in databases of the residents' registration office by a combination of different methods. The combination of different methodological approaches proved to be considerably better than using one method only. Through the application of a name-based algorithm we found that Turkish immigrants comprise 6.9% of the total population living in Bremen. By combining the variables "citizenship" and "country of birth" the total population proportion of immigrants from the former Soviet Union was found to be 5%. We also identified the deceased immigrant population in Bremen. The information obtained from residents' registration office could be successfully linked by death register number with the data of the Bremen mortality index. This information can be used in further detailed mortality analyses. The results of this analysis show the existing opportunities to consider the heterogeneity of the German population in mortality research, especially by means of combination of different methods to identify the immigrant background. © Georg Thieme Verlag KG Stuttgart · New York.
Recent Advances in Delivery of Drug-Nucleic Acid Combinations for Cancer Treatment
Li, Jing; Wang, Yan; Zhu, Yu; Oupický, David
2013-01-01
Cancer treatment that uses a combination of approaches with the ability to affect multiple disease pathways has been proven highly effective in the treatment of many cancers. Combination therapy can include multiple chemotherapeutics or combinations of chemotherapeutics with other treatment modalities like surgery or radiation. However, despite the widespread clinical use of combination therapies, relatively little attention has been given to the potential of modern nanocarrier delivery methods, like liposomes, micelles, and nanoparticles, to enhance the efficacy of combination treatments. This lack of knowledge is particularly notable in the limited success of vectors for the delivery of combinations of nucleic acids with traditional small molecule drugs. The delivery of drug-nucleic acid combinations is particularly challenging due to differences in the physicochemical properties of the two types of agents. This review discusses recent advances in the development of delivery methods using combinations of small molecule drugs and nucleic acid therapeutics to treat cancer. This review primarily focuses on the rationale used for selecting appropriate drug-nucleic acid combinations as well as progress in the development of nanocarriers suitable for simultaneous delivery of drug-nucleic acid combinations. PMID:23624358
Entropy and generalized least square methods in assessment of the regional value of streamgages
Markus, M.; Vernon, Knapp H.; Tasker, Gary D.
2003-01-01
The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least square regression (GLS) method, developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.
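A histogram-based mutual information estimate between two gauge records is the basic building block of such entropy analyses. A minimal sketch with synthetic flows follows; the study's net-information bookkeeping and the GLS machinery are not reproduced.

```python
# Sketch: histogram estimate of the information shared by two gauges.
# Synthetic flows stand in for gauge records.
import numpy as np

def mutual_information(x, y, bins=20):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(4)
base = rng.lognormal(size=1000)               # shared regional signal
g1 = base + 0.3 * rng.lognormal(size=1000)    # two correlated gauges
g2 = base + 0.3 * rng.lognormal(size=1000)
print(round(mutual_information(g1, g2), 3), "nats")
```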
Combining Semantic and Lexical Methods for Mapping MedDRA to VCM Icons.
Lamy, Jean-Baptiste; Tsopra, Rosy
2018-01-01
VCM (Visualization of Concept in Medicine) is an iconic language that represents medical concepts, such as disorders, by icons. VCM has a formal semantics described by an ontology. The icons can be used in medical software for providing a visual summary or enriching texts. However, the use of VCM icons in user interfaces requires to map standard medical terminologies to VCM. Here, we present a method combining semantic and lexical approaches for mapping MedDRA to VCM. The method takes advantage of the hierarchical relations in MedDRA. It also analyzes the groups of lemmas in the term's labels, and relies on a manual mapping of these groups to the concepts in the VCM ontology. We evaluate the method on 50 terms. Finally, we discuss the method and suggest perspectives.
Estimating Characteristics of a Maneuvering Reentry Vehicle Observed by Multiple Sensors
2010-03-01
instead of as one large data set. This method allowed the filter to respond to changing dynamics. Jackson and Farbman's approach could be of... portion of the entire acceleration was due to drag. Lee and Liu adopted a more hybrid approach, combining least squares and Kalman filters [9... grows again as the window approaches the end of the available data. Three values for minimum window size, window size, and maximum window size are
Gaudin, Valérie
2017-09-01
Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that the method is fit-for-purpose, suited to regulatory requirements, and provides evidence of its performance. This article is focused on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is the validation itself with regard to pre-established criteria. The validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application for different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. The overall approaches, such as probability of detection (POD) and accuracy profile, are increasingly used in other fields of application. They may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and of European guideline for the validation of screening methods), usually applied to the screening methods for antibiotic residues, introduced a major characteristic and an improvement in the validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors or liquid chromatography coupled to tandem-mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches these last 20 years. Continued progress is required and perspectives for future development of guidelines, regulations and standards for validation are presented here.
NASA Technical Reports Server (NTRS)
DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
Zhu, Hongbin; Wang, Chunyan; Qi, Yao; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying
2013-01-15
A fingerprinting approach was developed by means of UPLC-ESI/MS(n) (ultra-performance liquid chromatography-electrospray ionization/mass spectrometry) for the quality control of processed Radix Aconiti, a widely used toxic traditional herbal medicine. The present fingerprinting approach was based on the two processing methods recorded in the Chinese Pharmacopoeia for the purpose of reducing toxicity and ensuring clinical therapeutic efficacy. Similarity evaluation, hierarchical cluster analysis and principal component analysis were performed to evaluate the similarity and variation of the samples. The results showed that the well-processed, unqualified-processed and raw Radix Aconiti samples could be reasonably clustered according to the contents of their constituents. The loading plot shows that the main chemical markers having the most influence on the discrimination between the qualified and unqualified samples were mainly monoester diterpenoid aconitines and diester diterpenoid aconitines. Finally, the UPLC-UV and UPLC-ESI/MS(n) characteristic fingerprints were established from the well-processed and purchased qualified samples. At the same time, a complementary quantification method for six aconitine-type alkaloids was developed using UPLC-UV and UPLC-ESI/MS. The average recovery of the monoester diterpenoid aconitines was 95.4-99.1% and that of the diester diterpenoid aconitines was 103-112%. The proposed combined UPLC-UV and UPLC-ESI/MS quantification method allows samples to be analyzed over a wide concentration range. Therefore, the established fingerprinting approach in combination with chemometric analysis provides a flexible and reliable method for the quality assessment of toxic herbal medicine. Copyright © 2012 Elsevier B.V. All rights reserved.
Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng
2014-01-01
Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution, enabling finer details to be revealed within a region of interest of a sample larger than the field of view than is possible with conventional techniques. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials. PMID:24763649
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
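To make the three heteroscedasticity treatments concrete, the sketch below writes each in its simplest form, with Gaussian errors standing in for the SEP distribution and the autoregressive component omitted; all numbers and parameter values are invented.

```python
# Sketch: simplified forms of the LM, NL, and BC heteroscedasticity
# treatments, and the combined (CA) idea. Invented numbers throughout.
import numpy as np

def lm_sigma(yhat, a, b):
    """LM: error standard deviation grows linearly with simulated flow."""
    return a + b * yhat

def nl_sigma(yhat, a, b, c):
    """NL: bounded growth via a hyperbolic tangent."""
    return a + b * np.tanh(yhat / c)

def boxcox(y, lam):
    """BC: transform flows so residuals become closer to homoscedastic."""
    return (y ** lam - 1.0) / lam if lam != 0 else np.log(y)

y = np.array([5.0, 20.0, 80.0, 150.0])      # observed flows
yhat = np.array([6.0, 18.0, 90.0, 140.0])   # simulated flows

print("LM sigma:", lm_sigma(yhat, a=0.5, b=0.1))
print("NL sigma:", nl_sigma(yhat, a=0.5, b=5.0, c=50.0))

# CA in spirit: transform first (BC), then let the residual std of the
# transformed flows vary linearly with the transformed simulation (LM)
lam = 0.3
resid = boxcox(y, lam) - boxcox(yhat, lam)
sigma = lm_sigma(boxcox(yhat, lam), a=0.1, b=0.02)
print("standardized residuals:", np.round(resid / sigma, 2))
```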
NASA Astrophysics Data System (ADS)
Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca
2018-01-01
Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method; global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation, which is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.
A Hybrid Approach for CpG Island Detection in the Human Genome.
Yang, Cheng-Hong; Lin, Yu-Da; Chiang, Yi-Cheng; Chuang, Li-Yeh
2016-01-01
CpG islands have been demonstrated to influence local chromatin structures and simplify the regulation of gene activity. However, the accurate and rapid determination of CpG islands in whole DNA sequences remains experimentally and computationally challenging. A novel procedure is proposed to detect CpG islands by combining clustering technology with a sliding-window, particle swarm optimization (PSO)-based method. Clustering technology is used to detect the locations of all possible CpG islands and to preprocess the data, effectively obviating the need for extensive and unnecessary processing of DNA fragments and thereby improving the efficiency of the sliding-window-based PSO search. The proposed approach, named ClusterPSO, provides versatile and highly sensitive detection of CpG islands in the human genome. In addition, the detection efficiency of ClusterPSO is compared with that of eight CpG island detection methods on the human genome. In this comparison, covering sensitivity, specificity, accuracy, performance coefficient (PC), and correlation coefficient (CC), ClusterPSO revealed superior detection ability among all of the tested methods. Moreover, the combination of clustering technology and the PSO method can successfully overcome their respective drawbacks while maintaining their advantages. Thus, clustering technology can be hybridized with an optimization algorithm to optimize CpG island detection. The prediction accuracy of ClusterPSO was quite high, indicating that the combination of CpGcluster and PSO has several advantages over CpGcluster and PSO alone. In addition, ClusterPSO significantly reduced implementation time.
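To make the underlying search problem concrete, the sketch below implements the classic sliding-window screen based on GC content and the observed/expected CpG ratio. This is the textbook criterion, not the ClusterPSO algorithm; the thresholds are common defaults and the sequence is a toy.

```python
# Sketch: classic sliding-window screen for CpG islands (GC content and
# observed/expected CpG ratio). Not the ClusterPSO algorithm.
def is_cpg_island(window, gc_min=0.5, oe_min=0.6):
    n = len(window)
    g, c = window.count("G"), window.count("C")
    cpg = window.count("CG")
    gc = (g + c) / n
    oe = (cpg * n) / (c * g) if c * g > 0 else 0.0  # observed/expected
    return gc >= gc_min and oe >= oe_min

def scan(seq, window=200, step=1):
    """Yield start positions of windows that satisfy the CpG criteria."""
    for i in range(0, len(seq) - window + 1, step):
        if is_cpg_island(seq[i:i + window]):
            yield i

seq = "AT" * 100 + "CG" * 150 + "TA" * 100   # toy sequence
print(list(scan(seq, window=200, step=50)))
```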
Combining Open-domain and Biomedical Knowledge for Topic Recognition in Consumer Health Questions.
Mrabet, Yassine; Kilicoglu, Halil; Roberts, Kirk; Demner-Fushman, Dina
2016-01-01
Determining the main topics in consumer health questions is a crucial step in their processing as it allows narrowing the search space to a specific semantic context. In this paper we propose a topic recognition approach based on biomedical and open-domain knowledge bases. In the first step of our method, we recognize named entities in consumer health questions using an unsupervised method that relies on a biomedical knowledge base, UMLS, and an open-domain knowledge base, DBpedia. In the next step, we cast topic recognition as a binary classification problem of deciding whether a named entity is the question topic or not. We evaluated our approach on a dataset from the National Library of Medicine (NLM), introduced in this paper, and another from the Genetic and Rare Disease Information Center (GARD). The combination of knowledge bases outperformed the results obtained by individual knowledge bases by up to 16.5% F1 and achieved state-of-the-art performance. Our results demonstrate that combining open-domain knowledge bases with biomedical knowledge bases can lead to a substantial improvement in understanding user-generated health content.
Quantum mechanical fragment methods based on partitioning atoms or partitioning coordinates.
Wang, Bo; Yang, Ke R; Xu, Xuefei; Isegawa, Miho; Leverentz, Hannah R; Truhlar, Donald G
2014-09-16
Conspectus: The development of more efficient and more accurate ways to represent reactive potential energy surfaces is a requirement for extending the simulation of large systems to more complex systems, longer-time dynamical processes, and more complete statistical mechanical sampling. One way to treat large systems is by direct dynamics fragment methods. Another way is by fitting system-specific analytic potential energy functions with methods adapted to large systems. Here we consider both approaches. First we consider three fragment methods that allow a given monomer to appear in more than one fragment. The first two approaches are the electrostatically embedded many-body (EE-MB) expansion and the electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), which we have shown to yield quite accurate results even when one restricts the calculations to include only electrostatically embedded dimers. The third fragment method is the electrostatically embedded molecular tailoring approach (EE-MTA), which is more flexible than EE-MB and EE-MB-CE. We show that electrostatic embedding greatly improves the accuracy of these approaches compared with the original unembedded approaches. Quantum mechanical fragment methods share with combined quantum mechanical/molecular mechanical (QM/MM) methods the need to treat a quantum mechanical fragment in the presence of the rest of the system, which is especially challenging for those parts of the rest of the system that are close to the boundary of the quantum mechanical fragment. This is a delicate matter even for fragments that are not covalently bonded to the rest of the system, but it becomes even more difficult when the boundary of the quantum mechanical fragment cuts a bond. We have developed a suite of methods for more realistically treating interactions across such boundaries. These methods include redistributing and balancing the external partial atomic charges and the use of tuned fluorine atoms for capping dangling bonds, and we have shown that they can greatly improve the accuracy. Finally we present a new approach that goes beyond QM/MM by combining the convenience of molecular mechanics with the accuracy of fitting a potential function to electronic structure calculations on a specific system. To make the latter practical for systems with a large number of degrees of freedom, we developed a method to interpolate between local internal-coordinate fits to the potential energy. A key issue for the application to large systems is that rather than assigning the atoms or monomers to fragments, we assign the internal coordinates to reaction, secondary, and tertiary sets. Thus, we make a partition in coordinate space rather than atom space. Fits to the local dependence of the potential energy on tertiary coordinates are arrayed along a preselected reaction coordinate at a sequence of geometries called anchor points; the potential energy function is called an anchor points reactive potential. Electrostatically embedded fragment methods and the anchor points reactive potential, because they are based on treating an entire system by quantum mechanical electronic structure methods but are affordable for large and complex systems, have the potential to open new areas for accurate simulations where combined QM/MM methods are inadequate.
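The many-body expansion on which EE-MB and EE-MB-CE build is compact when truncated at dimers. The sketch below uses a stub energy function in place of electronic-structure calls (electrostatic embedding would enter through those calls); the water-cluster setup is an invented example.

```python
# Sketch: bare many-body expansion truncated at dimers. The energy
# function is a placeholder stub, not an electronic-structure call.
from itertools import combinations

def fragment_energy(fragment):
    """Stub standing in for a QM calculation on one fragment."""
    return -1.0 * len(fragment) - 0.01 * len(fragment) ** 2

def mbe2(monomers):
    """E ~ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)."""
    e1 = {i: fragment_energy(m) for i, m in enumerate(monomers)}
    total = sum(e1.values())
    for i, j in combinations(range(len(monomers)), 2):
        e_ij = fragment_energy(monomers[i] + monomers[j])
        total += e_ij - e1[i] - e1[j]   # two-body correction
    return total

waters = [["O", "H", "H"] for _ in range(4)]   # four water monomers
print(round(mbe2(waters), 3))
```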
Working towards the SDGs: measuring resilience from a practitioner's perspective
NASA Astrophysics Data System (ADS)
van Manen, S. M.; Both, M.
2015-12-01
The broad universal nature of the SDGs requires integrated approaches across development sectors and action at a variety of scales: from global to local. In humanitarian and development contexts, particularly at the local level, working towards these goals is increasingly approached through the concept of resilience. Resilience is broadly defined as the ability to minimise the impact of, cope with and recover from the consequences of shocks and stresses, both natural and man-made, without compromising long-term prospects. Key in this are the physical resources required and the ability to organise these prior to and during a crisis. However, despite the active debate on the theoretical foundations of resilience, comparatively few measurement approaches have been developed. The conceptual diversity of the few existing approaches further illustrates the complexity of operationalising the concept. Here we present a practical method to measure community resilience using a questionnaire composed of a generic set of household-level indicators. Rooted in the sustainable livelihoods approach, it considers six domains: human, social, natural, economic, physical and political, and evaluates both resources and socio-cognitive factors. It is intended to be combined with more specific intervention-based questionnaires to systematically assess, monitor and evaluate the resilience of a community and the contribution of specific activities to resilience. Its use will be illustrated using a Haiti-based case study. The method presented supports knowledge-based decision making and impact monitoring. Furthermore, the evidence-based way of working contributes to accountability to a range of stakeholders and can be used for resource mobilisation. However, it should be noted that due to its inherent complexity and comprehensive nature there is no method or combination of methods and data types that can fully capture resilience in and across all of its facets, scales and domains.
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods, for segments of the target sequence with high similarity to sequences of known structure, with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence, or for which an overall good template can be found for the entire sequence but the prediction quality is markedly weaker in putative domain-linker regions.
Coherent beam combining in atmospheric channels using gated backscatter.
Naeh, Itay; Katzir, Abraham
2016-02-01
This paper introduces the concept of atmospheric channels and describes a possible approach for the coherent beam combining of lasers of an optical phased array (OPA) in a turbulent atmosphere. By using the recently introduced sparse spectrum harmonic augmentation method, a comprehensive simulative investigation was performed and the exceptional properties of the atmospheric channels were numerically demonstrated. Among the interesting properties are the ability to guide light in a confined manner in a refractive channel, the ability to gather different sources into the same channel, and the ability to maintain a constant relative phase within the channel between several sources. The newly introduced guiding properties, combined with a suggested method for channel probing and phase measurement by aerosol-backscattered radiation, allow improving the coherence of the phased array's elements and refocusing energy at the location of the channel in order to increase power in the bucket without feedback from the target. The method relies on electronic focusing, electronic scanning, and time gating of the OPA, combined with relative phase measurements between elements.
Combined acoustic and optical trapping
Thalhammer, G.; Steiger, R.; Meinschad, M.; Hill, M.; Bernet, S.; Ritsch-Marte, M.
2011-01-01
Combining several methods for contact free micro-manipulation of small particles such as cells or micro-organisms provides the advantages of each method in a single setup. Optical tweezers, which employ focused laser beams, offer very precise and selective handling of single particles. On the other hand, acoustic trapping with wavelengths of about 1 mm allows the simultaneous trapping of many, comparatively large particles. With conventional approaches it is difficult to fully employ the strengths of each method due to the different experimental requirements. Here we present the combined optical and acoustic trapping of motile micro-organisms in a microfluidic environment, utilizing optical macro-tweezers, which offer a large field of view and working distance of several millimeters and therefore match the typical range of acoustic trapping. We characterize the acoustic trapping forces with the help of optically trapped particles and present several applications of the combined optical and acoustic trapping, such as manipulation of large (75 μm) particles and active particle sorting. PMID:22025990
Applying Emax model and bivariate thin plate splines to assess drug interactions
Kong, Maiying; Lee, J. Jack
2014-01-01
We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95% point-wise confidence interval as well as its 95% simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies. PMID:20036878
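For context, the Loewe additivity model referenced above defines the additive surface implicitly: a combination dose $(d_1, d_2)$ producing effect $y$ is additive when, with $D_{y,1}$ and $D_{y,2}$ the single-drug doses yielding the same effect $y$,

$$ \frac{d_1}{D_{y,1}} + \frac{d_2}{D_{y,2}} = 1, $$

with left-hand sides below (above) one indicating synergy (antagonism). The single-drug dose-effect curves here follow the Emax model, e.g. $y = E_{\max}\, d / (ED_{50} + d)$ in its simplest form.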
One output function: a misconception of students studying digital systems - a case study
NASA Astrophysics Data System (ADS)
Trotskovsky, E.; Sabag, N.
2015-05-01
Background: Learning processes are usually characterized by students' misunderstandings and misconceptions. Engineering educators intend to help their students overcome their misconceptions and achieve correct understanding of the concept. This paper describes a misconception in digital systems held by many students who believe that combinational logic circuits should have only one output. Purpose: The current study aims to investigate the roots of the misconception about the one-output function and the pedagogical methods that can help students overcome the misconception. Sample: Three hundred and eighty-one students in the Departments of Electrical and Electronics and Mechanical Engineering at an academic engineering college, who learned the same topics of a digital combinational system, participated in the research. Design and method: In the initial research stage, students were taught according to the traditional method - first to design a one-output combinational logic system, and then to implement a system with a number of output functions. In the main stage, an experimental group was taught using a new method whereby they were shown how to implement a system with several output functions, prior to learning about one-output systems. A control group was taught using the traditional method. In the replication stage (the third stage), an experimental group was taught using the new method. A mixed research methodology was used to examine the results of the new learning method. Results: Quantitative research showed that the new teaching approach resulted in a statistically significant decrease in student errors, and qualitative research revealed students' erroneous thinking patterns. Conclusions: It can be assumed that the traditional teaching method generates an incorrect mental model of the one-output function among students. The new pedagogical approach prevented the creation of an erroneous mental model and helped students develop the correct conceptual understanding.
A variational eigenvalue solver on a photonic quantum processor
Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.
2014-01-01
Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
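The classical heart of this hybrid algorithm is an outer optimization of ansatz parameters against the energy expectation value. A minimal numerical sketch follows, with a toy 2x2 Hamiltonian rather than the He-H+ problem, and the expectation value computed directly rather than estimated from photonic measurements:

```python
# Variational principle in miniature: minimize <psi(theta)|H|psi(theta)>
# over a parameterized trial state. The Hamiltonian is illustrative.
import numpy as np
from scipy.optimize import minimize

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # toy Hermitian operator

def energy(theta):
    psi = np.array([np.cos(theta[0]), np.sin(theta[0])])  # normalized ansatz
    return psi @ H @ psi

res = minimize(energy, x0=[0.3])
print("variational ground-state energy:", res.fun)
print("exact lowest eigenvalue:       ", np.linalg.eigvalsh(H)[0])
```

Because the ansatz here spans all normalized real two-vectors, the optimizer recovers the exact lowest eigenvalue; on hardware the same loop runs with measured expectation values.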
Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini
2009-01-01
Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement together with significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method emphasizes only small bright details, i.e., possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which the radiologist is free to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods was implemented. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard and on images from clinical practice, using digitized films and digital images from a full-field digital mammography system.
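As a rough illustration of the morphological idea, a white top-hat transform suppresses slowly varying background while retaining bright details smaller than the structuring element. The footprint size and the synthetic image below are assumptions for illustration, not the paper's actual operator combination:

```python
# White top-hat enhancement sketch: small bright spots survive, the
# background gradient (a stand-in for dense tissue) is removed.
import numpy as np
from skimage.morphology import white_tophat, disk

background = np.linspace(0, 1, 256)[None, :] * np.ones((256, 256))
image = background.copy()
image[100:103, 100:103] += 0.8  # small bright spot, a microcalcification stand-in

enhanced = white_tophat(image, footprint=disk(5))  # keeps details smaller than the disk
print("background residue:", enhanced[10, 10], "spot response:", enhanced[101, 101])
```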
Seo, Jung Hee; Mittal, Rajat
2010-01-01
A new sharp-interface immersed boundary method-based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method applies the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
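A stripped-down 1D sketch of the interpolation ingredient: fit a polynomial to nearby field values by weighted least squares, then evaluate it at a ghost point. The stencil, weights, and quadratic basis are illustrative; the actual method works in higher dimensions and builds the boundary condition into the fit.

```python
# Weighted least-squares polynomial fit evaluated at a ghost point.
import numpy as np

x = np.array([0.1, 0.2, 0.35, 0.5, 0.6])   # 1D stencil of fluid points
f = np.sin(2 * np.pi * x)                   # sampled field values
xg = 0.0                                    # ghost-point location

w = 1.0 / (np.abs(x - xg) + 0.1)            # closer points weighted more
V = np.vander(x, 3)                         # quadratic basis [x^2, x, 1]
coef, *_ = np.linalg.lstsq(V * w[:, None], f * w, rcond=None)
print("ghost value:", np.polyval(coef, xg), "exact:", np.sin(0.0))
```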
NASA Astrophysics Data System (ADS)
Ataei-Esfahani, Armin
In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider the case of robust aircraft control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the application of the SOS method, one can describe a stability analysis or control design problem as a convex optimization problem, which can efficiently be solved using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method can provide a reliable and fast approach for stability analysis and control design for low-order systems defined over the space of relatively low-degree polynomials. However, the SOS method is not well suited for control problems involving uncertain systems, especially those with a relatively high number of uncertainties or a non-affine uncertainty structure. In order to avoid issues relating to the increased complexity of the SOS problems for uncertain systems, we present an algorithm that transforms an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which can guarantee the feasibility of a given solution candidate with an a priori fixed probability of violation and a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties. The first approach is based on a combination of PEA and the SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches for parameter-dependent Lyapunov functions. The control design problem is investigated through a case study of a hypersonic aircraft model with parametric uncertainties. Through time-scale decomposition and a series of function approximations, the complexity of the aircraft model is reduced to fall within the capability of SDP solvers. The control design problem is then formulated as a convex problem using the dual of the Lyapunov theorem. A nonlinear robust controller is sought using the combined PEA/SOS method. The response of the uncertain aircraft model is evaluated for two sets of pilot commands. As the simulation results show, the aircraft remains stable under up to 50% uncertainty in aerodynamic coefficients and can follow the pilot commands.
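To make the SOS-to-SDP link concrete: a polynomial p(x) is a sum of squares iff p(x) = z^T Q z for some positive semidefinite Q, with z a vector of monomials. A hypothetical univariate example using the cvxpy modeling layer (an assumption for illustration, not the dissertation's toolchain):

```python
# SOS feasibility as an SDP: p(x) = x^4 + 2x^2 + 1, basis z = [1, x, x^2].
# Coefficient-matching constraints tie entries of Q to coefficients of p.
import cvxpy as cp

Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # positive semidefinite
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 0,             # x
    2 * Q[0, 2] + Q[1, 1] == 2,   # x^2
    2 * Q[1, 2] == 0,             # x^3
    Q[2, 2] == 1,                 # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("SOS certificate found:", prob.status == cp.OPTIMAL)
```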
Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)
NASA Astrophysics Data System (ADS)
Teixeira, J.
2013-12-01
In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
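In the standard notation of the EDMF literature (not spelled out in the abstract), the turbulent flux of a conserved variable $\phi$ is decomposed into a local diffusive part and a non-local plume part:

$$ \overline{w'\phi'} \;=\; -K\,\frac{\partial \overline{\phi}}{\partial z} \;+\; M\left(\phi_u - \overline{\phi}\right), $$

where $K$ is the eddy diffusivity, $M$ the updraft mass flux, and $\phi_u$ the updraft value of $\phi$; stochastic variants can draw plume properties from distributions rather than using a single deterministic bulk updraft.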
Marchand, Jérémy; Martineau, Estelle; Guitton, Yann; Dervilly-Pinel, Gaud; Giraudeau, Patrick
2017-02-01
Multi-dimensional NMR is an appealing approach for dealing with the challenging complexity of biological samples in metabolomics. This article describes how spectroscopists have recently challenged their imagination in order to make 2D NMR a powerful tool for quantitative metabolomics, based on innovative pulse sequences combined with meticulous analytical chemistry approaches. Clever time-saving strategies have also been explored to make 2D NMR a high-throughput tool for metabolomics, relying on alternative data acquisition schemes such as ultrafast NMR. Currently, much work is aimed at drastically boosting the NMR sensitivity thanks to hyperpolarisation techniques, which have been used in combination with fast acquisition methods and could greatly expand the application potential of NMR metabolomics.
NASA Technical Reports Server (NTRS)
Schlegel, R. G.
1982-01-01
It is important for industry and NASA to assess the status of acoustic design technology for predicting and controlling helicopter external noise in order for a meaningful research program to be formulated which will address this problem. The prediction methodologies available to the designer and the acoustic engineer are three-fold. First is what has been described as a first principle analysis. This analysis approach attempts to remove any empiricism from the analysis process and deals with a theoretical mechanism approach to predicting the noise. The second approach attempts to combine first principle methodology (when available) with empirical data to formulate source predictors which can be combined to predict vehicle levels. The third is an empirical analysis, which attempts to generalize measured trends into a vehicle noise prediction method. This paper will briefly address each.
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method was drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss than the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millis, Andrew
Understanding the behavior of interacting electrons in molecules and solids so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first-principles treatment of superconducting and magnetic properties of strongly correlated materials; new techniques for existing methods, including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method; a self-energy embedding theory; and a new memory-function based approach to the calculation of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials and to characterize and improve the properties of nanoscale devices.
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The Log-Gaussian Cox Process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly-stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation (INLA), and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
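For reference, the doubly-stochastic structure mentioned above can be written hierarchically as

$$ Y \mid \Lambda \sim \mathrm{PoissonProcess}(\Lambda), \qquad \Lambda(s) = \exp\{Z(s)\}, \qquad Z \sim \mathcal{GP}(\mu, C), $$

so inference must integrate over the latent Gaussian process $Z$; Hamiltonian Monte Carlo, INLA, and variational Bayes are different ways of performing or approximating that integration.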
Analysis of a novel device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng; Wu, Wei
2017-02-01
The combination of the strap-down inertial navigation system (SINS) and the celestial navigation system (CNS) is one of the popular measures to constitute an integrated navigation system. A star sensor (SS) is used as a precise attitude determination device in CNS. To solve the problem that the star image obtained by the SS is motion-blurred under dynamic conditions, the attitude-correlated frames (ACF) approach is presented, and a star sensor which works based on the ACF approach is named ACFSS. Building on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feedback of the gyro error to the ACF process is one of the typical characteristics of this SINS/CNS deeply integrated navigation method. Simulation results verify the method's validity and its efficiency in improving gyro accuracy, demonstrating that the method is feasible.
A multi-domain spectral method for time-fractional differential equations
NASA Astrophysics Data System (ADS)
Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.
2015-07-01
This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
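One concrete ingredient is that Gauss-Jacobi quadrature absorbs the singular kernel into its weight function. A small sketch under that reading (illustrative integrand; scipy's roots_jacobi supplies the nodes and weights):

```python
# Gauss-Jacobi quadrature for a weakly singular memory kernel:
# check  \int_0^1 (1-s)^(-1/2) s^2 ds = 16/15  after mapping [0,1] -> [-1,1].
import numpy as np
from scipy.special import roots_jacobi

alpha = 0.5                              # kernel singularity (1-s)^(-alpha)
x, w = roots_jacobi(8, -alpha, 0.0)      # weight (1-x)^(-1/2) on [-1,1]

f = lambda s: s**2
s = (x + 1) / 2                          # map nodes to [0,1]
integral = 2.0**(alpha - 1) * np.sum(w * f(s))  # Jacobian of the mapping
print(integral, "exact:", 16 / 15)
```

The quadrature is exact for polynomial smooth parts, which is why combining it with the three-term recurrences of Jacobi polynomials supports high-order accuracy near the singularity.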
Using mixed methods in health research
Woodman, Jenny
2013-01-01
Summary: Mixed methods research is the use of quantitative and qualitative methods in a single study or series of studies. It is an emergent methodology which is increasingly used by health researchers, especially within health services research. There is a growing literature on the theory, design and critical appraisal of mixed methods research. However, there are few papers that summarize this methodological approach for health practitioners who wish to conduct or critically engage with mixed methods studies. The objective of this paper is to provide an accessible introduction to mixed methods for clinicians and researchers unfamiliar with this approach. We present a synthesis of key methodological literature on mixed methods research, with examples from our own work and that of others, to illustrate the practical applications of this approach within health research. We summarize definitions of mixed methods research, the value of this approach, key aspects of study design and analysis, and discuss the potential challenges of combining quantitative and qualitative methods and data. One of the key challenges within mixed methods research is the successful integration of quantitative and qualitative data during analysis and interpretation. However, the integration of different types of data can generate insights into a research question, resulting in enriched understanding of complex health research problems. PMID:23885291
Disparities in urban/rural environmental quality
Individuals experience simultaneous exposure to many pollutants and social factors, which cluster to affect human health outcomes. Because the optimal approach to combining these factors is unknown, we developed a method to model simultaneous exposure using criteria air pollutant...
Synthesis and Control of Flexible Systems with Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Lim, Kyong B.
2009-01-01
An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.
Zhang, Tao; Gao, Feng; Jiang, Xiangqian
2017-10-02
This paper proposes an approach to measure double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry (DPWSI). The principle and mathematical model are discussed, and the measurement system is calibrated with a combination of standard step-height samples, for the vertical calibration of both probes, and a specially designed calibration artefact, for building up the spatial coordinate relationship of the dual-probe measurement system. The topography of the specially designed artefact is acquired by combining measurement results from a white light scanning interferometer (WLSI) and a scanning electron microscope (SEM) for reference. The relative location of the two probes is then determined with a 3D registration algorithm. Experimental validation of the approach is provided and the results show that the method is able to measure double-sided near-right-angle structured surfaces with nanometer vertical resolution and micrometer lateral resolution.
NASA Astrophysics Data System (ADS)
Vargas-Barbosa, Nella M.; Roling, Bernhard
2018-05-01
The potential of zero charge (PZC) is a fundamental property that describes the electrode/electrolyte interface. The determination of the PZC at electrode/ionic liquid interfaces has been challenging due to the lack of models that fully describe these complex interfaces as well as the non-standardized approaches used to characterize them. In this work, we present a method that combines electrode immersion transient and impedance measurements for the determination of the PZC. This combined approach allows the distinction of the potential of zero free charge (pzfc), related to fast double layer charging on a millisecond timescale, from a potential of zero charge on a timescale of tens of seconds related to slower ion transport processes at the interface. Our method highlights the complementarity of these electrochemical techniques and the importance of selecting the correct timescale to execute experiments and interpret the results.
Prischi, Filippo; Pastore, Annalisa
2016-01-01
The current main challenge of Structural Biology is to undertake the structure determination of increasingly complex systems in the attempt to better understand their biological function. As systems become more challenging, however, there is an increasing demand for the parallel use of more than one independent technique to allow pushing the frontiers of structure determination and, at the same time, obtaining independent structural validation. Combinations of different Structural Biology methods have been named hybrid approaches. The aim of this review is to critically discuss the most recent examples and new developments that have allowed structure determination or experimentally based modelling of various molecular complexes, selecting them among those that combine the use of nuclear magnetic resonance and small-angle scattering techniques. We provide a selective but focused account of some of the most exciting recent approaches and discuss their possible further developments.
Parametric design of pressure-relieving foot orthosis using statistics-based finite element method.
Cheung, Jason Tak-Man; Zhang, Ming
2008-04-01
Custom-molded foot orthoses are frequently prescribed in routine clinical practice to prevent or treat plantar ulcers in diabetes by reducing the peak plantar pressure. However, the design and fabrication of foot orthoses vary among clinical practitioners and manufacturers, and little information about the parametric effect of different combinations of design factors is available. As an alternative to the experimental approach, computational models of the foot and footwear can provide efficient evaluations of different combinations of structural and material design factors on plantar pressure distribution. In this study, a combined finite element and Taguchi method was used to identify the sensitivity of five design factors (arch type, insole and midsole thickness, insole and midsole stiffness) of a foot orthosis on peak plantar pressure relief. From the FE predictions, the custom-molded shape was found to be the most important design factor in reducing peak plantar pressure. Besides the use of an arch-conforming foot orthosis, the insole stiffness was found to be the second most important factor for peak pressure reduction. The other design factors, insole thickness, midsole stiffness and midsole thickness, played less important roles in peak pressure reduction, in the given order. The statistics-based FE method was found to be an effective approach in evaluating and optimizing the design of foot orthoses.
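A minimal sketch of the combined FE/Taguchi idea: peak plantar pressures from an orthogonal array of simulation runs are reduced to per-factor main effects. The L4 array below covers only three hypothetical two-level factors, and the pressure values are invented for illustration:

```python
# Taguchi main-effects analysis over an L4(2^3) orthogonal array.
import numpy as np

# Columns: arch type, insole stiffness, insole thickness (2 levels each)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
peak_pressure = np.array([320.0, 305.0, 260.0, 250.0])  # kPa, illustrative FE outputs

for j, factor in enumerate(["arch type", "insole stiffness", "insole thickness"]):
    effect = peak_pressure[L4[:, j] == 1].mean() - peak_pressure[L4[:, j] == 0].mean()
    print(f"{factor}: mean effect {effect:+.1f} kPa")
```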
Computer-aided interpretation approach for optical tomographic images
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.
2010-11-01
A computer-aided interpretation approach is proposed to detect rheumatic arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities when compared to single-parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than values obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
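For clarity, the reported performance measures derive from the confusion matrix of affected/unaffected classifications; a minimal sketch with illustrative counts:

```python
# Sensitivity, specificity, and Youden index from a confusion matrix.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)      # true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    youden = sensitivity + specificity - 1
    return sensitivity, specificity, youden

sens, spec, youden = diagnostic_metrics(tp=45, fn=5, tn=46, fp=4)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} Youden={youden:.2f}")
```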
NASA Astrophysics Data System (ADS)
Hartung, Christine; Spraul, Raphael; Schuchert, Tobias
2017-10-01
Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
Batke, Monika; Gütlein, Martin; Partosch, Falko; Gundert-Remy, Ursula; Helma, Christoph; Kramer, Stefan; Maunz, Andreas; Seeland, Madeleine; Bitsch, Annette
2016-01-01
Interest is increasing in the development of non-animal methods for toxicological evaluations. These methods are, however, particularly challenging for complex toxicological endpoints such as repeated dose toxicity. European legislation, e.g., the European Union's Cosmetic Directive and REACH, demands the use of alternative methods. Frameworks, such as the Read-across Assessment Framework or the Adverse Outcome Pathway Knowledge Base, support the development of these methods. The aim of the project presented in this publication was to develop substance categories for a read-across with complex endpoints of toxicity based on existing databases. The basic conceptual approach was to combine structural similarity with shared mechanisms of action. Substances with similar chemical structure and toxicological profile form candidate categories suitable for read-across. We combined two databases on repeated dose toxicity, the RepDose database and the ELINCS database, to form a common database for the identification of categories. The resulting database contained physicochemical, structural, and toxicological data, which were refined and curated for cluster analyses. We applied the Predictive Clustering Tree (PCT) approach for clustering chemicals based on structural and on toxicological information to detect groups of chemicals with similar toxic profiles and pathways/mechanisms of toxicity. As many of the experimental toxicity values were not available, these values were imputed by predicting them with a multi-label classification method prior to clustering. The clustering results were evaluated by assessing chemical and toxicological similarities with the aim of identifying clusters with a concordance between structural information and toxicity profiles/mechanisms. From these chosen clusters, seven were selected for a quantitative read-across, based on a small ratio (< 5) between the highest and the lowest NOAEL in the cluster. We discuss the limitations of the approach. Based on this analysis we propose improvements for a follow-up approach, such as incorporation of metabolic information and more detailed mechanistic information. The software enables the user to allocate a substance to a cluster and to use this information for a possible read-across. The clustering tool is provided as a free web service, accessible at http://mlc-reach.informatik.uni-mainz.de.
Dose-finding design for multi-drug combinations
Wages, Nolan A; Conaway, Mark R; O'Quigley, John
2012-01-01
Background: Most of the current designs used for Phase I dose-finding trials in oncology will either involve only a single cytotoxic agent or will impose some implicit ordering among the doses. The goal of the studies is to estimate the maximum tolerated dose (MTD), the highest dose that can be administered with an acceptable level of toxicity. A key working assumption of these methods is the monotonicity of the dose–toxicity curve. Purpose: Here we consider situations in which the monotonicity assumption may fail. These studies are becoming increasingly common in practice, most notably, in phase I trials that involve combinations of agents. Our focus is on studies where there exist pairs of treatment combinations for which the ordering of the probabilities of a dose-limiting toxicity cannot be known a priori. Methods: We describe a new dose-finding design which can be used for multiple-drug trials and can be applied to this kind of problem. Our methods proceed by laying out all possible orderings of toxicity probabilities that are consistent with the known orderings among treatment combinations and allowing the continual reassessment method (CRM) to provide efficient estimates of the MTD within these orders. The design can be seen to simplify to the CRM when the full ordering is known. Results: We study the properties of the design via simulations that provide comparisons to the Bayesian approach to partial orders (POCRM) of Wages, Conaway, and O'Quigley. The POCRM was shown to perform well when compared to other suggested methods for partial orders. Therefore, we compare our approach to it in order to assess the performance of the new design. Limitations: A limitation concerns the number of possible orders. There are dose-finding studies with combinations of agents that can lead to a large number of possible orders. In this case, it may not be feasible to work with all possible orders. Conclusions: The proposed design demonstrates the ability to effectively estimate MTD combinations in partially ordered dose-finding studies. Because it relaxes the monotonicity assumption, it can be considered a multivariate generalization of the CRM. Hence, it can serve as a link between single- and multiple-agent dose-finding trials. PMID:21652689
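For readers unfamiliar with the CRM that the design generalizes, here is a compact sketch of one model-updating step under a single known ordering; the power-model skeleton, prior, and trial data are illustrative assumptions:

```python
# One CRM update: p_i(a) = skeleton_i ** exp(a), posterior mean of a by
# numerical integration, next dose = estimated toxicity closest to target.
import numpy as np
from scipy.integrate import quad

skeleton = np.array([0.05, 0.12, 0.20, 0.30, 0.40])  # prior toxicity guesses
target = 0.20
doses_given = [2, 2, 3]        # 0-indexed doses already administered
toxicities = [0, 0, 1]         # observed DLT indicators

def likelihood(a):
    p = skeleton[doses_given] ** np.exp(a)
    return np.prod(np.where(toxicities, p, 1 - p))

prior = lambda a: np.exp(-a**2 / 2) / np.sqrt(2 * np.pi)  # N(0,1) prior on a
norm = quad(lambda a: likelihood(a) * prior(a), -10, 10)[0]
post_mean_a = quad(lambda a: a * likelihood(a) * prior(a), -10, 10)[0] / norm

p_hat = skeleton ** np.exp(post_mean_a)
next_dose = int(np.argmin(np.abs(p_hat - target)))
print("estimated toxicities:", np.round(p_hat, 3), "next dose index:", next_dose)
```

The partial-order design runs this machinery across the candidate orderings rather than assuming a single monotone one.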
Allelic-based gene-gene interaction associated with quantitative traits.
Jung, Jeesun; Sun, Bin; Kwon, Deukwoo; Koller, Daniel L; Foroud, Tatiana M
2009-05-01
Recent studies have shown that quantitative phenotypes may be influenced not only by multiple single nucleotide polymorphisms (SNPs) within a gene but also by the interaction between SNPs at unlinked genes. We propose a new statistical approach that can detect gene-gene interactions at the allelic level that contribute to the phenotypic variation in a quantitative trait. By testing for the association of allelic combinations at multiple unlinked loci with a quantitative trait, we can detect the SNP allelic interaction whether or not it can be detected as a main effect. Our proposed method assigns a score to unrelated subjects according to their allelic combination inferred from observed genotypes at two or more unlinked SNPs, and then tests for the association of the allelic score with a quantitative trait. To investigate the statistical properties of the proposed method, we performed a simulation study to estimate type I error rates and power and demonstrated that this allelic approach achieves greater power than the more commonly used genotypic approach to test for gene-gene interaction. As an example, the proposed method was applied to data obtained as part of a candidate gene study of sodium retention by the kidney. We found that this method detects an interaction between the calcium-sensing receptor gene (CaSR), the chloride channel gene (CLCNKB) and the Na-K-2Cl cotransporter gene (SLC12A1) that contributes to variation in diastolic blood pressure.
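A rough sketch of the scoring-and-testing idea under one hypothetical coding scheme (the paper's actual score assignment is more elaborate):

```python
# Score subjects by an allelic combination at two unlinked SNPs and test
# the score's association with a quantitative trait. Data are simulated.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n = 200
snp1 = rng.integers(0, 3, n)   # minor-allele counts at locus 1
snp2 = rng.integers(0, 3, n)   # minor-allele counts at locus 2

# Illustrative interaction coding: product of minor-allele counts
allelic_score = snp1 * snp2
trait = 0.3 * allelic_score + rng.normal(0, 1, n)  # simulated interaction effect

result = linregress(allelic_score, trait)
print(f"slope={result.slope:.3f}, p-value={result.pvalue:.2e}")
```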
Thermodynamics and proton activities of protic ionic liquids with quantum cluster equilibrium theory
NASA Astrophysics Data System (ADS)
Ingenmey, Johannes; von Domaros, Michael; Perlt, Eva; Verevkin, Sergey P.; Kirchner, Barbara
2018-05-01
We applied the binary Quantum Cluster Equilibrium (bQCE) method to a number of alkylammonium-based protic ionic liquids in order to predict boiling points, vaporization enthalpies, and proton activities. The theory combines statistical thermodynamics of van der Waals-type clusters with ab initio quantum chemistry and yields the partition functions (and associated thermodynamic potentials) of binary mixtures over a wide range of thermodynamic phase points. Unlike conventional cluster approaches that are limited to the prediction of thermodynamic properties, dissociation reactions can be effortlessly included in the bQCE formalism, giving access to ionicities as well. The method is open to quantum chemical methods at any level of theory, but combination with low-cost composite density functional theory methods and the proposed systematic approach to generate cluster sets provides a computationally inexpensive and mostly parameter-free way to predict such properties at good-to-excellent accuracy. Boiling points can be predicted within an accuracy of 50 K, reaching excellent accuracy for ethylammonium nitrate. Vaporization enthalpies are predicted within an accuracy of 20 kJ mol⁻¹ and can be systematically interpreted on a molecular level. We present the first theoretical approach to predict proton activities in protic ionic liquids, with results fitting well into the experimentally observed correlation. Furthermore, enthalpies of vaporization were measured experimentally for some alkylammonium nitrates and an excellent linear correlation with vaporization enthalpies of their respective parent amines is observed.
Applications of hybrid genetic algorithms in seismic tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos
2011-11-01
Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution-entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data-fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulations, and has attracted the attention of the earth sciences community during the last decade, with several applications already presented for several geophysical problems. In this paper, we examine the efficiency of the combination of the typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength, and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used for testing the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
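A toy sketch of the hybrid global/local idea on a two-parameter model: a coarse genetic search supplies the starting model that a derivative-based local method then refines. The misfit function and GA settings are stand-ins, not the paper's tomographic objective:

```python
# Hybrid GOM/LOM optimization: tiny genetic algorithm, then local refinement.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Multi-modal stand-in for a travel-time misfit over a 2-parameter model
misfit = lambda m: np.sum((m - np.array([1.5, 3.0]))**2) + 0.3 * np.sin(5 * m).sum()

# --- global stage: minimal genetic algorithm ---
pop = rng.uniform(0, 5, size=(40, 2))
for _ in range(30):
    fit = np.array([misfit(m) for m in pop])
    parents = pop[np.argsort(fit)[:20]]                    # selection
    children = (parents[rng.integers(0, 20, 20)] +
                parents[rng.integers(0, 20, 20)]) / 2      # crossover
    children += rng.normal(0, 0.1, children.shape)         # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(m) for m in pop])]

# --- local stage: gradient-based refinement from the GA solution ---
result = minimize(misfit, best)
print("GA model:", best, "-> refined model:", result.x)
```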
Todd, E Michelle; Torrence, Brett S; Watts, Logan L; Mulhearn, Tyler J; Connelly, Shane; Mumford, Michael D
2017-01-01
To delineate best practices for courses on research ethics, the present effort sought to identify themes related to instructional methods reflected in effective research ethics and responsible conduct of research (RCR) courses. By utilizing a qualitative review, four themes relevant to instructional methods were identified in effective research ethics courses: active participation, case-based activities, a combination of individual and group approaches, and a small number of instructional methods. Three instructional method themes associated with less effective courses were also identified: passive learning, a group-based approach, and a large number of instructional methods. Key characteristics of each theme, along with example courses relative to each theme, are described. Additionally, implications regarding these instructional method themes and recommendations for best practices in research ethics courses are discussed.
Risk analysis with a fuzzy-logic approach of a complex installation
NASA Astrophysics Data System (ADS)
Peikert, Tim; Garbe, Heyno; Potthast, Stefan
2016-09-01
This paper introduces a procedural method based on fuzzy logic to systematically analyze the risk of an electronic system in an intentional electromagnetic environment (IEME). The method analyzes the susceptibility of a complex electronic installation with respect to intentional electromagnetic interference (IEMI). It combines the advantages of well-known techniques such as fault tree analysis (FTA), electromagnetic topology (EMT) and Bayesian networks (BN) and extends them with an approach to handle uncertainty, using fuzzy sets, membership functions and fuzzy logic in combination with probability functions and linguistic terms.
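A minimal sketch of the fuzzy machinery named above: triangular membership functions grade a crisp quantity into linguistic terms, which simple rules then map to a risk grade. The breakpoints, units, and rule are illustrative assumptions, not values from the paper:

```python
# Triangular membership functions plus one fuzzy rule.
import numpy as np

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c] and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

coupling_dB = 12.0  # hypothetical interference coupling level at a device port
low    = triangular(coupling_dB, 0, 5, 15)
medium = triangular(coupling_dB, 5, 15, 25)
high   = triangular(coupling_dB, 15, 25, 40)

# One fuzzy rule: risk(high) = min(membership(high), susceptibility weight)
risk_high = min(high, 0.8)
print(f"low={low:.2f} medium={medium:.2f} high={high:.2f} risk(high)={risk_high:.2f}")
```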
MR Imaging-Guided Attenuation Correction of PET Data in PET/MR Imaging.
Izquierdo-Garcia, David; Catana, Ciprian
2016-04-01
Attenuation correction (AC) is one of the most important challenges in the recently introduced combined PET/magnetic resonance (MR) scanners. PET/MR AC (MR-AC) approaches aim to develop methods that allow accurate estimation of the linear attenuation coefficients of the tissues and other components located in the PET field of view. MR-AC methods can be divided into 3 categories: segmentation, atlas, and PET based. This review provides a comprehensive list of the state-of-the-art MR-AC approaches and their pros and cons. The main sources of artifacts are presented. Finally, this review discusses the current status of MR-AC approaches for clinical applications.