An experimental validation of a statistical-based damage detection approach.
DOT National Transportation Integrated Search
2011-01-01
In this work, a previously-developed, statistical-based, damage-detection approach was validated for its ability to autonomously detect damage in bridges. The damage-detection approach uses statistical differences in the actual and predicted beha...
McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F
2015-01-01
Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
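A minimal sketch of the crowdsourcing computation as described: count how often each problem-medication pair is co-entered (link frequency), normalize by the medication's total entries (one plausible reading of the link ratio), and keep pairs above a clinician-chosen cutoff. The data, field names, and the 0.5 threshold are illustrative assumptions, not the study's values.

```python
from collections import Counter

# Hypothetical clinician entries of (problem, medication) pairs during
# routine care; values are illustrative only.
entries = [
    ("hypertension", "lisinopril"),
    ("hypertension", "lisinopril"),
    ("hypertension", "metformin"),
    ("type 2 diabetes", "metformin"),
    ("type 2 diabetes", "metformin"),
]

link_freq = Counter(entries)                   # co-entry count per pair
med_freq = Counter(med for _, med in entries)  # total entries per medication

RATIO_THRESHOLD = 0.5  # assumed cutoff; the study set it by clinician review

knowledge_base = {
    pair: link_freq[pair] / med_freq[pair[1]]
    for pair in link_freq
    if link_freq[pair] / med_freq[pair[1]] >= RATIO_THRESHOLD
}
print(knowledge_base)
```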
Time series regression-based pairs trading in the Korean equities market
NASA Astrophysics Data System (ADS)
Kim, Saejoon; Heo, Jun
2017-07-01
Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising on low-risk trading opportunities provided by anomalies of related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define this rule, which has previously been specified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly increased excess returns compared to those obtained by previous approaches on large capitalisation stocks in the Korean equities market.
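For context, a minimal sketch of the conventional fixed-threshold trigger rule that the paper's regression-based rule replaces; the hedge-ratio fit and the z-score entry/exit levels are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def pair_signals(px_a: pd.Series, px_b: pd.Series,
                 entry_z: float = 2.0, exit_z: float = 0.5) -> pd.Series:
    """Classic fixed-threshold pairs-trading triggers (the baseline the
    paper improves on). Thresholds are illustrative."""
    beta = np.polyfit(px_b, px_a, 1)[0]        # OLS hedge ratio of leg A on leg B
    spread = px_a - beta * px_b
    z = (spread - spread.mean()) / spread.std()  # in-sample z-score, for illustration

    signal = pd.Series(0, index=px_a.index)    # +1 / -1 = long/short the spread
    signal[z > entry_z] = -1                   # spread rich: short A, long B
    signal[z < -entry_z] = 1                   # spread cheap: long A, short B
    signal[z.abs() < exit_z] = 0               # close when spread reverts
    return signal                              # a backtest would forward-fill holds
```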
Bergeest, Jan-Philip; Rohr, Karl
2012-10-01
In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.
A New Approach to Automated Labeling of Internal Features of Hardwood Logs Using CT Images
Daniel L. Schmoldt; Pei Li; A. Lynn Abbott
1996-01-01
The feasibility of automatically identifying internal features of hardwood logs using CT imagery has been established previously. Features of primary interest are bark, knots, voids, decay, and clear wood. Our previous approach filtered original CT images, applied histogram segmentation, grew volumes to extract 3-D regions, and applied a rule base, with Dempster-...
Fast globally optimal segmentation of cells in fluorescence microscopy images.
Bergeest, Jan-Philip; Rohr, Karl
2011-01-01
Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.
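The key claim is that a convex energy admits a global, initialization-independent minimum. Below is a minimal numpy sketch of one well-known convex relaxation of the Chan-Vese functional (in the Chan-Esedoglu-Nikolova style), solved by projected gradient descent; the paper's exact functionals and solver are not given in the abstract, so the scheme and parameters here are assumptions.

```python
import numpy as np

def convex_chan_vese(img, c1, c2, lam=1.0, tau=0.2, iters=300, eps=1e-8):
    """Projected gradient descent on the convex relaxation
    min_u TV(u) + lam*<r, u>, u in [0,1], with region term
    r = (img-c1)^2 - (img-c2)^2 for assumed foreground/background
    intensities c1, c2. Thresholding the relaxed solution at 0.5
    yields a globally optimal binary segmentation."""
    r = (img - c1) ** 2 - (img - c2) ** 2
    u = np.full_like(img, 0.5, dtype=float)
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # backward-difference divergence (wrap-around boundaries for brevity)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = np.clip(u + tau * (div - lam * r), 0.0, 1.0)
    return u > 0.5
```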
NASA Astrophysics Data System (ADS)
Fuse, Shinichiro; Mifune, Yuto; Nakamura, Hiroyuki; Tanaka, Hiroshi
2016-11-01
Feglymycin is a naturally occurring, anti-HIV and antimicrobial 13-mer peptide that includes highly racemizable 3,5-dihydroxyphenylglycines (Dpgs). Here we describe the total synthesis of feglymycin based on a linear/convergent hybrid approach. Our originally developed micro-flow amide bond formation enabled highly racemizable peptide chain elongation based on a linear approach that was previously considered impossible. Our developed approach will enable the practical preparation of biologically active oligopeptides that contain highly racemizable amino acids, which are attractive drug candidates.
ITS evaluation -- phase 3 (2010)
DOT National Transportation Integrated Search
2011-05-01
This report documents the results of applying a previously developed, standardized approach for : evaluating intelligent transportation systems (ITS) projects to 17 ITS earmark projects. The evaluation : approach was based on a questionnaire to inves...
The Role of Domain Knowledge in Creative Generation
ERIC Educational Resources Information Center
Ward, Thomas B.
2008-01-01
Previous studies have shown that a predominant tendency in creative generation tasks is to base new ideas on well-known, specific instances of previous ideas (e.g., basing ideas for imaginary aliens on dogs, cats or bears). However, a substantial minority of individuals has been shown to adopt more abstract approaches to the task and to develop…
Gold-standard evaluation of a folksonomy-based ontology learning model
NASA Astrophysics Data System (ADS)
Djuana, E.
2018-03-01
Folksonomy, as one result of the collaborative tagging process, has been acknowledged for its potential in improving the categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as different abstractions, or the generality problem. To maximize its potential, methods for associating folksonomy tags with semantics and structural relationships have been proposed, such as ontology learning. This paper evaluates our previous work in ontology learning using a gold-standard evaluation approach, in comparison to a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, which further validates our approach, as it has previously been validated using a task-based evaluation approach.
Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks
Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan
2016-01-01
To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches that set measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning, defined by the fusion results of the previous step, and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves substantial positioning accuracy improvement over individual positioning approaches, including PDR and WiFi positioning. PMID:27608019
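A minimal sketch of the fusion idea: a planar Kalman filter whose predict step dead-reckons one PDR stride and whose update step fuses a WiFi fix. The paper's adaptive KDE-based noise model, outlier gating, and landmark recalibration are omitted; the Q and R values are illustrative assumptions.

```python
import numpy as np

def pdr_wifi_kf(pos, P, step_len, heading, wifi_pos, Q=0.05, R=4.0):
    """One predict/update cycle of a planar Kalman filter.
    pos, P: current 2D state estimate and covariance; Q, R: process and
    WiFi measurement noise (the paper estimates R adaptively via KDE)."""
    # Predict: dead-reckon one step along the estimated heading.
    pos = pos + step_len * np.array([np.cos(heading), np.sin(heading)])
    P = P + Q * np.eye(2)
    # Update: fuse the WiFi position fix (measurement matrix H = I).
    K = P @ np.linalg.inv(P + R * np.eye(2))
    pos = pos + K @ (wifi_pos - pos)
    P = (np.eye(2) - K) @ P
    return pos, P
```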
ERIC Educational Resources Information Center
Mchenry, Nadine; Borger, Laurie; Liable-Sands, Louise
2017-01-01
The current study was constructed based on the recommendations of a previous study (McHenry & Borger, 2013). Though inquiry-based teaching has long been touted as an effective pedagogy, its application by middle school science teachers has been problematic. Using tools developed from the previous study in conjunction with professional…
Mesh Denoising based on Normal Voting Tensor and Binary Optimization.
Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad
2017-08-17
This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.
Power and Responsibility in Therapy: Integrating Feminism and Multiculturalism
ERIC Educational Resources Information Center
Williams, Elizabeth Nutt; Barber, Jill S.
2004-01-01
The integration of feminist and multicultural approaches to psychotherapy, called for many times, has not yet materialized. This article reviews possible reasons this integration has not taken place and offers an approach to integration based on the guiding principles of power and responsibility, which builds on previous theories and approaches.
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg
2016-01-01
This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim for analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic as well as sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, that facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%; previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%). Comparing the classification performance on the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%; sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolution of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
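A minimal sketch of the classification stage only (PCA feature reduction feeding an RBF-kernel SVM), run on synthetic stand-in data; the phonovibrogram feature extraction itself is specific to the paper and not reproduced here, and all sizes are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for phonovibrogram-derived feature vectors:
# 150 subjects x 40 features, labels 0 = physiologic, 1 = pathologic.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 40))
y = rng.integers(0, 2, size=150)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```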
ERIC Educational Resources Information Center
Chen, Chih-Hung; Hwang, Gwo-Jen
2017-01-01
Previous research has illustrated the importance of acquiring knowledge from authentic contexts; however, without full engagement, students' learning performance might not be as good as expected. In this study, a Team Competition-based Ubiquitous Gaming approach was proposed for improving students' learning effectiveness in authentic learning…
ERIC Educational Resources Information Center
Chu, Hui-Chun; Chang, Shao-Chen
2014-01-01
Although educational computer games have been recognized as being a promising approach, previous studies have indicated that, without supportive models, students might only show temporary interest during the game-based learning process, and their learning performance is often not as good as expected. Therefore, in this paper, a two-tier test…
Implementing Curriculum Reform in Wales: The Case of the Foundation Phase
ERIC Educational Resources Information Center
Taylor, Chris; Rhys, Mirain; Waldron, Sam
2016-01-01
The Foundation Phase is a Welsh Government flagship policy of early years education (for 3-7 year-old children) in Wales. Marking a radical departure from the more formal, competency-based approach associated with the previous Key Stage 1 National Curriculum, it advocates a developmental, experiential, play-based approach to teaching and learning.…
Bhat; Bergstrom; Teasley; Bowker; Cordell
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with the previous travel cost method valuation studies. Key words: Recreation; Ecoregion; Travel cost method; Truncated Poisson model
An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M
2014-04-15
We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.
Milenković, Jana; Dalmış, Mehmet Ufuk; Žgajnar, Janez; Platel, Bram
2017-09-01
New ultrafast view-sharing sequences have enabled breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to be performed at high spatial and temporal resolution. The aim of this study is to evaluate the diagnostic potential of textural features that quantify the spatiotemporal changes of the contrast-agent uptake in computer-aided diagnosis of malignant and benign breast lesions imaged with high spatial and temporal resolution DCE-MRI. The proposed approach is based on textural analysis quantifying the spatial variation of six dynamic features of the early-phase contrast-agent uptake of a lesion's largest cross-sectional area. The textural analysis is performed by means of the second-order gray-level co-occurrence matrix, the gray-level run-length matrix and the gray-level difference matrix. This yields 35 textural features to quantify the spatial variation of each of the six dynamic features, providing a feature set of 210 features in total. The proposed feature set is evaluated based on receiver operating characteristic (ROC) curve analysis in a cross-validation scheme for random forests (RF) and two support vector machine (SVM) classifiers, with linear and radial basis function (RBF) kernels. Evaluation is done on a dataset with 154 breast lesions (83 malignant and 71 benign) and compared to a previous approach based on 3D morphological features and the average and standard deviation of the same dynamic features over the entire lesion volume, as well as their average for the smaller region of the strongest uptake rate. The area under the ROC curve (AUC) obtained by the proposed approach with the RF classifier was 0.8997, which was significantly higher (P = 0.0198) than the performance achieved by the previous approach (AUC = 0.8704) on the same dataset. Similarly, the proposed approach obtained significantly higher results for both SVM classifiers, with RBF (P = 0.0096) and linear (P = 0.0417) kernels obtaining AUCs of 0.8876 and 0.8548, respectively, compared to AUC values of 0.8562 and 0.8311, respectively, for the previous approach. The proposed approach based on 2D textural features quantifying spatiotemporal changes of the contrast-agent uptake significantly outperforms the previous approach based on 3D morphology and dynamic analysis in differentiating malignant and benign breast lesions, showing its potential to aid clinical decision making. © 2017 American Association of Physicists in Medicine.
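A sketch of one building block, the gray-level co-occurrence features, using scikit-image (the functions are spelled greycomatrix/greycoprops in releases before 0.19); the quantisation depth and property list are assumptions, not the paper's exact 35-feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greyco...' in older skimage

def glcm_features(dyn_map: np.ndarray) -> dict:
    """Texture features of one dynamic-feature map (e.g. uptake rate)
    over the lesion's largest cross-section, quantised to 64 grey levels."""
    levels = 64
    q = np.digitize(dyn_map, np.linspace(dyn_map.min(), dyn_map.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    # average each property over the four directions
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```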
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Recently, due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, utilizing a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
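A minimal sketch of the retrieve-then-calibrate idea: nearest-neighbour lookup of previously observed radar signatures followed by smoothing over recent estimates. The similarity function, k, and window depth are assumptions, and majority voting over a sliding window stands in for the paper's moving-average calibration.

```python
import numpy as np

def estimate_state(signal, history, labels, recent, k=5, n_smooth=3):
    """signal: current radar feature vector; history/labels: previously
    observed vectors and their road-state labels (non-negative ints);
    recent: list of the latest raw estimates (appended to in place)."""
    sims = history @ signal / (np.linalg.norm(history, axis=1)
                               * np.linalg.norm(signal) + 1e-12)
    votes = labels[np.argsort(sims)[-k:]]       # k most similar past cases
    recent.append(np.bincount(votes).argmax())  # raw majority-vote estimate
    # calibration: smooth over the most recent raw estimates
    return np.bincount(recent[-n_smooth:]).argmax()
```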
Gong, Ping; Nan, Xiaofei; Barker, Natalie D; Boyd, Robert E; Chen, Yixin; Wilkins, Dawn E; Johnson, David R; Suedel, Burton C; Perkins, Edward J
2016-03-08
Chemical bioavailability is an important dose metric in environmental risk assessment. Although many approaches have been used to evaluate bioavailability, not a single approach is free from limitations. Previously, we developed a new genomics-based approach that integrated microarray technology and regression modeling for predicting bioavailability (tissue residue) of explosives compounds in exposed earthworms. In the present study, we further compared 18 different regression models and performed variable selection simultaneously with parameter estimation. This refined approach was applied to both previously collected and newly acquired earthworm microarray gene expression datasets for three explosive compounds. Our results demonstrate that a prediction accuracy of R² = 0.71-0.82 was achievable at a relatively low model complexity with as few as 3-10 predictor genes per model. These results are much more encouraging than our previous ones. This study has demonstrated that our approach is promising for bioavailability measurement, which warrants further studies of mixed contamination scenarios in field settings.
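Regression with simultaneous variable selection maps naturally onto an L1-penalised fit; a sketch with synthetic stand-in data follows. The study compared 18 models, so a cross-validated Lasso is only one plausible representative, and the real X would be earthworm microarray expression.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-in: X is expression (samples x genes), y is tissue residue
# driven by a handful of genes plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=60)

model = LassoCV(cv=5).fit(X, y)               # embedded gene selection
selected = np.flatnonzero(model.coef_)        # predictor genes retained
print(f"{selected.size} genes selected, R^2 = {model.score(X, y):.2f}")
```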
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
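A minimal sketch of the bivariate machinery: a Gumbel copula links the marginal distributions of two flood variables (e.g. peak and volume), and non-stationarity can be expressed by letting theta or the marginal parameters vary with time. The copula formula is standard; the parameter values below are illustrative assumptions.

```python
import numpy as np

def gumbel_copula_cdf(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1 controls dependence strength and
    can be made a function of time for a non-stationary dependence."""
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0/theta))

def joint_exceedance(u, v, theta):
    """P(X > x, Y > y) given marginal non-exceedance probabilities u, v."""
    return 1.0 - u - v + gumbel_copula_cdf(u, v, theta)

# e.g. peak and volume both at their marginal 100-year level, with
# dependence strengthening over time (illustrative theta values):
for theta in (1.5, 2.5):
    print(theta, joint_exceedance(0.99, 0.99, theta))
```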
NASA Technical Reports Server (NTRS)
Kiser, J. Douglas; Singh, Mrityunjay; Lei, Jin-Fen; Martin, Lisa C.
1999-01-01
A novel attachment approach for positioning sensor lead wires on silicon carbide-based monolithic ceramic and fiber reinforced ceramic matrix composite (FRCMC) components has been developed. This approach is based on an affordable, robust ceramic joining technology, named ARCJoinT, which was developed for the joining of silicon carbide-based ceramics and fiber reinforced composites. The ARCJoinT technique has previously been shown to produce joints with tailorable thickness and good high temperature strength. In this study, silicon carbide-based ceramic and FRCMC attachments of different shapes and sizes were joined onto silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) composites having flat and curved surfaces. Based on results obtained in previous joining studies, the joined attachments should maintain their mechanical strength and integrity at temperatures up to 1350 C in air. Therefore they can be used to position and secure sensor lead wires on SiC/SiC components that are being tested in programs focused on developing FRCMCs for a number of demanding high temperature applications in aerospace and ground-based systems. This approach, which is suitable for installing attachments on large and complex shaped monolithic ceramic and composite components, should enhance the durability of minimally intrusive high temperature sensor systems. The technology could also be used to reinstall attachments on ceramic components that were damaged in service.
Patwary, Nurmohammed; Preza, Chrysanthe
2015-01-01
A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency and that the proposed algorithm addresses efficiently depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
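A sketch of the basis-decomposition step, assuming PCA via SVD of a stack of depth-variant PSFs; the array shapes and component count are illustrative, and the paper's sampling reduction and restoration stages are not reproduced.

```python
import numpy as np

def psf_basis(psf_stack, n_components=5):
    """Orthonormal basis for depth-variant PSFs via SVD-based PCA.
    psf_stack: (n_depths, nz, ny, nx) array of PSFs; returns the mean PSF,
    the principal components, and per-depth coefficients."""
    n = psf_stack.shape[0]
    flat = psf_stack.reshape(n, -1)
    mean = flat.mean(axis=0)
    U, s, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    comps = Vt[:n_components]            # orthonormal PSF basis
    coeffs = (flat - mean) @ comps.T     # depth-dependent weights
    return mean, comps, coeffs

# The PSF at depth d is then approximated as mean + coeffs[d] @ comps, so
# restoration needs only a few basis convolutions instead of one per depth.
```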
Beadle, Mary; Santy, Julie
2008-05-01
This article describes the delivery of a core pre-registration nursing and midwifery module centred on social inclusion. The module was previously delivered using a classroom-based problem-based learning approach. Difficulties with this approach led to changes to the module and its delivery. Logistical issues encouraged the module team to implement a blended learning approach using a virtual town to facilitate online learning and discussion activities. The paper describes and discusses the use of online learning technology to support student nurses and midwives. It highlights the benefits of this approach and outlines some of the experiences of the students, including their evaluation of the virtual town. There is also an examination of some of the practical and theoretical issues related to problem-based learning, online working and using a virtual town to support learning. This article outlines the approach taken and its implications.
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
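A minimal sketch of the directed labelling, using Harris corner strength as the local feature measure and assuming the previous and current frames are already co-registered; the thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def directed_change_masks(prev_gray, curr_gray, thresh=0.01, weak=0.5):
    """Two directed change masks from corner-feature strength. A feature
    present now but absent (or much weaker) before is a "new object";
    the reverse is a "vanished object"."""
    r_prev = cv2.cornerHarris(np.float32(prev_gray), 2, 3, 0.04)
    r_curr = cv2.cornerHarris(np.float32(curr_gray), 2, 3, 0.04)
    t_prev = thresh * r_prev.max()
    t_curr = thresh * r_curr.max()
    new_mask = (r_curr > t_curr) & (r_prev < weak * t_prev)
    vanished_mask = (r_prev > t_prev) & (r_curr < weak * t_curr)
    return new_mask, vanished_mask
```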
Shaping Approach Responses as Intervention for Specific Phobia in a Child with Autism
ERIC Educational Resources Information Center
Ricciardi, Joseph N.; Luiselli, James K.; Camare, Marianne
2006-01-01
We evaluated contact desensitization (reinforcing approach responses) as intervention for specific phobia with a child diagnosed with autism. During hospital-based intervention, the boy was able to encounter previously avoided stimuli. Parental report suggested that results were maintained postdischarge. (Contains 1 figure.)
Approach to Mathematical Problem Solving and Students' Belief Systems: Two Case Studies
ERIC Educational Resources Information Center
Callejo, Maria Luz; Vila, Antoni
2009-01-01
The goal of the study reported here is to gain a better understanding of the role of belief systems in the approach phase to mathematical problem solving. Two students of high academic performance were selected based on a previous exploratory study of 61 students 12-13 years old. In this study we identified different types of approaches to…
Capoccia, Massimo; Marconi, Silvia; Singh, Sanjeet Avtaar; Pisanelli, Domenico M; De Lazzari, Claudio
2018-05-02
Modelling and simulation may become clinically applicable tools for detailed evaluation of the cardiovascular system and clinical decision-making to guide therapeutic intervention. Models based on the pressure-volume relationship and zero-dimensional representation of the cardiovascular system may be a suitable choice given their simplicity and versatility. This approach has great potential for application in heart failure, where left ventricular assist devices have played a significant role as a bridge to transplant and, more recently, as a long-term solution for non-eligible candidates. We sought to investigate the value of simulation in the context of three heart failure patients with a view to predicting or guiding further management. CARDIOSIM© was the software used for this purpose. The study was based on retrospective analysis of haemodynamic data previously discussed at a multidisciplinary meeting. The outcome of the simulations addressed the value of a more quantitative approach in the clinical decision process. Although previous experience, co-morbidities and the risk of potentially fatal complications play a role in clinical decision-making, patient-specific modelling may become a daily approach for selection and optimisation of device-based treatment for heart failure patients. Willingness to adopt this integrated approach may be the key to further progress.
Building a Strengths-Based Campus to Support Student Retention
ERIC Educational Resources Information Center
Soria, Krista M.; Stubblefield, Robin
2015-01-01
Strengths-based approaches are flourishing across hundreds of higher education institutions as student affairs practitioners and educators seek to leverage students' natural talents so they can reach "previously unattained levels of personal excellence" (Lopez & Louis, 2009, p. 2). Even amid the growth of strengths-based approaches…
Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection
Vesperini, Fabio; Schuller, Björn
2017-01-01
In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies so far have focused on comparing previous efforts to automatically recognize novel events from audio signals or on giving a broad, in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% in average F-measure over the three databases. PMID:28182121
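A minimal PyTorch sketch of the core mechanism: an LSTM predicts the next short-term spectral frame from the previous frames, and the prediction error serves as the novelty signal. The layer sizes are illustrative, and the denoising aspect of the cited autoencoders is simplified away.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """LSTM that predicts the next spectral frame from the previous ones;
    a large prediction error flags a novel acoustic event."""
    def __init__(self, n_bands=26, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_bands)

    def forward(self, frames):               # frames: (batch, time, bands)
        out, _ = self.lstm(frames)
        return self.head(out[:, -1])         # predicted next frame

def novelty_score(model, frames, next_frame):
    """Per-sequence reconstruction error; threshold it to detect novelty."""
    with torch.no_grad():
        return torch.mean((model(frames) - next_frame) ** 2, dim=-1)

# Train with MSE on normal (non-novel) audio only; at test time, frames
# whose novelty_score exceeds a calibrated threshold are flagged as novel.
```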
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
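A minimal sketch of signature-based calibration by rejection ABC on the FDC. Note the paper derives the signature likelihood from the streamflow likelihood rather than picking a tolerance ad hoc, so the distance measure and eps here are illustrative assumptions, and `simulate` stands in for the hydrological model.

```python
import numpy as np

def abc_fdc(observed_q, simulate, prior_sample, n_draws=10000, eps=0.05):
    """Rejection ABC: keep parameter draws whose simulated flow-duration
    curve lies within eps of the observed one (log-space distance)."""
    probs = np.linspace(0.01, 0.99, 50)
    fdc_obs = np.quantile(observed_q, 1 - probs)          # observed FDC
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()                            # draw from the prior
        fdc_sim = np.quantile(simulate(theta), 1 - probs)
        dist = np.mean(np.abs(np.log(fdc_sim + 1e-9) - np.log(fdc_obs + 1e-9)))
        if dist < eps:
            accepted.append(theta)
    return np.array(accepted)                             # approximate posterior
```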
Pindilli, Emily J.; Casey, Frank
2015-10-26
This report is a primer on market-like and market-based mechanisms designed to conserve biodiversity and habitat. The types of markets and market-based approaches that were implemented or are emerging to benefit biodiversity and habitat in the United States are examined. The central approaches considered in this report include payments for ecosystem services, conservation banks, habitat exchanges, and eco-labels. Based on literature reviews and input from experts and practitioners, the report characterizes each market-based approach including policy context and structure; the theoretical basis for applying market-based approaches; the ecological effectiveness of practices and tools for measuring performance; and the future outlook for biodiversity and habitat markets. This report draws from previous research and serves as a summary of pertinent information associated with biodiversity and habitat markets while providing references to materials that go into greater detail on specific topics.
Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl
2016-08-01
The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.
Multiscale modelling and analysis of collective decision making in swarm robotics.
Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey
2014-01-01
We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.
A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers
Yochum, Steven E.
2000-01-01
The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load data base for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that has been used historically by the program. The revised method estimates load using the fifth or center year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the data base. The last year of estimation shows the most change from the previous year's estimate, and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical data base have been noted, but the load estimates do not typically change drastically. The data base resulting from the application of this revised methodology is populated by annual and monthly load estimates that are known with greater certainty than in the previous load data base.
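A sketch of the sliding-window bookkeeping only: fit the regression to each 9-year window and retain the center-year estimate. The `fit_window` callback is a hypothetical stand-in for the program's seven-parameter load regression.

```python
import pandas as pd

def center_year_loads(years, fit_window, window=9):
    """fit_window(yrs) fits the load regression to one window of years and
    returns a {year: load} mapping; only the center (fifth) year's
    estimate is retained in the data base."""
    half = window // 2
    best = {}
    for i in range(half, len(years) - half):
        yrs = years[i - half:i + half + 1]
        best[yrs[half]] = fit_window(yrs)[yrs[half]]
    return pd.Series(best).sort_index()
```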
NASA Astrophysics Data System (ADS)
Guo, Yiqing; Jia, Xiuping; Paull, David
2018-06-01
The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, a SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is firstly predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without the assistance from previous images. These results demonstrate that the leverage of a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
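A minimal sketch of the sequential idea with scikit-learn: extrapolate the temporal trend of previous linear-classifier parameters to seed the incoming classifier, then fine-tune on the few new samples. Using SGDClassifier's coef_init/intercept_init is a stand-in for the paper's fine-tuning, and a binary problem is assumed (coefficient arrays of shape (1, n_features)).

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def sct_svm_step(prev_coefs, prev_intercepts, X_new, y_new):
    """prev_coefs/prev_intercepts: parameters of the classifiers fitted to
    the previous images, oldest first. Returns the fine-tuned classifier
    for the incoming image."""
    # Predict the incoming classifier by linear extrapolation of the trend.
    coef0 = 2 * prev_coefs[-1] - prev_coefs[-2]
    int0 = 2 * prev_intercepts[-1] - prev_intercepts[-2]
    # Fine-tune from the predicted position with the few new samples.
    clf = SGDClassifier(loss="hinge", alpha=1e-4, max_iter=50)
    clf.fit(X_new, y_new, coef_init=coef0, intercept_init=int0)
    return clf
```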
Wahman, David G; Speitel, Gerald E; Katz, Lynn E
2017-11-21
Chloramine chemistry is complex, with a variety of reactions occurring in series and parallel and many that are acid or base catalyzed, resulting in numerous rate constants. Bromide presence increases system complexity even further with possible bromamine and bromochloramine formation. Therefore, techniques for parameter estimation must address this complexity through thoughtful experimental design and robust data analysis approaches. The current research outlines a rational basis for constrained data fitting using Brønsted theory, application of the microscopic reversibility principle to reversible acid or base catalyzed reactions, and characterization of the relative significance of parallel reactions using fictive product tracking. This holistic approach was used on a comprehensive and well-documented data set for bromamine decomposition, allowing new interpretations of existing data by revealing that a previously published reaction scheme was not robust; it was not able to describe monobromamine or dibromamine decay outside of the conditions for which it was calibrated. The current research's simplified model (3 reactions, 17 constants) represented the experimental data better than the previously published model (4 reactions, 28 constants). A final model evaluation was conducted based on representative drinking water conditions to determine a minimal model (3 reactions, 8 constants) applicable for drinking water conditions.
Development of a Reading Material Recommendation System Based on a Knowledge Engineering Approach
ERIC Educational Resources Information Center
Hsu, Ching-Kun; Hwang, Gwo-Jen; Chang, Chih-Kai
2010-01-01
In a language curriculum, the training of reading ability is one of the most important aspects. Previous studies have shown the importance of assigning proper articles to individual students for training their reading ability; nevertheless, previous experience has also shown the challenges of this issue owing to the complexity of personal factors…
ERIC Educational Resources Information Center
Sengupta, Pratim; Farris, Amy Voss; Wright, Mason
2012-01-01
Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
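For concreteness, a standard matrix-completion solver (iterative singular-value soft-thresholding) applied to an under-sampled channel-by-time EEG matrix; the paper derives its own algorithm, so this is a generic stand-in with an illustrative lambda.

```python
import numpy as np

def soft_impute(M, mask, lam=1.0, iters=100):
    """Recover a low-rank matrix from randomly under-sampled entries.
    M: observed matrix (arbitrary values where unsampled);
    mask: True where M was actually sampled."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - lam, 0.0)) @ Vt   # shrink singular values
        X[mask] = M[mask]                         # keep observed samples
    return X
```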
Moore, Jason H; Boczko, Erik M; Summar, Marshall L
2005-02-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two or more DNA sequence variations. We review here this approach and then discuss how it can be used to model biochemical and metabolic data in the context of genetic studies of human disease susceptibility.
Using Whole Language Materials in the Adult ESOL Classroom.
ERIC Educational Resources Information Center
Schiffer, Edward W.
A practicum explored the use of instructional materials based on the whole language approach to second language learning in adult English-as-a-Second-Language (ESL) instruction. The approach was implemented in a beginning ESL classroom at an adult education center that had previously used publisher textbooks, which were not thought to provide…
From Career Decision-Making Styles to Career Decision-Making Profiles: A Multidimensional Approach
ERIC Educational Resources Information Center
Gati, Itamar; Landman, Shiri; Davidovitch, Shlomit; Asulin-Peretz, Lisa; Gadassi, Reuma
2010-01-01
Previous research on individual differences in career decision-making processes has often focused on classifying individuals into a few types of decision-making "styles" based on the most dominant trait or characteristic of their approach to the decision process (e.g., rational, intuitive, dependent; Harren, 1979). In this research, an…
Understanding Challenges of Using ICT in Secondary Schools in Sweden from Teachers' Perspective
ERIC Educational Resources Information Center
Ekberg, Siri; Gao, Shang
2018-01-01
Purpose: The purpose of this paper is to investigate the challenges of using ICT in secondary schools in Sweden from teachers' perspectives. Design/methodology/approach: The research followed a qualitative research approach. First, a conceptual framework was developed based on previous research. Then, four teachers, teaching in six different…
A Cognitive Component Analysis Approach for Developing Game-Based Spatial Learning Tools
ERIC Educational Resources Information Center
Hung, Pi-Hsia; Hwang, Gwo-Jen; Lee, Yueh-Hsun; Su, I-Hsiang
2012-01-01
Spatial ability has been recognized as one of the most important factors affecting the mathematical performance of students. Previous studies on spatial learning have mainly focused on developing strategies to shorten the problem-solving time of learners for very specific learning tasks. Such an approach usually has limited effects on improving…
The Relationship between Ethical Positions and Methodological Approaches: A Scandinavian Perspective
ERIC Educational Resources Information Center
Beach, Dennis; Eriksson, Anita
2010-01-01
In this article, based on reading ethnographic theses, books and articles and conversations with nine key informants, we have tried to describe how research ethics are approached and written about in educational ethnography in Scandinavia. The article confirms findings from previous research that there are different methodological forms of…
Chellemi, D O; Gamliel, A; Katan, J; Subbarao, K V
2016-03-01
Biological suppression of soilborne diseases with minimal use of outside interventive actions has been difficult to achieve in high input conventional crop production systems due to the inherent risk of pest resurgence. This review examines previous approaches to the management of soilborne disease as precursors to the evolution of a systems-based approach, in which plant disease suppression through natural biological feedback mechanisms in soil is incorporated into the design and operation of cropping systems. Two case studies are provided as examples in which a systems-based approach is being developed and deployed in the production of high value crops: lettuce/strawberry production in the coastal valleys of central California (United States) and sweet basil and other herb crop production in Israel. Considerations for developing and deploying system-based approaches are discussed and operational frameworks and metrics to guide their development are presented with the goal of offering a credible alternative to conventional approaches to soilborne disease management.
ERIC Educational Resources Information Center
Williams, Grant; Clement, John
2015-01-01
This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…
DOT National Transportation Integrated Search
2015-03-01
Mixture proportioning is routinely a matter of using a recipe based on a previously produced concrete, rather than adjusting the proportions based on the needs of the mixture and the locally available materials. As budgets grow tighter and increasi...
Motavalli, Mostafa; Whitney, G Adam; Dennis, James E; Mansour, Joseph M
2013-12-01
A previously developed novel imaging technique for determining the depth-dependent properties of cartilage in simple shear is implemented. Shear displacement is determined from images of deformed lines photobleached on a sample, and shear strain is obtained from the derivative of the displacement. We investigated the feasibility of an alternative systematic approach to numerical differentiation for computing the shear strain that is based on fitting a continuous function to the shear displacement. Three models for a continuous shear displacement function are evaluated: polynomials, cubic splines, and non-parametric locally weighted scatter plot curves. Four independent approaches are then applied to identify the best-fit model and the accuracy of the first derivative. One approach is based on the Akaike Information Criterion and the Bayesian Information Criterion. The second is based on a method developed to smooth and differentiate digitized data from human motion. The third method is based on photobleaching a predefined circular area with a specific radius. Finally, we integrate the shear strain and compare it with the total shear deflection of the sample measured experimentally. Results show that 6th and 7th order polynomials are the best models for the shear displacement and its first derivative. In addition, failure of tissue-engineered cartilage, consistent with previous results, demonstrates the qualitative value of this imaging approach. © 2013 Elsevier Ltd. All rights reserved.
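To make the model-selection step concrete, here is a minimal sketch of the polynomial variant: fit candidate orders to a photobleached displacement profile, pick the order by AIC (assuming Gaussian residuals), and differentiate the winner for shear strain. The function names and synthetic data are illustrative, not from the paper:

```python
import numpy as np

def fit_displacement_poly(depth, displacement, order):
    """Fit a polynomial displacement model u(z); return the model and its AIC."""
    coeffs = np.polyfit(depth, displacement, order)
    residuals = displacement - np.polyval(coeffs, depth)
    n, k = len(depth), order + 1
    aic = n * np.log(np.sum(residuals**2) / n) + 2 * k  # Gaussian AIC up to a constant
    return np.poly1d(coeffs), aic

# Compare candidate orders; take the shear strain as du/dz of the AIC winner.
depth = np.linspace(0.0, 1.0, 50)  # normalized depth through the sample
displacement = 0.1 * depth**3 + 0.01 * np.random.default_rng(0).normal(size=50)
fits = {m: fit_displacement_poly(depth, displacement, m) for m in range(3, 9)}
best = min(fits, key=lambda m: fits[m][1])
shear_strain = fits[best][0].deriv()(depth)  # gamma(z) = du/dz
```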
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with the mobile phase composition. In the present study, this preliminary experimental work has been simplified. The analyte pKa values are calculated through equations whose coefficients vary with the functional group. This new approach in turn required further simplifications regarding the retention of the fully neutral and fully ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counterexamples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves performance similar to or better than a previous learning-based implementation.
Multiscale Modelling and Analysis of Collective Decision Making in Swarm Robotics
Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey
2014-01-01
We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable. PMID:25369026
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
Aircraft applications of fault detection and isolation techniques
NASA Astrophysics Data System (ADS)
Marcos Esteban, Andres
In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H∞ LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements on the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H∞ LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.
Approaches to Classroom-Based Computational Science.
ERIC Educational Resources Information Center
Guzdial, Mark
Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…
Phonetics Information Base and Lexicon
ERIC Educational Resources Information Center
Moran, Steven Paul
2012-01-01
In this dissertation, I investigate the linguistic and technological challenges involved in creating a cross-linguistic data set to undertake phonological typology. I then address the question of whether more sophisticated, knowledge-based approaches to data modeling, coupled with a broad cross-linguistic data set, can extend previous typological…
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
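The prediction-testing logic lends itself to a one-liner. A minimal sketch, assuming a normally distributed effect-size estimate with known standard error; the function and numbers are hypothetical, not values from the study:

```python
def prediction_confirmed(observed_es, se, predicted_es, z=1.96):
    """A theory-based prediction counts as confirmed if the predicted effect
    size falls inside the 95% confidence interval of the observed estimate."""
    lo, hi = observed_es - z * se, observed_es + z * se
    return lo <= predicted_es <= hi

# Hypothetical: observed effect size 0.12 (SE 0.02) vs. a predicted 0.20
print(prediction_confirmed(0.12, 0.02, 0.20))  # False -> prediction not confirmed
```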
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
EPA is developing approaches to inform the derivation of a Maximum Contaminant Level Goal (MCLG) for perchlorate in drinking water under the Safe Drinking Water Act. EPA previously conducted an independent, external, scientific peer review of the draft biologically-based dose-res...
ERIC Educational Resources Information Center
Kyriakides, L.; Christoforidou, M.; Panayiotou, A.; Creemers, B. P. M.
2017-01-01
The dynamic approach (DA) suggests that professional development should be differentiated to meet teachers' individual needs while engaging participants into systematic and guided critical reflection. Previous experimental studies demonstrated that one-year interventions based on the DA have a positive impact on teacher effectiveness. The study…
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
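The horizontal-translation step can be sketched compactly. The toy code below is a simplified stand-in that uses linear interpolation along the preceding segment rather than the paper's trigonometric construction; it shifts each succeeding recession segment in time so that its vertex lands on the curve built so far. All names and data are illustrative:

```python
import numpy as np

def shift_onto_previous(prev_t, prev_h, seg_t, seg_h):
    """Translate a recession segment horizontally so that its vertex (first,
    highest value) lands on the line through the preceding segment's points."""
    vertex_h = seg_h[0]
    # Recession heads decrease with time, so reverse them for np.interp.
    t_on_prev = np.interp(vertex_h, prev_h[::-1], prev_t[::-1])
    return seg_t + (t_on_prev - seg_t[0]), seg_h

# Chain translated segments into a master recession curve (MRC).
segments = [(np.arange(5.0), np.array([9.0, 7.0, 5.5, 4.5, 4.0])),
            (np.arange(4.0), np.array([6.0, 4.8, 4.0, 3.5]))]
mrc_t, mrc_h = list(segments[0][0]), list(segments[0][1])
for seg_t, seg_h in segments[1:]:
    t_new, h_new = shift_onto_previous(np.array(mrc_t), np.array(mrc_h), seg_t, seg_h)
    mrc_t.extend(t_new)
    mrc_h.extend(h_new)
```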
Harvey Cushing's Approaches to Tumors in His Early Career: From the Skull Base to the Cranial Vault
Pendleton, Courtney; Raza, Shaan M.; Gallia, Gary L.; Quiñones-Hinojosa, Alfredo
2011-01-01
In this report, we review Dr. Cushing's early surgical cases at the Johns Hopkins Hospital, revealing details of his early operative approaches to tumors of the skull base and cranial vault. Following Institutional Review Board approval, and through the courtesy of the Alan Mason Chesney Archives, we reviewed the Johns Hopkins Hospital surgical files from 1896 to 1912. Participants included four adult patients and one child who underwent surgical resection of bony tumors of the skull base and the cranial vault. The main outcome measures were operative approach and condition recorded at the time of discharge. The indications for surgery included unspecified malignant tumor of the basal meninges and temporal bone, basal cell carcinoma, osteoma of the posterior skull base, and osteomas of the frontal and parietofrontal cranial vault. While Cushing's experience with selected skull base pathology has been previously reported, the breadth of his contributions to operative approaches to the skull base has been neglected. PMID:22470271
Intersectionality: An Arts-Based Approach to Student Awareness
ERIC Educational Resources Information Center
Edmonds, Leonard
2017-01-01
This study was designed to introduce specific activities/lessons to students in an online university gender and communication course. It was also designed to determine how participants made meaning of and felt about learning about intersectionality of gender and cultural identities, using arts-based data collection. Previous research on the…
Development of an Adaptive Learning System with Two Sources of Personalization Information
ERIC Educational Resources Information Center
Tseng, J. C. R.; Chu, H. C.; Hwang, G. J.; Tsai, C. C.
2008-01-01
Previous research of adaptive learning mainly focused on improving student learning achievements based only on single-source of personalization information, such as learning style, cognitive style or learning achievement. In this paper, an innovative adaptive learning approach is proposed by basing upon two main sources of personalization…
Effectiveness of Problem-Based Learning in Introductory Business Courses
ERIC Educational Resources Information Center
Hartman, Katherine B.; Moberg, Christopher R.; Lambert, Jamie M.
2013-01-01
Problem-based learning (PBL) is an instructional approach that provides learners with opportunities to identify solutions to ill-structured, real-world problems. Previous research provides evidence to support claims about the positive effects of PBL on cognitive skill development and knowledge retention. This study contributes to existing…
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
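A CPU-bound toy version of the underlying optimization conveys the idea. The sketch below runs a simple elitist evolution strategy over rigid-body parameters (three Euler angles plus a translation), scoring label-respecting nearest-neighbour distances; it is a minimal illustration under those assumptions, not the paper's GPU kernel, and every name in it is invented for the example:

```python
import numpy as np

def rot(a, b, c):
    """Rotation matrix from Euler angles (z-y-x convention)."""
    ca, sa, cb, sb, cc, sc = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
    return Rz @ Ry @ Rx

def score(params, P, Q, lp, lq):
    """Sum over labels of nearest-neighbour distances after transforming P."""
    R, t = rot(*params[:3]), params[3:]
    P2 = P @ R.T + t
    total = 0.0
    for lab in np.unique(lp):
        A, B = P2[lp == lab], Q[lq == lab]
        if len(A) and len(B):
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            total += d.min(axis=1).sum()
    return total

def evolve(P, Q, lp, lq, pop=200, gens=100, sigma=0.3, seed=0):
    """Elitist evolution strategy: keep the best quarter, mutate to refill."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, size=(pop, 6))
    for _ in range(gens):
        f = np.array([score(x, P, Q, lp, lq) for x in X])
        elite = X[np.argsort(f)[: pop // 4]]
        X = np.repeat(elite, 4, axis=0) + rng.normal(0.0, sigma, size=(pop, 6))
        X[: len(elite)] = elite  # keep the elites unmutated
    f = np.array([score(x, P, Q, lp, lq) for x in X])
    return X[np.argmin(f)]
```

A large population, as in the paper, mainly multiplies the number of independent score evaluations per generation, which is what makes the problem map well to GPU hardware.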
Guo, Jiahua; Sinclair, Chris J; Selby, Katherine; Boxall, Alistair B A
2016-06-01
Approximately 1500 active pharmaceutical ingredients are currently in use; however, the environmental occurrence and impacts of only a small proportion of these have been investigated. Recognizing that it would be impractical to monitor and assess all pharmaceuticals that are in use, several previous studies have proposed the use of prioritization approaches to identify substances of most concern so that resources can be focused on these. All of these previous approaches suffer from limitations. In the present study, the authors draw on experience from previous prioritization exercises and present a holistic approach for prioritizing pharmaceuticals in the environment in terms of risks to aquatic and soil organisms, avian and mammalian wildlife, and humans. The approach considers both apical ecotoxicological endpoints as well as potential nonapical effects related to the therapeutic mode of action. Application of the approach is illustrated for 146 active pharmaceuticals that are used either in the community or in hospital settings in the United Kingdom. Using the approach, 16 compounds were identified as a potential priority. These substances include compounds belonging to the antibiotic, antidepressant, anti-inflammatory, antidiabetic, antiobesity, and estrogen classes as well as associated metabolites. In the future, the prioritization approach should be applied more broadly around the different regions of the world. Environ Toxicol Chem 2016;35:1550-1559. © 2016 SETAC.
Improving Security for SCADA Sensor Networks with Reputation Systems and Self-Organizing Maps
Moya, José M.; Araujo, Álvaro; Banković, Zorana; de Goyeneche, Juan-Mariano; Vallejo, Juan Carlos; Malagón, Pedro; Villanueva, Daniel; Fraga, David; Romero, Elena; Blesa, Javier
2009-01-01
The reliable operation of modern infrastructures depends on computerized systems and Supervisory Control and Data Acquisition (SCADA) systems, which are also based on the data obtained from sensor networks. The inherent limitations of the sensor devices make them extremely vulnerable to cyberwarfare/cyberterrorism attacks. In this paper, we propose a reputation system enhanced with distributed agents, based on unsupervised learning algorithms (self-organizing maps), in order to achieve fault tolerance and enhanced resistance to previously unknown attacks. This approach has been extensively simulated and compared with previous proposals. PMID:22291569
Perthold, Jan Walther; Oostenbrink, Chris
2018-05-17
Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
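For orientation, the conventional EDS reference state, which the accelerated variant modifies so as to preserve local minima of the combined end-states, can be written with inverse temperature β, smoothness parameter s, and per-state energy offsets E_i^R as

$$ H_R(\mathbf{r}) = -\frac{1}{\beta s}\,\ln \sum_{i=1}^{N} \exp\!\left[-\beta s \left(H_i(\mathbf{r}) - E_i^{R}\right)\right], $$

with the free-energy difference between any two end-states recovered from the single reference simulation by reweighting:

$$ \Delta G_{ji} = -\frac{1}{\beta}\,\ln \frac{\left\langle e^{-\beta (H_j - H_R)} \right\rangle_R}{\left\langle e^{-\beta (H_i - H_R)} \right\rangle_R}. $$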
Nguyen, Thanh-Son; Selinger, Jonathan V
2017-09-01
In liquid crystal elastomers and polymer networks, the orientational order of liquid crystals is coupled with elastic distortions of crosslinked polymers. Previous theoretical research has described these materials through two different approaches: a neoclassical theory based on the liquid crystal director and the deformation gradient tensor, and a geometric elasticity theory based on the difference between the actual metric tensor and a reference metric. Here, we connect those two approaches using a formalism based on differential geometry. Through this connection, we determine how both the director and the geometry respond to a change of temperature.
Mid-course multi-target tracking using continuous representation
NASA Technical Reports Server (NTRS)
Zak, Michail; Toomarian, Nikzad
1991-01-01
The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field. This assumption is based upon the impossibility of encounters between the flying objects in a high-density cluster. Therefore, the problem is reduced to an identification of a moving continuum based upon consecutive time frame observations. In contradistinction to the previous approaches, here each target is considered as a center of a small continuous neighborhood subjected to a local-affine transformation, and therefore, the target trajectories do not mix. Any apparent mixing occurs only in the plane of the sensor view. The approach is illustrated by an example.
Parenthood and Worrying About Climate Change: The Limitations of Previous Approaches.
Ekholm, Sara; Olofsson, Anna
2017-02-01
The present study considers the correlation between parenthood and worry about the consequences of climate change. Two approaches to gauging people's perceptions of the risks of climate change are compared: the classic approach, which measures risk perception, and the emotion-based approach, which measures feelings toward a risk object. The empirical material is based on a questionnaire-based survey of 3,529 people in Sweden, of whom 1,376 answered, giving a response rate of 39%. The results show that the correlation of parenthood and climate risk is significant when the emotional aspect is raised, but not when respondents were asked to do cognitive estimates of risk. Parenthood proves significant in all three questions that measure feelings, demonstrating that it is a determinant that serves to increase worry about climate change. © 2016 Society for Risk Analysis.
Evidence based management for paediatric burn: new approaches and improved scar outcomes.
Kishikova, Lyudmila; Smith, Matthew D; Cubison, Tania C S
2014-12-01
Little evidence has been produced on the best practice for managing paediatric burns. We set out to develop a formal approach based on the finding that hypertrophic scarring is related to healing-time, with durations under 21 days associated with improved scar outcome. Incorporating new advances in burn care, we compared outcomes under the new approach to a cohort treated previously. Our study was a retrospective cross-sectional case note study, with demographic, treatment and outcome information collected. The management and outcome of each case was assessed and compared against another paediatric burns cohort from 2006. 181 burns presenting across a six month period were analysed (2010 cohort) and compared to 337 children from a previous cohort from 2006. Comparison of patients between cohorts showed an overall shift towards shorter healing-times in the 2010 cohort. A lower overall rate of hypertrophic scarring was seen in the 2010 cohort, and for corresponding healing-times after injury, hypertrophic scarring rates were halved in comparison to the 2006 cohort. We demonstrate that the use of a structured approach for paediatric burns has improved outcomes with regards to healing-time and hypertrophic scarring rate. This approach allows maximisation of healing potential and implements aggressive prophylactic measures. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
Increasing efficiency of CO2 uptake by combined land-ocean sink
NASA Astrophysics Data System (ADS)
van Marle, M.; van Wees, D.; Houghton, R. A.; Nassikas, A.; van der Werf, G.
2017-12-01
Carbon-climate feedbacks are one of the key uncertainties in predicting future climate change. Such a feedback could originate from carbon sinks losing their efficiency, for example due to saturation of the CO2 fertilization effect or ocean warming. An indirect approach to estimate how the combined land and ocean sink responds to climate change and growing fossil fuel emissions is based on assessing the trends in the airborne fraction of CO2 emissions from fossil fuel and land use change. One key limitation with this approach has been the large uncertainty in quantifying land use change emissions. We have re-assessed those emissions in a more data-driven approach by combining estimates coming from a bookkeeping model with visibility-based land use change emissions available for the Arc of Deforestation and Equatorial Asia, two key regions with large land use change emissions. The advantage of the visibility-based dataset is that the emissions are observation-based and this dataset provides more detailed information about interannual variability than previous estimates. Based on our estimates we provide evidence that land use and land cover change emissions have increased more rapidly than previously thought, implying that the airborne fraction has decreased since the start of CO2 measurements in 1959. This finding is surprising because it means that the combined land and ocean sink has become more efficient while the opposite is expected.
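For reference, the airborne fraction underlying this argument is conventionally defined as the ratio of the annual atmospheric CO2 growth rate to total anthropogenic emissions, so an upward revision of the land use change term E_LUC directly lowers the inferred fraction:

$$ \mathrm{AF}(t) = \frac{G_{\mathrm{atm}}(t)}{E_{\mathrm{FF}}(t) + E_{\mathrm{LUC}}(t)} $$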
ERIC Educational Resources Information Center
Parks, Perry
2015-01-01
This case study examines a creative approach by two journalism professors to enhance experiential learning in separate skills-based newswriting and editing courses by collaborating to produce a live online news report from campus each week on a four-hour deadline. The study builds on previous research into how innovative classroom structures that…
ERIC Educational Resources Information Center
Deschesnes, Marthe; Drouin, Nathalie; Tessier, Caroline; Couturier, Yves
2014-01-01
Purpose: The purpose of this paper is to understand how a Canadian intervention based on a professional development (PD) model did or did not influence schools' capacities to absorb a Healthy School (HS) approach into their operations. This study is the second part of a research project: previously published results regarding this research…
Guerrero, Erick G; Padwa, Howard; Fenwick, Karissa; Harris, Lesley M; Aarons, Gregory A
2016-05-14
Despite a solid research base supporting evidence-based practices (EBPs) for addiction treatment such as contingency management and medication-assisted treatment, these services are rarely implemented and delivered in community-based addiction treatment programs in the USA. As a result, many clients do not benefit from the most current and efficacious treatments, resulting in reduced quality of care and compromised treatment outcomes. Previous research indicates that addiction program leaders play a key role in supporting EBP adoption and use. The present study expanded on this previous work to identify strategies that addiction treatment program leaders report using to implement new practices. We relied on a staged and iterative mixed-methods approach to achieve the following four goals: (a) collect data using focus groups and semistructured interviews and conduct analyses to identify implicit managerial strategies for implementation, (b) use surveys to quantitatively rank strategy effectiveness, (c) determine how strategies fit with existing theories of organizational management and change, and (d) use a consensus group to corroborate and expand on the results of the previous three stages. Each goal corresponded to a methodological phase, which included data collection and analytic approaches to identify and evaluate leadership interventions that facilitate EBP implementation in community-based addiction treatment programs. Findings show that the top-ranked strategies involved the recruitment and selection of staff members receptive to change, offering support and requesting feedback during the implementation process, and offering in vivo and hands-on training. Most strategies corresponded to emergent implementation leadership approaches that also utilize principles of transformational and transactional leadership styles. Leadership behaviors represented orientations such as being proactive to respond to implementation needs, supportive to assist staff members during the uptake of new practices, knowledgeable to properly guide the implementation process, and perseverant to address ongoing barriers that are likely to stall implementation efforts. These findings emphasize how leadership approaches are leveraged to facilitate the implementation and delivery of EBPs in publicly funded addiction treatment programs. Findings have implications for the content and structure of leadership interventions needed in community-based addiction treatment programs and the development of leadership interventions in these and other service settings.
NASA Astrophysics Data System (ADS)
Lin, Chuang; Wang, Binghui; Jiang, Ning; Farina, Dario
2018-04-01
Objective. This paper proposes a novel simultaneous and proportional multiple degree of freedom (DOF) myoelectric control method for active prostheses. Approach. The approach is based on non-negative matrix factorization (NMF) of surface EMG signals with the inclusion of sparseness constraints. By applying a sparseness constraint to the control signal matrix, it is possible to extract the basis information from arbitrary movements (quasi-unsupervised approach) for multiple DOFs concurrently. Main Results. In online testing based on target hitting, able-bodied subjects reached a greater throughput (TP) when using sparse NMF (SNMF) than with classic NMF or with linear regression (LR). Accordingly, the completion time (CT) was shorter for SNMF than NMF or LR. The same observations were made in two patients with unilateral limb deficiencies. Significance. The addition of sparseness constraints to NMF allows for a quasi-unsupervised approach to myoelectric control with superior results with respect to previous methods for the simultaneous and proportional control of multi-DOF. The proposed factorization algorithm allows robust simultaneous and proportional control, is superior to previous supervised algorithms, and, because of minimal supervision, paves the way to online adaptation in myoelectric control.
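A minimal sketch of the factorization step, assuming a Hoyer-style L1 penalty on the control-signal matrix and standard multiplicative updates; the paper's exact constraint formulation may differ, and all names here are illustrative:

```python
import numpy as np

def sparse_nmf(V, k, lam=0.1, iters=500, seed=0):
    """Multiplicative-update NMF with an L1 sparseness penalty on the
    control-signal matrix H; V is nonnegative EMG data, channels x samples.
    Minimizes ||V - W H||_F^2 + lam * sum(H) subject to W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-6   # basis vectors (synergy-like)
    H = rng.random((k, n)) + 1e-6   # sparse per-DOF control signals
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

Columns of W then act as the basis extracted from arbitrary movements, and rows of H as the sparse control signals driving each degree of freedom.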
Comparison of a rational vs. high throughput approach for rapid salt screening and selection.
Collman, Benjamin M; Miller, Jonathan M; Seadeek, Christopher; Stambek, Julie A; Blackburn, Anthony C
2013-01-01
In recent years, high throughput (HT) screening has become the most widely used approach for early phase salt screening and selection in a drug discovery/development setting. The purpose of this study was to compare a rational approach for salt screening and selection to those results previously generated using a HT approach. The rational approach involved a much smaller number of initial trials (one salt synthesis attempt per counterion) that were selected based on a few strategic solubility determinations of the free form combined with a theoretical analysis of the ideal solvent solubility conditions for salt formation. Salt screening results for sertraline, tamoxifen, and trazodone using the rational approach were compared to those previously generated by HT screening. The rational approach produced similar results to HT screening, including identification of the commercially chosen salt forms, but with a fraction of the crystallization attempts. Moreover, the rational approach provided enough solid from the very initial crystallization of a salt for more thorough and reliable solid-state characterization and thus rapid decision-making. The crystallization techniques used in the rational approach mimic larger-scale process crystallization, allowing smoother technical transfer of the selected salt to the process chemist.
Xie, Bin; da Silva, Orlando; Zaric, Greg
2012-01-01
OBJECTIVE: To evaluate the incremental cost-effectiveness of a system-based approach for the management of neonatal jaundice and the prevention of kernicterus in term and late-preterm (≥35 weeks) infants, compared with the traditional practice based on visual inspection and selected bilirubin testing. STUDY DESIGN: Two hypothetical cohorts of 150,000 term and late-preterm neonates were used to compare the costs and outcomes associated with the use of a system-based or traditional practice approach. Data for the evaluation were obtained from the case costing centre at a large teaching hospital in Ontario, supplemented by data from the literature. RESULTS: The per child cost for the system-based approach cohort was $176, compared with $173 in the traditional practice cohort. The higher cost associated with the system-based cohort reflects increased costs for predischarge screening and treatment and increased postdischarge follow-up visits. These costs are partially offset by reduced costs from fewer emergency room visits, hospital readmissions and kernicterus cases. Compared with the traditional approach, the cost to prevent one kernicterus case using the system-based approach was $570,496, the cost per life year gained was $26,279, and the cost per quality-adjusted life year gained was $65,698. CONCLUSION: The cost to prevent one kernicterus case using the system-based approach is much lower than previously reported in the literature. PMID:23277747
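The reported figures follow the usual incremental cost-effectiveness arithmetic. In the sketch below, the cases-averted figure is back-derived from the reported ratio purely for illustration; it is not a number from the study:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return delta_cost / delta_effect

# Cohort-level arithmetic with the per-child costs reported above.
n = 150_000
extra_cost = (176 - 173) * n              # $450,000 incremental cost per cohort
cases_prevented = extra_cost / 570_496    # ~0.79 cases averted (implied, illustrative)
print(icer(extra_cost, cases_prevented))  # recovers ~$570,496 per case prevented
```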
New, national bottom-up estimate for tree-based biological ...
Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha⁻¹ yr⁻¹) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating
Model-Based Engine Control Architecture with an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
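The filter itself follows the textbook predict/update cycle wrapped around the on-board nonlinear model. A minimal sketch, with all function names hypothetical and the engine model abstracted behind f, h, and their Jacobians:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter: propagate the
    state through the nonlinear model, then correct with the sensor residual."""
    x_pred = f(x, u)                     # nonlinear state propagation
    F = F_jac(x, u)                      # local linearization of the dynamics
    P_pred = F @ P @ F.T + Q
    H = H_jac(x_pred)                    # local linearization of the measurement
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Re-linearizing F and H around the current estimate at every step is what removes the piece-wise linear model's off-line linearization error.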
NASA Astrophysics Data System (ADS)
Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa
We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator of differentiation, as well as clusters in a correlation network that are not detected with conventional correlation analysis. In summary, we developed the stochastic process-based method SCOUP to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
Managing School-Based Curriculum Innovations: A Hong Kong Case Study
ERIC Educational Resources Information Center
Law, Edmond H. F.; Wan, Sally W. Y.; Galton, Maurice; Lee, John C. K.
2010-01-01
This study was originally designed to explore the impact of a distributed approach to developing curriculum leadership among schoolteachers. Previous papers have focused on reporting evidence of teacher learning in the process of engaging teachers in various types of curriculum decision-making in an innovation project based on interview data. This…
Towards an Integration of Research on Teaching and Learning
ERIC Educational Resources Information Center
Svensson, Lennart
2016-01-01
The aim of this article is to present arguments for an integrated empirical research on teaching and learning based on previous research and the phenomenographic research tradition. From 1970 and for some years after, the main focus in phenomenographic research was on students' approaches to and understanding of subject matter. Later, based on…
The Development and Application of the Coping with Bullying Scale for Children
ERIC Educational Resources Information Center
Parris, Leandra N.
2013-01-01
The Multidimensional Model for Coping with Bullying (MMCB; Parris, in development) was conceptualized based on a literature review of coping with bullying and by combining relevant aspects of previous models. Strategies were described based on their focus (problem-focused vs. emotion-focused) and orientation (avoidance, approach-self,…
Promoting Positive Academic Dispositions Using a Web-Based PBL Environment: The GlobalEd 2 Project
ERIC Educational Resources Information Center
Brown, Scott W.; Lawless, Kimberly A.; Boyer, Mark A.
2013-01-01
Problem-based learning (PBL) is an instructional design approach for promoting student learning, understanding and knowledge development in context rich settings. Previous PBL research has primarily focused on face-to-face learning environments, but current technologies afford PBL designers the opportunities to create online, virtual, PBL…
The Validity of Computer Audits of Simulated Cases Records.
ERIC Educational Resources Information Center
Rippey, Robert M.; And Others
This paper describes the implementation of a computer-based approach to scoring open-ended problem lists constructed to evaluate student and practitioner clinical judgment from real or simulated records. Based on 62 previously administered and scored problem lists, the program was written in BASIC for a Heathkit H11A computer (equivalent to DEC…
ERIC Educational Resources Information Center
Lousada, M.; Jesus, Luis M. T.; Hall, A.; Joffe, V.
2014-01-01
Background: The effectiveness of two treatment approaches (phonological therapy and articulation therapy) for treatment of 14 children, aged 4;0-6;7 years, with phonologically based speech-sound disorder (SSD) has been previously analysed with severity outcome measures (percentage of consonants correct score, percentage occurrence of phonological…
Group Communication and Critical Thinking Competence Development Using a Reality-Based Project
ERIC Educational Resources Information Center
Paulson, Edward
2011-01-01
The presented merger and acquisition classroom exercise is based on a real yet incomplete transaction transpiring during the period of the class. The approach enables adult students to apply their previously acquired business experience to a strategic analysis project facilitating the development of group communication, critical thinking, and…
Virtual-Recitation: A World Wide Web Based Approach to Active Learning in Clinical Pharmacokinetics.
ERIC Educational Resources Information Center
Woodward, Donald K.
1998-01-01
Describes implementation, evaluation of World Wide Web-based component in a Rutgers University (New Jersey) advanced clinical pharmacokinetics course. Scheduling accommodated nontraditional students; each week Web pages providing review and supplementary material and an online quiz were posted after class. Comparison with the previous year's…
Portuguese Public University Student Satisfaction: A Stakeholder Theory-Based Approach
ERIC Educational Resources Information Center
Mainardes, Emerson; Alves, Helena; Raposo, Mario
2013-01-01
In accordance with the importance of the student stakeholder to universities, the objective of this research project was to evaluate student satisfaction at Portuguese public universities as regards their self-expressed core expectations. The research was based both on stakeholder theory itself and on previous studies of university stakeholders.…
Covalent Surface Modification of Silicon Oxides with Alcohols in Polar Aprotic Solvents.
Lee, Austin W H; Gates, Byron D
2017-09-05
Alcohol-based monolayers were successfully formed on the surfaces of silicon oxides through reactions performed in polar aprotic solvents. Monolayers prepared from alcohol-based reagents have been previously introduced as an alternative approach to covalently modify the surfaces of silicon oxides. These reagents are readily available, widely distributed, and are minimally susceptible to side reactions with ambient moisture. A limitation of using alcohol-based compounds is that previous reactions required relatively high temperatures in neat solutions, which can degrade some alcohol compounds or could lead to other unwanted side reactions during the formation of the monolayers. To overcome these challenges, we investigate the condensation reaction of alcohols on silicon oxides carried out in polar aprotic solvents. In particular, propylene carbonate has been identified as a polar aprotic solvent that is relatively nontoxic, readily accessible, and can facilitate the formation of alcohol-based monolayers. We have successfully demonstrated this approach for tuning the surface chemistry of silicon oxide surfaces with a variety of alcohol containing compounds. The strategy introduced in this research can be utilized to create silicon oxide surfaces with hydrophobic, oleophobic, or charged functionalities.
Value of recruitment strategies used in a primary care practice-based trial.
Ellis, Shellie D; Bertoni, Alain G; Bonds, Denise E; Clinch, C Randall; Balasubramanyam, Aarthi; Blackwell, Caroline; Chen, Haiying; Lischke, Michael; Goff, David C
2007-05-01
"Physicians-recruiting-physicians" is the preferred recruitment approach for practice-based research. However, yields are variable; and the approach can be costly and lead to biased, unrepresentative samples. We sought to explore the potential efficiency of alternative methods. We conducted a retrospective analysis of the yield and cost of 10 recruitment strategies used to recruit primary care practices to a randomized trial to improve cardiovascular disease risk factor management. We measured response and recruitment yields and the resources used to estimate the value of each strategy. Providers at recruited practices were surveyed about motivation for participation. Response to 6 opt-in marketing strategies was 0.40% (53/13290), ranging from 0% to 2.86% by strategy; 33.96% (18/53) of responders were recruited to the study. Of those recruited from opt-out strategies, 8.68% joined the study, ranging from 5.35% to 41.67% per strategy. A strategy that combined both opt-in and opt-out approaches resulted in a 51.14% (90/176) response and a 10.80% (19/90) recruitment rate. Cost of recruitment was $613 per recruited practice. Recruitment approaches based on in-person meetings (41.67%), previous relationships (33.33%), and borrowing an Area Health Education Center's established networks (10.80%), yielded the most recruited practices per effort and were most cost efficient. Individual providers who chose to participate were motivated by interest in improving their clinical practice (80.5%); contributing to CVD primary prevention (54.4%); and invigorating their practice with new ideas (42.1%). This analysis provides suggestions for future recruitment efforts and research. Translational studies with limited funds could consider multi-modal recruitment approaches including in-person presentations to practice groups and exploitation of previous relationships, which require the providers to opt-out, and interactive opt-in approaches which rely on borrowed networks. These approaches can be supplemented with non-relationship-based opt-out strategies such as cold calls strategically targeted to underrepresented provider groups.
Petri net modeling of high-order genetic systems using grammatical evolution.
Moore, Jason H; Hahn, Lance W
2003-11-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.
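To ground the Petri net half of the approach, here is a minimal token-firing sketch; in the full method, grammatical evolution would generate and score candidate net structures against the genetic model, which is beyond this illustration. The two-step "pathway" is invented for the example:

```python
def fire(marking, transition):
    """Fire one Petri net transition if enabled: consume input tokens,
    produce output tokens. marking maps place -> token count."""
    inputs, outputs = transition
    if all(marking.get(p, 0) >= n for p, n in inputs.items()):
        new = dict(marking)
        for p, n in inputs.items():
            new[p] -= n
        for p, n in outputs.items():
            new[p] = new.get(p, 0) + n
        return new
    return marking  # transition not enabled: no change

# Toy two-step "pathway": substrate -> intermediate -> product
t1 = ({"substrate": 1}, {"intermediate": 1})
t2 = ({"intermediate": 1}, {"product": 1})
m = {"substrate": 2}
for t in (t1, t2, t1, t2):
    m = fire(m, t)
print(m)  # {'substrate': 0, 'intermediate': 0, 'product': 2}
```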
Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations.
Agapito, Giuseppe; Milano, Marianna; Guzzi, Pietro Hiram; Cannataro, Mario
2016-01-01
Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated with one or more gene products through a process referred to as annotation. The analysis of annotated data is an important opportunity for bioinformatics. Among the different approaches to such analysis, the use of association rules (AR) provides useful knowledge by discovering biologically relevant associations between GO terms that were not previously known. In previous work, we introduced GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules from ontology-based annotated datasets. Here we adapt the GO-WAR algorithm to mine cross-ontology association rules, i.e., rules that involve GO terms present in the three sub-ontologies of GO. We conduct a thorough performance evaluation of GO-WAR by mining publicly available GO-annotated datasets, showing how GO-WAR outperforms current state-of-the-art approaches.
Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si
NASA Technical Reports Server (NTRS)
Wang, Z. Q.; Stroud, D.
1990-01-01
Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.
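The first method rests on histogram reweighting between nearby temperatures. A minimal single-histogram sketch, assuming configurations sampled at β_sim with recorded energies; the surface-entropy estimate then follows from differencing quantities obtained at the two simulated temperatures:

```python
import numpy as np

def reweight_mean(E, O, beta_sim, beta_new):
    """Ferrenberg-Swendsen reweighting: estimate <O> at beta_new from
    configurations (energies E, observable values O) sampled at beta_sim."""
    w = np.exp(-(beta_new - beta_sim) * (E - E.min()))  # shift for numerical stability
    return np.sum(w * O) / np.sum(w)
```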
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
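As a sketch of the core machinery, the simplified single-response kernel PLS fit below extracts score directions from a Gram matrix and projects the noisy response onto them; kernel centering and the locally-based weighting are omitted, and the names and data are illustrative:

```python
import numpy as np

def kernel_pls_fit(K, y, n_components):
    """Single-response kernel PLS (NIPALS) fitted values: extract successive
    score directions from the Gram matrix K, then project y onto them.
    (Kernel centering is omitted for brevity.)"""
    Kd, yd, T = K.copy(), y.astype(float).copy(), []
    for _ in range(n_components):
        t = Kd @ yd
        t /= np.linalg.norm(t)
        T.append(t)
        P = np.eye(len(t)) - np.outer(t, t)   # deflate by the extracted score
        Kd, yd = P @ Kd @ P, yd - t * (t @ yd)
    T = np.column_stack(T)
    return T @ (T.T @ y)   # smoothed curve at the training points

# Gaussian-kernel smoothing of noisy samples from a smooth target function
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).normal(size=100)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
y_smooth = kernel_pls_fit(K, y, n_components=4)
```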
Optical microwave filter based on spectral slicing by use of arrayed waveguide gratings.
Pastor, Daniel; Ortega, Beatriz; Capmany, José; Sales, Salvador; Martinez, Alfonso; Muñoz, Pascual
2003-10-01
We have experimentally demonstrated a new optical signal processor based on the use of arrayed waveguide gratings. The structure exploits the concept of spectral slicing combined with the use of an optical dispersive medium. The approach presents increased flexibility from previous slicing-based structures in terms of tunability, reconfiguration, and apodization of the samples or coefficients of the transversal optical filter.
Oduru, Sreedhar; Campbell, Janee L; Karri, SriTulasi; Hendry, William J; Khan, Shafiq A; Williams, Simon C
2003-01-01
Background Complete genome annotation will likely be achieved through a combination of computer-based analysis of available genome sequences combined with direct experimental characterization of expressed regions of individual genomes. We have utilized a comparative genomics approach involving the sequencing of randomly selected hamster testis cDNAs to begin to identify genes not previously annotated on the human, mouse, rat and Fugu (pufferfish) genomes. Results 735 distinct sequences were analyzed for their relatedness to known sequences in public databases. Eight of these sequences were derived from previously unidentified genes and expression of these genes in testis was confirmed by Northern blotting. The genomic locations of each sequence were mapped in human, mouse, rat and pufferfish, where applicable, and the structure of their cognate genes was derived using computer-based predictions, genomic comparisons and analysis of uncharacterized cDNA sequences from human and macaque. Conclusion The use of a comparative genomics approach resulted in the identification of eight cDNAs that correspond to previously uncharacterized genes in the human genome. The proteins encoded by these genes included a new member of the kinesin superfamily, a SET/MYND-domain protein, and six proteins for which no specific function could be predicted. Each gene was expressed primarily in testis, suggesting that they may play roles in the development and/or function of testicular cells. PMID:12783626
NASA Astrophysics Data System (ADS)
Hatté, C.; Rousseau, D.-D.; Guiot, J.
2009-04-01
An improved inverse vegetation model has been designed to better specify both temperature and precipitation estimates from vegetation descriptions. It is based on the BIOME4 vegetation model and uses both vegetation δ13C and biome as constraints. Previous inverse models based on only one of the two proxies were already improvements over standard reconstruction methods such as the modern analog, since these did not take into account some external forcings, for example CO2 concentration. This new approach makes it possible to describe a potential "isotopic niche" defined by analogy with the "climatic niche" theory. Boreal and temperate biomes simulated by BIOME4 are considered in this study. We demonstrate the impact of CO2 concentration on biome existence domains by replacing a "most likely biome" with another with increased CO2 concentration. Additionally, the climate imprint on δ13C between and within biomes is shown: the colder the biome, the lighter its potential isotopic niche; and the higher the precipitation, the lighter the δ13C. For paleoclimate purposes, previous inverse models based on either biome or δ13C alone did not allow informative paleoclimatic reconstructions of both precipitation and temperature. Application of the new approach to the Eemian of the La Grande Pile palynological and geochemical records reduces the range in precipitation values by more than 50% and reduces the range in temperatures by about 15% compared to previous inverse modeling approaches. This shows evidence of climate instabilities during the Eemian period that can be correlated with independent continental and marine records.
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which we had previously proposed, improved on the original method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property - that the average color of opponent colors is achromatic - instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach we propose in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used for adding appropriate colors to the original image, and low-chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than that of the conventional method.
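As background, the baseline gray-world estimate that the method refines can be written in a few lines; this sketch (with hypothetical function names) shows only that baseline, not the opponent-color selection or the gamut machinery:

```python
import numpy as np

def gray_world_illuminant(img):
    """Classic gray-world estimate: the illuminant is proportional to the
    per-channel mean of an (H, W, 3) float image."""
    illum = img.reshape(-1, 3).mean(axis=0)
    return illum / illum.max()

def correct(img, illum):
    """Von Kries-style correction: divide out the estimated illuminant."""
    return np.clip(img / illum, 0.0, 1.0)

# usage: corrected = correct(img, gray_world_illuminant(img))
```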
Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data
NASA Astrophysics Data System (ADS)
Kim, Hea-Suk; Moon, Yang-Sae
Privacy-preserving clustering (PPC for short) is important in publishing sensitive time-series data. Previous PPC solutions, however, either fail to preserve distance orders or incur privacy breaches. To solve this problem, we propose a new PPC approach that exploits the Fourier magnitudes of time series. Our magnitude-based method does not cause a privacy breach even if its techniques or related parameters are publicly revealed. Using magnitudes only, however, incurs the distance order problem, and we thus present magnitude selection strategies to preserve as many Euclidean distance orders as possible. Through extensive experiments, we show the superiority of our magnitude-based approach.
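A rough sketch of the magnitude-based pipeline: publish only Fourier magnitudes (phase is discarded, so the original series cannot be reconstructed) and cluster on them. The "largest-k" selection below is a simplification of the paper's magnitude selection strategies, and all names and data are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def magnitude_features(series, k):
    """Keep only the k Fourier magnitudes with the largest average energy.
    Magnitudes discard phase, which is the privacy argument; choosing the
    coefficients that best preserve distance orders is the paper's actual
    contribution and is simplified to 'largest-k' here."""
    mags = np.abs(np.fft.rfft(series, axis=1))
    idx = np.argsort(mags.mean(axis=0))[::-1][:k]  # shared coefficient set
    return mags[:, idx]

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 64)).cumsum(axis=1)  # toy time series
labels = KMeans(n_clusters=4, n_init=10).fit_predict(magnitude_features(data, 8))
```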
A DTN-Based Multiple Access Fast Forward Service for the NASA Space Network
NASA Technical Reports Server (NTRS)
Israel, David; Davis, Faith; Marquart, Jane
2011-01-01
The NASA Space Network provides a demand access return link service capable of providing users a space link "on demand". An equivalent service in the forward link direction is not possible due to Tracking and Data Relay Spacecraft (TDRS) constraints. A Disruption Tolerant Networking (DTN)-based Multiple Access Fast Forward (MAFF) service has been proposed to provide a forward link to a user as soon as possible. Previous concept studies have identified a basic architecture and implementation approach. This paper reviews the user scenarios and benefits of an MAFF service and proposes an implementation approach based on the use of DTN protocols.
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.
2018-02-01
In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such a variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for excluding false variant identifications from the search results by analyzing modifications that mimic single amino acid substitutions. Also, we propose a de novo based scoring scheme for assessing identified point mutations. In this scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
Coherence and visibility for vectorial light.
Luis, Alfredo
2010-08-01
Two-path interference of transversal vectorial waves is embedded within a larger scheme: this is four-path interference between four scalar waves. This comprises previous approaches to coherence between vectorial waves and restores the equivalence between correlation-based coherence and visibility.
Automatic segmentation of colon glands using object-graphs.
Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk
2010-02-01
Gland segmentation is an important step in automating the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variance in tissue sections, which may result in large variances in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified with the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.
ERIC Educational Resources Information Center
Berg, Ronan M. G.; Plovsing, Ronni R.; Damgaard, Morten
2012-01-01
Quiz-based and collaborative teaching strategies have previously been found to be efficient for improving meaningful learning of physiology during lectures. These approaches have, however, not been investigated during laboratory exercises. In the present study, we compared the impact of solving quizzes individually and in groups with…
Effective and Ethical Interviewing of Young Children in Pedagogical Context
ERIC Educational Resources Information Center
Dunphy, Elizabeth
2005-01-01
Ethical and effective interviewing of young children in relation to their learning is a challenging and complex process. This paper describes the use of an experience-based flexible and focused interview methodology in a study based on young children's views and understandings of number. It shows how the approach used builds on previous work in…
Meta-Analysis of Inquiry-Based Learning: Effects of Guidance
ERIC Educational Resources Information Center
Lazonder, Ard W.; Harmsen, Ruth
2016-01-01
Research has consistently shown that inquiry-based learning can be more effective than other, more expository instructional approaches as long as students are supported adequately. But what type of guidance is adequate, and for whom? These questions are difficult to answer as most previous research has only focused on one type of guidance and one…
Re-Exploring Game-Assisted Learning Research: The Perspective of Learning Theoretical Bases
ERIC Educational Resources Information Center
Wu, Wen-Hsiung; Chiou, Wen-Bin; Kao, Hao-Yun; Hu, Chung-Hsing Alex; Huang, Sih-Han
2012-01-01
Previous literature reviews or meta-analysis based studies on game-assisted learning have provided important results, but few studies have considered the importance of learning theory, and coverage of papers after 2007 is scant. This study presents a systematic review of the literature using a meta-analysis approach to provide a more comprehensive…
A Service-Learning Initiative within a Community-Based Small Business
ERIC Educational Resources Information Center
Simola, Sheldene
2009-01-01
Purpose: The purpose of this paper is to extend previous scholarly writing on community service-learning (SL) initiatives by looking beyond their use in the not-for-profit sector to their potential use in community-based small businesses. Design/methodology/approach: A rationale for the appropriateness of using SL projects in small businesses is…
ERIC Educational Resources Information Center
Chad, Paul
2012-01-01
Marketing educators are often faced with poor preclass preparation by students, declining student interest in attending classes as the semester progresses, and student complaints regarding previous bad experiences with team assessment activities. Team-based learning (TBL) is an innovative teaching strategy using semiformalized guidelines aimed to…
Spatial-temporal event detection in climate parameter imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenna, Sean Andrew; Gutierrez, Karen A.
Previously developed techniques that comprise statistical parametric mapping, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely-sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial-temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the earth, and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and in Southern India.
Management Approaches to Stomal and Peristomal Complications: A Narrative Descriptive Study.
Beitz, Janice M; Colwell, Janice C
2016-01-01
The purpose of this study was to identify optimal interventions for selected complications based on WOC nurse experts' judgment/expertise. A cross-sectional quantitative descriptive design with qualitative, narrative-type components was used for this study. Following validation rating of appropriateness of interventions and quantitative rankings of first-, second-, and third-line approaches, participants provided substantive handwritten narrative comments about listed interventions. Comments were organized and prioritized using frequency count. Narrative comments reflected the quantitative rankings of efficacy of approaches. Clinicians offered further specific suggestions regarding product use and progression of care for selected complications. Narrative analysis using descriptive quantitative frequency count supported the rankings of most preferred treatments of selected stomal and peristomal complications. Findings add to the previous research on prioritized approaches and evidence-based practice in ostomy care.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a previous stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
None of the above: A Bayesian account of the detection of novel categories.
Navarro, Daniel J; Kemp, Charles
2017-10-01
Every time we encounter a new object, action, or event, there is some chance that we will need to assign it to a novel category. We describe and evaluate a class of probabilistic models that detect when an object belongs to a category that has not previously been encountered. The models incorporate a prior distribution that is influenced by the distribution of previous objects among categories, and we present 2 experiments that demonstrate that people are also sensitive to this distributional information. Two additional experiments confirm that distributional information is combined with similarity when both sources of information are available. We compare our approach to previous models of unsupervised categorization and to several heuristic-based models, and find that a hierarchical Bayesian approach provides the best account of our data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
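One common way to formalize such a prior is a Chinese-restaurant-process-style rule, in which the probability of a novel category grows with a concentration parameter and shrinks as observations accumulate. The sketch below is in the spirit of the models described, with made-up likelihood values; it is not the authors' exact formulation:

```python
import numpy as np

def novel_category_posterior(counts, lik_known, lik_new, alpha=1.0):
    """Posterior probability that a new object comes from a previously
    unseen category, combining a CRP-like prior over categories (driven
    by how earlier objects were distributed) with similarity-based
    likelihoods for each known category and for a novel one."""
    counts = np.asarray(counts, float)
    n = counts.sum()
    prior_known = counts / (n + alpha)   # rich-get-richer prior
    prior_new = alpha / (n + alpha)
    post = np.append(prior_known * lik_known, prior_new * lik_new)
    post /= post.sum()
    return post[-1]                      # P(novel category | object)

# Ten previous objects spread over 3 categories, a poorly fitting new object:
print(novel_category_posterior([5, 3, 2],
                               lik_known=np.array([0.1, 0.2, 0.1]),
                               lik_new=0.8))
```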
The global magnitude-frequency relationship for large explosive volcanic eruptions
NASA Astrophysics Data System (ADS)
Rougier, Jonathan; Sparks, R. Stephen J.; Cashman, Katharine V.; Brown, Sarah K.
2018-01-01
For volcanoes, as for other natural hazards, the frequency of large events diminishes with their magnitude, as captured by the magnitude-frequency relationship. Assessing this relationship is valuable both for the insights it provides about volcanism, and for the practical challenge of risk management. We derive a global magnitude-frequency relationship for explosive volcanic eruptions of at least 300 Mt of erupted mass (or M4.5). Our approach is essentially empirical, based on the eruptions recorded in the LaMEVE database. It differs from previous approaches mainly in our conservative treatment of magnitude-rounding and under-recording. Our estimate for the return period of 'super-eruptions' (1000 Gt, or M8) is 17 ka (95% CI: 5.2 ka, 48 ka), which is substantially shorter than previous estimates, indicating that volcanoes pose a larger risk to human civilisation than previously thought.
Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E
2017-08-15
Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real time. The method was validated based on a univariate and a multivariate approach, and the analytical performances of the obtained models were compared. Moreover, on one hand, in-line data were correlated with the real API concentration present in the sample as quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighed amounts of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at ±5%. This method meets the requirements of the European Pharmacopeia norms for the uniformity of content of single-dose preparations. The validation proves that future results will fall within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its ability to be used in routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
Hanna, Lezley-Anne; Hughes, Carmel
2012-12-01
To explore the role of evidence of effectiveness when making decisions about over-the-counter (OTC) medication and to ascertain whether evidence-based medicine training raised awareness in decision-making. Additionally, this work aimed to complement the findings of a previous study because all participants in this current study had received training in evidence-based medicine (unlike the previous participants). Following ethical approval and an e-mailed invitation, face-to-face, semi-structured interviews were conducted with newly registered pharmacists (who had received training in evidence-based medicine as part of their MPharm degree) to discuss the role of evidence of effectiveness with OTC medicines. Interviews were recorded and transcribed verbatim. Following transcription, all data were entered into the NVivo software package (version 8). Data were coded and analysed using a constant comparison approach. Twenty-five pharmacists (7 males and 18 females; registered for less than 4 months) were recruited and all participated in the study. Their primary focus with OTC medicines was safety; sales of products (including those that lack evidence of effectiveness) were justified provided they did no harm. Meeting patient expectation was also an important consideration and often superseded evidence. Despite knowledge of the concept, and an awareness of ethical requirements, an evidence-based approach was not routinely implemented by these pharmacists. Pharmacists did not routinely utilize evidence-based resources when making decisions about OTC medicines and some felt uncomfortable discussing the evidence-base for OTC products with patients. The evidence-based medicine training that these pharmacists received appeared to have limited influence on OTC decision-making. More work could be conducted to ensure that an evidence-based approach is routinely implemented in practice. © 2012 The Authors. IJPP © 2012 Royal Pharmaceutical Society.
Chen, Chen; Liu, Xiaohui; Zheng, Weimin; Zhang, Lei; Yao, Jun; Yang, Pengyuan
2014-04-04
To completely annotate the human genome, the task of identifying and characterizing proteins that currently lack mass spectrometry (MS) evidence is inevitable and urgent. In this study, as the first effort to screen missing proteins at large scale, we developed an approach based on SDS-PAGE followed by liquid chromatography-multiple reaction monitoring (LC-MRM) for screening those missing proteins with only a single peptide hit in the previous liver proteome data set. Proteins extracted from normal human liver were separated by SDS-PAGE and digested in split gel slices, and the resulting digests were then subjected to LC-scheduled MRM analysis. The MRM assays were developed through synthesized crude peptides for target peptides. In total, the expression of 57 target proteins was confirmed from 185 MRM assays in normal human liver tissues. Among the proved 57 one-hit wonders, 50 proteins are of the minimally redundant set in the PeptideAtlas database, and 7 proteins involved in various biological processes previously had no MS-based information at all. We conclude that our SDS-PAGE-MRM workflow can be a powerful approach to screen missing or poorly characterized proteins in different samples and to provide their quantity if detected. The MRM raw data have been uploaded to ISB/SRM Atlas/PASSEL (PXD000648).
Two-Phase chief complaint mapping to the UMLS metathesaurus in Korean electronic medical records.
Kang, Bo-Yeong; Kim, Dae-Won; Kim, Hong-Gee
2009-01-01
The task of automatically determining the concepts referred to in chief complaint (CC) data from electronic medical records (EMRs) is an essential component of many EMR applications aimed at biosurveillance for disease outbreaks. Previous approaches to this concept mapping have mainly relied on term-level matching, whereby the medical terms in the raw text and their synonyms are matched with concepts in a terminology database. These approaches, however, have shortcomings that limit their efficacy in CC concept mapping, where the concepts for CC data are often represented by associative terms rather than by synonyms. Therefore, herein we propose a concept mapping scheme based on a two-phase matching approach, especially for application to Korean CCs, which uses term-level complete matching in the first phase and concept-level matching based on concept learning in the second phase. The proposed concept-level matching learns all the terms (associative terms as well as synonyms) that represent a concept and predicts the most probable concept for a CC based on the learned terms. Experiments on 1204 CCs extracted from 15,618 discharge summaries of Korean EMRs showed that the proposed method gave significantly improved F-measure values compared to the baseline system, with improvements of up to 73.57%.
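A toy sketch of the two-phase idea, with hypothetical data structures: phase 1 tries exact term-level matching against a terminology index, and phase 2 falls back to a learned term-concept association model in which associative terms, not just synonyms, vote for concepts:

```python
from collections import defaultdict

def build_concept_model(training_pairs):
    """Phase-2 model: count how often each term co-occurs with each
    concept in annotated training data (a simplified stand-in for the
    concept-learning step the abstract outlines)."""
    term_concept = defaultdict(lambda: defaultdict(int))
    for terms, concept in training_pairs:
        for t in terms:
            term_concept[t][concept] += 1
    return term_concept

def map_chief_complaint(cc_terms, exact_index, term_concept):
    # Phase 1: term-level complete matching against the terminology index.
    key = " ".join(sorted(cc_terms))
    if key in exact_index:
        return exact_index[key]
    # Phase 2: concept-level matching - pick the concept with the highest
    # summed term votes from the learned associations.
    scores = defaultdict(int)
    for t in cc_terms:
        for concept, n in term_concept.get(t, {}).items():
            scores[concept] += n
    return max(scores, key=scores.get) if scores else None
```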
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong
2018-03-01
In the development of treatments for cardiovascular diseases, short axis cardiac cine MRI is important for the assessment of various structural and functional properties of the heart. In short axis cardiac cine MRI, cardiac properties including the ventricle dimensions, stroke volume, and ejection fraction can be extracted based on accurate segmentation of the left ventricle (LV) myocardium. One of the most advanced segmentation methods is based on fully convolutional neural networks (FCN) and can successfully segment individual cardiac cine MRI slices. However, the temporal dependency between slices acquired at neighboring time points is not used. Here, based on our previously proposed FCN structure, we propose a new algorithm to segment LV myocardium in porcine short axis cardiac cine MRI by incorporating convolutional long short-term memory (Conv-LSTM) to leverage the temporal dependency. In this approach, instead of processing each slice independently as in a conventional CNN-based approach, the Conv-LSTM architecture captures the dynamics of cardiac motion over time. In a leave-one-out experiment on 8 porcine specimens (3,600 slices), the proposed approach was shown to be promising by achieving average mean Dice similarity coefficient (DSC) of 0.84, Hausdorff distance (HD) of 6.35 mm, and average perpendicular distance (APD) of 1.09 mm when compared with manual segmentations, improving on our previous FCN-based approach (average mean DSC=0.84, HD=6.78 mm, and APD=1.11 mm). Qualitatively, our model showed robustness against low image quality and complications in the surrounding anatomy due to its ability to capture the dynamics of cardiac motion.
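For concreteness, a minimal convolutional LSTM cell of the kind used to carry information across neighboring time frames can be written in PyTorch as below; the channel sizes and the surrounding loop are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: the four LSTM gates are computed
    by one convolution over the concatenated input and hidden state, so
    the recurrence preserves spatial structure across cine frames."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Run a feature sequence of 20 time points through the cell:
cell = ConvLSTMCell(in_ch=16, hid_ch=32)
h = torch.zeros(1, 32, 64, 64); c = torch.zeros_like(h)
for t in range(20):
    h, c = cell(torch.randn(1, 16, 64, 64), (h, c))
```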
Global Kalman filter approaches to estimate absolute angles of lower limb segments.
Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L
2017-05-16
In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link angle estimations (e.g., foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with average root mean square errors (RMSE) of 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend using global KFs for gait analysis and exoskeleton control.
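The core of any such filter is the standard predict/update cycle; in a "global" configuration the state vector stacks the angles of all segments, so one update uses all measurements at once through the shared covariance. The generic sketch below (plain linear KF with illustrative matrices) is not the MJLS formulation of the paper:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter. With x
    stacking the angles of all lower-limb segments, every gyro,
    accelerometer, and encoder measurement in z informs every segment
    estimate through the shared covariance P."""
    x = F @ x                      # predict state
    P = F @ P @ F.T + Q            # predict covariance
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)        # update with all measurements at once
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```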
Hogue, Aaron; Henderson, Craig E; Becker, Sara J; Knight, Danica K
2018-06-12
This article updates the evidence base on outpatient behavioral treatments for adolescent substance use (ASU) since publication of the previous review completed for this journal by Hogue, Henderson, Ozechowski, and Robbins (2014). It first summarizes the Hogue et al. findings along with those from recent literature reviews and meta-analytic studies of ASU treatments. It then presents study design and methods criteria used to select 11 comparative studies subjected to Journal of Clinical Child and Adolescent Psychology level of support evaluation. These 11 studies are detailed in terms of their sample characteristics, methodological quality, and substance use outcomes. Cumulative level of support designations are then made for each identified treatment approach. These cumulative designations are virtually identical to those of the previous review: ecological family-based treatment, individual cognitive-behavioral therapy, and group cognitive-behavioral therapy remain well-established; behavioral family-based treatment and motivational interviewing remain probably efficacious; drug counseling remains possibly efficacious; and an updated total of 5 multicomponent treatments combining more than 1 approach (3 of which include contingency management) are deemed well-established or probably efficacious. Treatment delivery issues associated with evidence-based approaches are then reviewed, focusing on client engagement, fidelity and mediator, and predictor and moderator effects. Finally, to help accelerate innovation in ASU treatment science and practice, the article outlines promising horizons in improving youth identification and access, specifying and implementing pragmatic treatment in community settings, and leveraging emerging lessons from implementation science.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN-based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the given stack. In addition, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
Wörz, Stefan; Rohr, Karl
2006-01-01
We introduce an elastic registration approach which is based on a physical deformation model and uses Gaussian elastic body splines (GEBS). We formulate an extended energy functional related to the Navier equation under Gaussian forces which also includes landmark localization uncertainties. These uncertainties are characterized by weight matrices representing anisotropic errors. Since the approach is based on a physical deformation model, cross-effects in elastic deformations can be taken into account. Moreover, we have a free parameter to control the locality of the transformation for improved registration of local geometric image differences. We demonstrate the applicability of our scheme based on 3D CT images from the Truth Cube experiment, 2D MR images of the brain, as well as 2D gel electrophoresis images. It turns out that the new scheme achieves more accurate results compared to previous approaches.
Parish, William J; Aldridge, Arnie; Allaire, Benjamin; Ekwueme, Donatus U; Poehler, Diana; Guy, Gery P; Thomas, Cheryll C; Trogdon, Justin G
2017-11-01
To assess the burden of excessive alcohol use, researchers routinely estimate alcohol-attributable fractions (AAFs). However, under-reporting in survey data can bias these estimates. We present an approach that adjusts for under-reporting in the estimation of AAFs, particularly within subgroups. This framework is a refinement of a previous method developed by Rehm et al. We use a measurement error model to derive the 'true' alcohol distribution from a 'reported' alcohol distribution. The 'true' distribution leverages per-capita sales data to identify the distribution average and then identifies the shape of the distribution from self-reported survey data. Data are from the National Alcohol Survey (NAS), the National Household Survey on Drug Abuse (NHSDA) and the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). We compared our approach with previous approaches by estimating the AAF of female breast cancer cases. Compared with Rehm et al.'s approach, our refinement performs similarly under a gamma assumption. For example, among females aged 18-25 years, the two approaches produce estimates from NHSDA that are within a percentage point. However, relaxing the gamma assumption generally produces more conservative evidence. For example, among females aged 18-25 years, estimates from NHSDA based on the best-fitting distribution attribute only 19.33% of breast cancer cases, a much smaller proportion than the gamma-based estimates of approximately 28%. A refinement of Rehm et al.'s approach to adjusting for under-reporting in the estimation of alcohol-attributable fractions provides more flexibility. This flexibility can avoid biases associated with failing to account for underlying differences in alcohol consumption patterns across different study populations. Comparisons of our refinement with Rehm et al.'s approach show that results are similar when a gamma distribution is assumed, but appreciably lower when the best-fitting distribution is chosen instead. © 2017 Society for the Study of Addiction.
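A simplified sketch of the shift-the-mean idea under the gamma assumption: fit the shape to survey data, rescale so the mean matches per-capita sales, then integrate a relative-risk curve against the adjusted density to get the AAF. The toy risk curve and all parameter values are assumptions for illustration, not the paper's fitted models:

```python
import numpy as np
from scipy import stats

def adjusted_aaf(survey_grams, percap_mean, rr, grid_max=300.0):
    """Gamma model of daily intake: keep the survey-implied shape but
    rescale the mean to per-capita sales, then compute the
    alcohol-attributable fraction from a relative-risk curve rr(x)."""
    shape, _, scale = stats.gamma.fit(survey_grams, floc=0)
    scale = percap_mean / shape             # rescale mean to sales data
    x = np.linspace(0.01, grid_max, 2000)
    f = stats.gamma.pdf(x, shape, scale=scale)
    mean_rr = np.trapz(rr(x) * f, x) / np.trapz(f, x)
    return (mean_rr - 1.0) / mean_rr        # standard AAF formula

rng = np.random.default_rng(0)
reported = rng.gamma(0.8, 12.0, size=5000)  # toy under-reported intake
print(adjusted_aaf(reported, percap_mean=20.0, rr=lambda x: 1.0 + 0.01 * x))
```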
GPU based contouring method on grid DEM data
NASA Astrophysics Data System (ADS)
Tan, Liheng; Wan, Gang; Li, Feng; Chen, Xiaohui; Du, Wenlong
2017-08-01
This paper presents a novel method to generate contour lines from grid DEM data based on the programmable GPU pipeline. Previous contouring approaches often use the CPU to construct a finite element mesh from the raw DEM data and then extract contour segments from the elements; they also need a tracing or sorting strategy to generate the final continuous contours. These approaches can be heavily CPU-bound and time-consuming, and the generated contours can be unsmooth if the raw data are sparsely distributed. Unlike the CPU approaches, we employ the GPU's vertex shader to generate a triangular mesh with arbitrary user-defined density, in which the height of each vertex is calculated through a third-order Cardinal spline function. Then, in the same frame, segments are extracted from the triangles by the geometry shader and transferred to the CPU side with an internal order in the GPU's transform feedback stage. Finally, we propose a "Grid Sorting" algorithm to achieve continuous contour lines by traversing the segments only once. Our method makes use of multiple stages of the GPU pipeline for computation, generates smooth contour lines, and is significantly faster than previous CPU approaches. The algorithm can be easily implemented with the OpenGL 3.3 API or higher on consumer-level PCs.
Human motion retrieval from hand-drawn sketch.
Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee
2012-05-01
The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from the large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users' expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the processing-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries once users are familiar with it. Finally, an application generating a 3D animation from a hand-drawn comic strip is demonstrated.
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance: correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
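A minimal sketch of the bias-correction step: interpolate station biases (model minus observation) over the grid with a simple kriging scheme and subtract the result from the model field. The exponential covariance and its parameters below are illustrative assumptions; the paper's bias kriging fits these to the data:

```python
import numpy as np

def krige_bias(stations, bias, grid, length=50.0, sill=1.0, nugget=1e-6):
    """Simple kriging of station biases onto grid points under an
    exponential covariance model; returns the bias field at the grid."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / length)
    C = cov(stations, stations) + nugget * np.eye(len(stations))
    w = np.linalg.solve(C, cov(stations, grid))       # kriging weights
    return w.T @ (bias - bias.mean()) + bias.mean()   # bias at grid points

# Hypothetical usage, with xy_* as (n, 2) coordinate arrays:
# corrected = model_on_grid - krige_bias(xy_stations,
#                                        model_at_stations - obs, xy_grid)
```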
A voxel-based approach to gray matter asymmetries.
Luders, E; Gaser, C; Jancke, L; Schlaug, G
2004-06-01
Voxel-based morphometry (VBM) was used to analyze gray matter (GM) asymmetries in a large sample (n = 60) of male and female professional musicians with and without absolute pitch (AP). We chose to examine these particular groups because previous studies using traditional region-of-interest (ROI) analyses have shown differences in hemispheric asymmetry related to AP and gender. Voxel-based methods may have advantages over traditional ROI-based methods since the analysis can be performed across the whole brain with minimal user bias. After determining that the VBM method was sufficiently sensitive for the detection of differences in GM asymmetries between groups, we found that male AP musicians were more leftward lateralized in the anterior region of the planum temporale (PT) than male non-AP musicians. This confirmed the results of previous studies using ROI-based methods that showed an association between PT asymmetry and the AP phenotype. We further observed that male non-AP musicians revealed an increased leftward GM asymmetry in the postcentral gyrus compared to female non-AP musicians, again corroborating results of a previously published study using ROI-based methods. By analyzing hemispheric GM differences across our entire sample, we were able to partially confirm findings of previous studies using traditional morphometric techniques, as well as more recent, voxel-based analyses. In addition, we found some unusually pronounced GM asymmetries in our musician sample not previously detected in subjects unselected for musical training. Since we were able to validate gender- and AP-related brain asymmetries previously described using traditional ROI-based morphometric techniques, the results of our analyses support the use of VBM for examinations of GM asymmetries.
Genetic demographic networks: Mathematical model and applications.
Kimmel, Marek; Wojdyła, Tomasz
2016-10-01
Recent improvement in the quality of genetic data obtained from extinct human populations and their ancestors encourages searching for answers to basic questions regarding human population history. The most common and successful are model-based approaches, in which genetic data are compared to data obtained from an assumed demography model. Using such an approach, it is possible to either validate or adjust the assumed demography. Model fit to data can be obtained based on reverse-time coalescent simulations or forward-time simulations. In this paper we introduce a computational method based on a mathematical equation that allows obtaining joint distributions of pairs of individuals under a specified demography model, each of them characterized by a genetic variant at a chosen locus. The two individuals are randomly sampled from either the same or two different populations. The model assumes three types of demographic events (split, merge and migration). Populations evolve according to the time-continuous Moran model with drift and Markov-process mutation. This latter process is described by the Lyapunov-type equation introduced by O'Brien and generalized in our previous works. Application of this equation constitutes an original contribution. In the results section of the paper we present sample applications of our model to both simulated and literature-based demographies. Among others, we include a study of the Slavs-Balts-Finns genetic relationship, in which we model splits and migrations between the Balts and Slavs. We also include another example that involves the migration rates between farmers and hunter-gatherers, based on modern and ancient DNA samples. This latter process was previously studied using coalescent simulations. Our results are in general agreement with the previous method, which provides validation of our approach. Although our model is not an alternative to simulation methods in the practical sense, it provides an algorithm to compute pairwise distributions of alleles, in the case of haploid non-recombining loci such as mitochondrial and Y-chromosome loci in humans. Copyright © 2016 Elsevier Inc. All rights reserved.
Propiconazole inhibits steroidogenesis and reproduction in the fathead minnow (Pimephales promelas)
This study assessed effects of the conazole-fungicide propiconazole on endocrine function and reproductive success of the fathead minnow, using an experimental approach based on previously defined adverse outcome pathways (AOPs) for chemicals that inhibit steroidogenesis in fish...
Combined PEST and Trial-Error approach to improve APEX calibration
USDA-ARS?s Scientific Manuscript database
The Agricultural Policy Environmental eXtender (APEX), a physically-based hydrologic model that simulates management impacts on the environment for small watersheds, requires improved understanding of the input parameters for improved simulations. However, most previously published studies used the ...
A KPI-based process monitoring and fault detection framework for large-scale processes.
Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang
2017-05-01
Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrumental variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
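For the static case, the idea can be sketched as an ordinary least-squares KPI model monitored through the Mahalanobis norm of its residual; the sketch below is generic and not the paper's exact test statistic or threshold design:

```python
import numpy as np

def fit_kpi_model(X, Y):
    """Static KPI model by ordinary least squares, Y ~ X @ Theta,
    trained on fault-free data only; also returns the residual
    covariance used for monitoring."""
    Theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ Theta
    Sigma = np.atleast_2d(np.cov(resid.T))
    return Theta, Sigma

def kpi_fault_index(x, y, Theta, Sigma):
    """Squared Mahalanobis distance of the KPI residual for one sample;
    compare against a chi-square threshold to flag a KPI-relevant fault."""
    r = y - x @ Theta
    return float(r @ np.linalg.solve(Sigma, r))
```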
Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation
2012-01-01
consider their application to SLAM. The work of [31][32] develops a spline-based SLAM framework, but this is only for application to LIDAR-based SLAM... Existing approaches to visual Simultaneous Localization and Mapping (SLAM) typically utilize points as visual feature primitives to represent landmarks... regions of interest. Further, previous SLAM techniques that propose the use of higher-level structures often place constraints on the environment, such as
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
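As background, the reconstruction-based ingredient alone can be sketched with non-negative least squares: the weights that reconstruct the target patch from atlas patches also average the atlas labels. This sketch omits the classification view and the matrix-completion coupling that are the paper's actual contribution:

```python
import numpy as np
from scipy.optimize import nnls

def reconstruction_label_fusion(target_patch, atlas_patches, atlas_labels):
    """Express the target patch as a non-negative combination of atlas
    patches and reuse the weights as a weighted label vote.
    atlas_patches: (n_atlas, h, w); atlas_labels: (n_atlas,) floats."""
    A = atlas_patches.reshape(len(atlas_patches), -1).T  # pixels x atlases
    w, _ = nnls(A, target_patch.ravel())
    if w.sum() == 0:
        return float(atlas_labels.mean())
    return float(w @ atlas_labels) / w.sum()
```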
A neural network based reputation bootstrapping approach for service selection
NASA Astrophysics Data System (ADS)
Wu, Quanwang; Zhu, Qingsheng; Li, Peng
2015-10-01
With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One limitation of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviour exists. Most current bootstrapping approaches merely assign default reputation values to newcomers; however, such methods favour either newcomers or existing services. In this paper, we present a novel reputation bootstrapping approach, where correlations between the features and performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services published previously by the same provider are also incorporated for reputation bootstrapping if available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Empirical studies of the proposed approach conclude the paper.
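A minimal sketch of the learning step with scikit-learn, using made-up service features and a toy reputation target; the real approach would train on whatever metadata and reputation records the service registry actually holds:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn feature -> reputation correlations on existing services, then use
# the model to assign a tentative reputation to a newcomer. The three
# feature columns (e.g., price, response time, provider age) are
# hypothetical placeholders.
rng = np.random.default_rng(0)
features = rng.random((200, 3))                        # existing services
reputation = 0.5 * features[:, 0] + 0.3 * features[:, 1] + 0.1  # toy target
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(features, reputation)
newcomer = rng.random((1, 3))
print("bootstrap reputation:", ann.predict(newcomer)[0])
```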
Scaling effect on the fracture toughness of bone materials using MMTS criterion.
Akbardoost, Javad; Amirafshari, Reza; Mohsenzade, Omid; Berto, Filippo
2018-05-21
The aim of this study is to present a stress-based approach for investigating the effect of specimen size on the fracture toughness of bone materials. The proposed approach is a modified form of the classical fracture criterion called maximum tangential stress (MTS). The mechanical properties of bone differ in the longitudinal and transverse directions, and hence the tangential stress component in the proposed approach should be determined in an orthotropic medium. Since previous studies obtained only the singular terms of the series expansions, the tangential stress is computed here from finite element analysis. In this study, the critical distance is also assumed to be size dependent, and a semi-empirical formulation is used to describe the size dependency of the critical distance. By comparing the results predicted by the proposed approach with those reported in previous studies, it is shown that the proposed approach can predict the fracture resistance of cracked bone while taking into account the effect of specimen size. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ideal solar cell equation in the presence of photon recycling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, Dongchen, E-mail: d.lan@unsw.edu.au; Green, Martin A., E-mail: m.green@unsw.edu.au
Previous derivations of the ideal solar cell equation based on Shockley's p-n junction diode theory implicitly assume negligible effects of photon recycling. This paper derives the equation in the presence of photon recycling that modifies the values of dark saturation and light-generated currents, using an approach applicable to arbitrary three-dimensional geometries with arbitrary doping profile and variable band gap. The work also corrects an error in previous work and proves the validity of the reciprocity theorem for charge collection in such a more general case with the previously neglected junction depletion region included.
A Visual Editor in Java for View
NASA Technical Reports Server (NTRS)
Stansifer, Ryan
2000-01-01
In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously, a special-purpose program had been written to display data on emulations of character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues the work we had done previously. Previously we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on newly released Java 2 graphical user interface (GUI) libraries. The result has been a visually more appealing appearance and a more robust application.
Leung, Chung-Chu
2006-03-01
Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative distribution function (CDF) and its improvements, which are based on gray-level transformations associated with the pixel histogram, perform well under uniform contrast/brightness differences. However, for radiographs with nonuniform contrast/brightness, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on the generalized fuzzy operator (GFO) with the least squares method. The results show that 50% of the contrast/brightness errors can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with the CDF is presented, and the modified GFO method produces better contrast normalization results than the CDF approach.
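For contrast, here is a minimal sketch of the baseline CDF method the paper improves upon: histogram matching that maps the gray levels of one radiograph so its CDF matches the reference. Arrays and values are hypothetical.

```python
# CDF-based contrast matching (histogram matching) between two images.
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source gray levels so the source CDF matches the reference CDF."""
    s_values, s_counts = np.unique(source, return_counts=True)
    r_values, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source level, find the reference level with the closest CDF value.
    mapped = np.interp(s_cdf, r_cdf, r_values)
    lookup = dict(zip(s_values, mapped))
    return np.vectorize(lookup.get)(source)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64))
src = np.clip(rng.integers(0, 256, (64, 64)) * 0.7 + 30, 0, 255).astype(int)
print(match_histogram(src, ref).mean(), ref.mean())   # means now agree closely
```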
Pilling, Valerie K; Brannon, Laura A
2007-01-01
Health communication appeals were utilized through a Web site simulation to evaluate the potential effectiveness of 3 intervention approaches to promote responsible drinking among college students. Within the Web site simulation, participants were exposed to a persuasive message designed to represent either the generalized social norms advertising approach (based on others' behavior), the personalized behavioral feedback approach (tailored to the individual's behavior), or the schema-based approach (tailored to the individual's self-schema, or personality). A control group was exposed to a message that was designed to be neutral (it was designed to discourage heavy drinking, but it did not represent any of the previously mentioned approaches). It was hypothesized that the more personalized the message was to the individual, the more favorable college students' attitudes would be toward the responsible drinking message. Participants receiving the more personalized messages did report more favorable attitudes toward the responsible drinking message.
Tensor scale-based fuzzy connectedness image segmentation
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Udupa, Jayaram K.
2003-05-01
Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for the assessment of marine eutrophication, prediction of harmful algal blooms, and environmental protection. Previous studies have developed many numerical modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have often been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate water quality. To evaluate validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, as widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, this is the first attempt to apply Mahalanobis distance to coastal water quality assessment.
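A minimal sketch of the core computation, assuming a matrix of (synthetic) water quality measurements with correlated variables:

```python
# Hierarchical clustering with Mahalanobis distance, which accounts for
# correlations between water quality variables; data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0, 0],
                            [[1, .8, .2], [.8, 1, .3], [.2, .3, 1]], 100)

VI = np.linalg.inv(np.cov(X, rowvar=False))        # inverse covariance matrix
D = pdist(X, metric="mahalanobis", VI=VI)          # pairwise Mahalanobis distances
Z = linkage(D, method="average")                   # hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 groups
print(np.bincount(labels)[1:])                     # group sizes
```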
Strain-based diffusion solver for realistic representation of diffusion front in physical reactions
2017-01-01
When simulating fluids, such as water or fire, interacting with solids, it is challenging to represent the details of the diffusion front in physical reactions. Previous approaches commonly use isotropic or anisotropic diffusion to model the transport of a quantity through a medium or along an interface. We have identified unrealistic monotonous patterns with previous approaches and therefore propose to extend these approaches by integrating the deformation of the material with the diffusion process. Specifically, stretching deformation, represented by strain, is incorporated in a divergence-constrained diffusion model. A novel diffusion model is introduced to increase the global rate at which the solid acquires relevant quantities, such as heat or saturation. This ensures that the equations describing fluid flow are linked to the change of solid geometry and also satisfy the divergence-free condition. Experiments show that our method produces convincing results. PMID:28448591
On the exact solvability of the anisotropic central spin model: An operator approach
NASA Astrophysics Data System (ADS)
Wu, Ning
2018-07-01
Using an operator approach based on a commutator scheme that has previously been applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of the integrability of the model. We outline the basic steps of the operator approach and pedagogically summarize them into two Lemmas and two Constraints. Through a step-by-step construction of the eigenproblem, we show that the condition $g_j'^2 - g_j^2 = c$ naturally arises for the model to be exactly solvable, where $c$ is a constant independent of the bath-spin index $j$, and $\{g_j\}$ and $\{g_j'\}$ are the longitudinal and transverse hyperfine interactions, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with those in the previous literature.
Mwakanyamale, Kisa; Day-Lewis, Frederick D.; Slater, Lee D.
2013-01-01
Fiber-optic distributed temperature sensing (FO-DTS) is increasingly used to map zones of focused groundwater/surface-water exchange (GWSWE). Previous studies of GWSWE using FO-DTS involved identification of zones of focused GWSWE based on arbitrary cutoffs of FO-DTS time-series statistics (e.g., variance, cross-correlation between temperature and stage, or spectral power). New approaches are needed to extract more quantitative information from large, complex FO-DTS data sets while concurrently providing an assessment of the uncertainty associated with mapping zones of focused GWSWE. Toward this end, we present a strategy combining discriminant analysis (DA) and spectral analysis (SA). We demonstrate the approach using field experimental data from a reach of the Columbia River adjacent to the Hanford 300 Area site. Results of the combined SA/DA approach are shown to be superior to previous results from qualitative interpretation of FO-DTS spectra alone.
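A rough sketch of how such a combined SA/DA pipeline could look (synthetic series; the paper's exact statistics and training labels differ):

```python
# Per-location time-series statistics from FO-DTS temperature records, then
# discriminant analysis to separate exchange zones; data/labels are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_loc, n_t = 60, 512
temps = rng.standard_normal((n_loc, n_t))
temps[:20] += np.sin(2 * np.pi * np.arange(n_t) / 24)   # diurnal signal at GWSWE zones

power = np.abs(np.fft.rfft(temps, axis=1)) ** 2
features = np.column_stack([temps.var(axis=1),            # time-series variance
                            power[:, 1:40].sum(axis=1)])  # low-frequency spectral power
labels = np.r_[np.ones(20), np.zeros(40)]                 # known zones for training

lda = LinearDiscriminantAnalysis().fit(features, labels)
post = lda.predict_proba(features)[:, 1]                  # probabilistic zone map
print((post > 0.5).sum(), "locations flagged as focused exchange")
```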
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations
NASA Astrophysics Data System (ADS)
Flegg, Mark B.; Hellander, Stefan; Erban, Radek
2015-05-01
In this paper, three multiscale methods for coupling mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; and (ii) Δt → 0 and h → 0 such that $\sqrt{\Delta t}/h$ is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.
Automatic construction of a recurrent neural network based classifier for vehicle passage detection
NASA Astrophysics Data System (ADS)
Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur
2017-03-01
Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for the automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detecting a vehicle passage through a checkpoint. As input to the classifier we use multidimensional signals from various sensors installed on the checkpoint. The obtained results demonstrate that the previous approach of handcrafting a classifier consisting of a set of deterministic rules can be successfully replaced by automatically training an RNN on appropriately labelled data.
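A minimal sketch of such an LSTM-based classifier (sizes and training setup are assumptions, not the authors' configuration):

```python
# LSTM binary classifier over multichannel sensor windows; data are synthetic.
import torch
import torch.nn as nn

class PassageClassifier(nn.Module):
    def __init__(self, n_sensors=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, sensors)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)    # one logit per window

model = PassageClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(16, 100, 8)               # 16 windows, 100 steps, 8 sensors
y = torch.randint(0, 2, (16,)).float()    # 1 = vehicle passage
for _ in range(5):                         # a few training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```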
A Non-parametric Approach to Constrain the Transfer Function in Reverberation Mapping
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming
2016-11-01
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by the observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
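A toy sketch of the sum-of-Gaussians response: the line light curve is the continuum convolved with the transfer function Ψ(τ); in the actual method the Gaussian weights are inferred from RM data (all values here are made up):

```python
# Emission-line light curve as continuum convolved with a transfer function
# built from relatively displaced Gaussians; weights/widths are hypothetical.
import numpy as np

tau = np.linspace(0, 50, 501)                       # time-delay grid (days)
centers = np.arange(5, 45, 5.0)                     # displaced Gaussian centers
width = 2.5
weights = np.exp(-0.5 * ((centers - 20) / 8) ** 2)  # stand-in fitted weights

psi = sum(w * np.exp(-0.5 * ((tau - c) / width) ** 2)
          for w, c in zip(weights, centers))
psi /= psi.sum() * (tau[1] - tau[0])                # normalize integral to 1

t = np.linspace(0, 200, 2001)
continuum = 1 + 0.1 * np.sin(2 * np.pi * t / 60)    # toy continuum variation
dt = t[1] - t[0]
line = np.convolve(continuum, psi, mode="full")[: t.size] * dt  # blurred echo
```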
In this paper, we describe the limitations of radius of influence (ROI) evaluation for venting design in more detail than has been done previously and propose an alternative method based on specification and attainment of critical pore-gas velocities in contaminated subsurface me...
ERIC Educational Resources Information Center
Brown, Lily A.; Forman, Evan M.; Herbert, James D.; Hoffman, Kimberly L.; Yuen, Erica K.; Goetter, Elizabeth M.
2011-01-01
Many university students suffer from test anxiety that is severe enough to impair performance. Given mixed efficacy results of previous cognitive-behavior therapy (CBT) trials and a theoretically driven rationale, an acceptance-based behavior therapy (ABBT) approach was compared to traditional CBT (i.e., Beckian cognitive therapy; CT) for the…
Su, Xu; Wu, Guili; Li, Lili; Liu, Jianquan
2015-01-01
Background and Aims Accurate identification of species is essential for the majority of biological studies. However, defining species objectively and consistently remains a challenge, especially for plants distributed in remote regions where there is often a lack of sufficient previous specimens. In this study, multiple approaches and lines of evidence were used to determine species boundaries for plants occurring in the Qinghai–Tibet Plateau, using the genus Orinus (Poaceae) as a model system for an integrative approach to delimiting species. Methods A total of 786 individuals from 102 populations of six previously recognized species were collected for niche, morphological and genetic analyses. Three plastid DNA regions (matK, rbcL and trnH-psbA) and one nuclear DNA region [internal transcribed spacer (ITS)] were sequenced. Key Results Whereas six species had been previously recognized, statistical analyses based on character variation, molecular data and niche differentiation identified only two well-delimited clusters, together with a third possibly originating from relatively recent hybridization between, or historical introgression from, the other two. Conclusions Based on a principle of integrative species delimitation to reconcile different sources of data, the results provide compelling evidence that the six previously recognized species of the genus Orinus that were examined should be reduced to two, with new circumscriptions, and a third, identified in this study, should be described as a new species. This empirical study highlights the value of applying genetic differentiation, morphometric statistics and ecological niche modelling in an integrative approach to re-circumscribing species boundaries. The results produce relatively objective, operational and unbiased taxonomic classifications of plants occurring in remote regions. PMID:25987712
Su, Xu; Wu, Guili; Li, Lili; Liu, Jianquan
2015-07-01
Accurate identification of species is essential for the majority of biological studies. However, defining species objectively and consistently remains a challenge, especially for plants distributed in remote regions where there is often a lack of sufficient previous specimens. In this study, multiple approaches and lines of evidence were used to determine species boundaries for plants occurring in the Qinghai-Tibet Plateau, using the genus Orinus (Poaceae) as a model system for an integrative approach to delimiting species. A total of 786 individuals from 102 populations of six previously recognized species were collected for niche, morphological and genetic analyses. Three plastid DNA regions (matK, rbcL and trnH-psbA) and one nuclear DNA region [internal transcribed spacer (ITS)] were sequenced. Whereas six species had been previously recognized, statistical analyses based on character variation, molecular data and niche differentiation identified only two well-delimited clusters, together with a third possibly originating from relatively recent hybridization between, or historical introgression from, the other two. Based on a principle of integrative species delimitation to reconcile different sources of data, the results provide compelling evidence that the six previously recognized species of the genus Orinus that were examined should be reduced to two, with new circumscriptions, and a third, identified in this study, should be described as a new species. This empirical study highlights the value of applying genetic differentiation, morphometric statistics and ecological niche modelling in an integrative approach to re-circumscribing species boundaries. The results produce relatively objective, operational and unbiased taxonomic classifications of plants occurring in remote regions. © The Author 2015. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A new graph-based method for pairwise global network alignment
Klau, Gunnar W
2009-01-01
Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162
McCoy, Allison B.; Wright, Adam; Rogith, Deevakar; Fathiamini, Safa; Ottenbacher, Allison J.; Sittig, Dean F.
2014-01-01
Background Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements. Objective We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review. Approach We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of accurate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing links made by those physicians identified as having appropriate links to a previously manually validated subset of problem-medication pairs. Results Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric that predicted the percentage of appropriate links greater than or equal to 95%. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases. Conclusion A reputation metric may be a valuable measure for identifying high quality clinician-entered, crowdsourced data. PMID:24321170
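A minimal sketch of such a reputation metric (the feature set and synthetic labels here are illustrative assumptions, not the paper's measures):

```python
# Logistic regression predicting P(clinician makes >=95% appropriate links).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 867
X = np.column_stack([rng.poisson(270, n),        # number of links asserted
                     rng.random(n),              # fraction of distinct pairs
                     rng.random(n)])             # hypothetical activity measure
y = (rng.random(n) < 0.15).astype(int)           # 1 = >=95% appropriate (synthetic)

clf = LogisticRegression(max_iter=1000).fit(X, y)
trusted = clf.predict_proba(X)[:, 1] > 0.5       # links from these clinicians
print(trusted.sum(), "clinicians pass the reputation threshold")
```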
The New Tropospheric Product of the International GNSS Service
NASA Technical Reports Server (NTRS)
Byun, Sung H.; Bar-Sever, Yoaz E.; Gendt, Gerd
2005-01-01
We compare this new approach for generating the IGS tropospheric products with the previous approach, which was based on explicit combination of total zenith delay contributions from the IGS ACs. The new approach enables the IGS to rapidly generate highly accurate and highly reliable total zenith delay time series for many hundreds of sites, thus increasing the utility of the products to weather modelers, climatologists, and GPS analysts. In this paper we describe this new method, and discuss issues of accuracy, quality control, utility of the new products and assess its benefits.
Micromirror-based real image laser automotive head-up display
NASA Astrophysics Data System (ADS)
Fan, Chao; He, Siyuan
2017-01-01
This paper reports a micromirror-based real-image laser automotive head-up display (HUD) that overcomes the limitations of previous designs by (1) implementing an advanced display approach able to render sharp corners, whereas previous designs can only display curved lines, thereby improving display fidelity; and (2) optimizing the optical configuration to significantly reduce the HUD module size. The optical design of the HUD is simulated to choose an off-the-shelf concave lens. A vibration test is conducted to verify that the micromirror can survive 5 g. The prototype of the HUD system is fabricated and tested.
NASA Astrophysics Data System (ADS)
Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini
2014-12-01
In professional sporting events, rating competitors before a tournament starts is a well-known approach for distinguishing the favorite teams from the weaker ones. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric we use to evaluate the different rating approaches is the tournament outcome characteristics measure, defined by the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects for the teams. For eight teams participating in a knockout tournament with standard seeding, Perak has the highest probability of winning when the least squares rating approach is used, PKNS has the highest probability of winning under the maximum likelihood strength ratio and the large round-robin simulation approaches, while Perak has the highest probability of winning a tournament under the previous league season approach.
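As an illustration of the first rating option, here is a minimal Massey-style least squares rating computed from match margins (toy data; whether the paper uses exactly this formulation is an assumption):

```python
# Least squares (Massey-style) ratings: solve for r such that r_i - r_j
# approximates the observed score margins.
import numpy as np

games = [(0, 1, 2), (1, 2, 1), (2, 0, -1), (0, 2, 3)]   # (home, away, margin)
n = 3
M = np.zeros((n, n))
b = np.zeros(n)
for i, j, margin in games:
    M[i, i] += 1; M[j, j] += 1
    M[i, j] -= 1; M[j, i] -= 1
    b[i] += margin; b[j] -= margin
M[-1, :] = 1; b[-1] = 0          # replace one row: fix sum of ratings to 0
ratings = np.linalg.solve(M, b)
print(ratings)                    # higher rating = pre-tournament favourite
```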
Davis, Dana; Hawk, Mary
2015-01-01
This study explored trauma center social workers' beliefs regarding four evidence-based interventions for patients presenting with substance abuse issues. Previous research has indicated that health care providers' beliefs have prevented them from implementing non-abstinence-based interventions. Study results indicated that the majority of social workers believed in the 12-step approach and were least comfortable with the harm reduction approach. However, results showed that in some cases social workers may hold negative personal beliefs regarding non-abstinence-based interventions but do not let those beliefs get in the way of utilizing the interventions when they are viewed as appropriate for the client's situation.
Shpynov, S; Pozdnichenko, N; Gumenuk, A
2015-01-01
Genome sequences of 36 Rickettsia and Orientia were analyzed using Formal Order Analysis (FOA). This approach takes into account the arrangement of nucleotides in each sequence. A numerical characteristic, the average distance (remoteness) "g", was used to compare genomes. Our results corroborated the previous separation of three groups within the genus Rickettsia, including the typhus group, the classic spotted fever group, and the ancestral group, and of Orientia as a separate genus. Rickettsia felis URRWXCal2 and R. akari Hartford were not in the same group based on FOA; therefore, the designation of a so-called transitional Rickettsia group could not be confirmed with this approach. Copyright © 2015 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Amaral, J. T.; Becker, V. M.
2018-05-01
We investigate ρ vector meson production in ep collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with that approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies of semi-inclusive processes.
Figure-ground segmentation based on class-independent shape priors
NASA Astrophysics Data System (ADS)
Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu
2018-01-01
We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of the image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge across different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.
Frequency domain phase noise analysis of dual injection-locked optoelectronic oscillators.
Jahanbakht, Sajad
2016-10-01
Dual injection-locked optoelectronic oscillators (DIL-OEOs) have been introduced as a means to achieve very low-noise microwave oscillations while avoiding the large spurious peaks that occur in the phase noise of conventional single-loop OEOs. In these systems, two OEOs are inter-injection locked to each other. The OEO with the longer optical fiber delay line is called the master OEO, and the other is called the slave OEO. Here, a frequency domain approach, based on the conversion matrix method, for simulating the phase noise spectrum of each of the OEOs in a DIL-OEO system is presented. The validity of the new approach is verified by comparing its results with previously published data in the literature. In the new approach, first, in each of the master or slave OEOs, the power spectral densities (PSDs) of two white and 1/f noise sources are optimized such that the resulting simulated phase noise of that OEO in the free-running state matches its measured phase noise. After that, the proposed approach is able to simulate the phase noise PSD of both OEOs in the injection-locked state. Because of the short run-time requirements, especially compared to previously proposed time domain approaches, the new approach is suitable for optimizing the power injection ratios (PIRs), and potentially other circuit parameters, in order to achieve good phase noise performance in each of the OEOs. Through various numerical simulations, the optimum PIRs for achieving good phase noise performance are presented and discussed; they are in agreement with previously published results. This further verifies the applicability of the new approach. Moreover, some other interesting results regarding the spur levels are also presented.
Morbidity of Early Spine Surgery in the Multiply Injured Patient
2014-07-31
("stable") based on the following clinical criteria, which were adopted from previously described criteria to indicate a damage control approach to... obtained from primary source documents. Presence of any intracranial injury or a chest injury was identified by review of the trauma computed tomography... approach to their injuries [7, 19-21]. "Damage control surgery" was first adopted by trauma surgeons to describe initial control of penetrating
A new approach to children's footwear based on foot type classification.
Mauch, M; Grau, S; Krauss, I; Maiwald, C; Horstmann, T
2009-08-01
Current shoe designs do not allow for the comprehensive 3-D foot shape, which means they are unable to reproduce the wide variability in foot morphology. Therefore, the purpose of this study was to capture these variations of children's feet by classifying them into groups (types) and thereby provide a basis for their implementation in the design of children's shoes. The feet of 2867 German children were measured using a 3-D foot scanner. Cluster analysis was then applied to classify the feet into three different foot types. The characteristics of these foot types differ regarding their volume and forefoot shape both within and between shoe sizes. This new approach is in clear contrast to previous systems, since it captures the variability of foot morphology in a more comprehensive way by using a foot typing system and therefore paves the way for the unimpaired development of children's feet. Previous shoe systems do not allow for the wide variations in foot morphology. A new approach was developed regarding different morphological foot types based on 3-D measurements relevant in shoe construction. This can be directly applied to create specific designs for children's shoes.
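A minimal sketch of the clustering step, assuming a table of scanner-derived foot measures (synthetic data; the paper's variables and clustering details are not reproduced here):

```python
# Grouping 3-D foot measurements into three foot types via cluster analysis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# columns: length-normalized ball width, instep height, forefoot volume (made up)
feet = rng.random((2867, 3))

X = StandardScaler().fit_transform(feet)        # put measures on one scale
types = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(types))                       # sizes of the three foot types
```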
Discriminative Prediction of A-To-I RNA Editing Events from DNA Sequence
Sun, Jiangming; Singh, Pratibha; Bagge, Annika; Valtat, Bérengère; Vikman, Petter; Spégel, Peter; Mulder, Hindrik
2016-01-01
RNA editing is a post-transcriptional alteration of RNA sequences that, via insertions, deletions or base substitutions, can affect protein structure as well as RNA and protein expression. Recently, it has been suggested that RNA editing may be more frequent than previously thought. A great impediment, however, to a deeper understanding of this process is the paramount sequencing effort that needs to be undertaken to identify RNA editing events. Here, we describe an in silico approach, based on machine learning, that ameliorates this problem. Using 41-nucleotide-long DNA sequences, we show that novel A-to-I RNA editing events can be predicted from known A-to-I RNA editing events intra- and interspecies. The validity of the proposed method was verified in an independent experimental dataset. Using our approach, 203,202 putative A-to-I RNA editing events were predicted in the whole human genome. Out of these, 9% were previously reported. The remaining sites require further validation, e.g., by targeted deep sequencing. In conclusion, the approach described here is a useful tool to identify potential A-to-I RNA editing events without the requirement of extensive RNA sequencing. PMID:27764195
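A minimal sketch of the encoding-plus-classifier idea (the model choice and data are assumptions, not the authors' pipeline):

```python
# One-hot encode 41-nt DNA windows and train a classifier on known edit sites.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m.ravel()                       # 41 * 4 = 164 features

rng = np.random.default_rng(6)
seqs = ["".join(rng.choice(list(BASES), 41)) for _ in range(500)]
y = rng.integers(0, 2, 500)                # 1 = known A-to-I site (synthetic)

X = np.array([one_hot(s) for s in seqs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])      # editing probabilities for new windows
```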
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
The Vector-Ballot Approach for Online Voting Procedures
NASA Astrophysics Data System (ADS)
Kiayias, Aggelos; Yung, Moti
Looking at current cryptographic-based e-voting protocols, one can distinguish three basic design paradigms (or approaches): (a) Mix-Networks based, (b) Homomorphic Encryption based, and (c) Blind Signatures based. Each of the three possesses different advantages and disadvantages w.r.t. the basic properties of (i) efficient tallying, (ii) universal verifiability, and (iii) allowing write-in ballot capability (in addition to predetermined candidates). In fact, none of the approaches results in a scheme that simultaneously achieves all three. This is unfortunate, since the three basic properties are crucial for efficiency, integrity and versatility (flexibility), respectively. Further, one can argue that a serious business offering of voting technology should offer a flexible technology that achieves various election goals with a single user interface. This motivates our goal, which is to suggest a new "vector-ballot" based approach for secret-ballot e-voting that is based on three new notions: Provably Consistent Vector Ballot Encodings, Shrink-and-Mix Networks and Punch-Hole-Vector-Ballots. At the heart of our approach is the combination of mix networks and homomorphic encryption under a single user interface; given this, it is rather surprising that it achieves much more than any of the previous approaches for e-voting achieved in terms of the basic properties. Our approach is presented in two generic designs called "homomorphic vector-ballots with write-in votes" and "multi-candidate punch-hole vector-ballots"; both of our designs can be instantiated over any homomorphic encryption function.
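To illustrate the homomorphic-encryption ingredient of such schemes, here is a toy additively homomorphic tally (textbook Paillier with insecure toy primes; real ballots need far larger keys plus zero-knowledge validity proofs, which are omitted here):

```python
# Toy Paillier tally: multiplying ciphertexts adds the underlying votes.
import math, random

p, q = 293, 433                        # toy primes; real systems use ~2048-bit keys
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)
g = n + 1

def enc(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:         # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # decryption constant

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

votes = [1, 0, 1, 1, 0]                # 1 = yes; each ballot encrypted separately
tally = 1
for v in votes:
    tally = (tally * enc(v)) % n2      # homomorphic addition of plaintexts
print(dec(tally))                      # -> 3, without decrypting any single ballot
```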
Matrix factorization-based data fusion for gene function prediction in baker's yeast and slime mold.
Zitnik, Marinka; Zupan, Blaž
2014-01-01
The development of effective methods for the characterization of gene functions that are able to combine diverse data sources in a sound and easily-extendible way is an important goal in computational biology. We have previously developed a general matrix factorization-based data fusion approach for gene function prediction. In this manuscript, we show that this data fusion approach can be applied to gene function prediction and that it can fuse various heterogeneous data sources, such as gene expression profiles, known protein annotations, interaction and literature data. The fusion is achieved by simultaneous matrix tri-factorization that shares matrix factors between sources. We demonstrate the effectiveness of the approach by evaluating its performance on predicting ontological annotations in slime mold D. discoideum and on recognizing proteins of baker's yeast S. cerevisiae that participate in the ribosome or are located in the cell membrane. Our approach achieves predictive performance comparable to that of the state-of-the-art kernel-based data fusion, but requires fewer data preprocessing steps.
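A toy sketch of the tri-factorization underlying the fusion, here solved by alternating least squares on a single relation matrix; in the full method several relation matrices are factorized simultaneously and share the G factors. Sizes, ranks, and data are arbitrary assumptions.

```python
# Approximate a relation matrix R ~ G1 @ S @ G2.T; each block update below is
# an exact least squares solve, so the reconstruction error is non-increasing.
import numpy as np

rng = np.random.default_rng(7)
R = rng.random((40, 30))                        # e.g., gene-by-annotation relations
G1, G2 = rng.random((40, 5)), rng.random((30, 4))

for _ in range(50):
    S = np.linalg.pinv(G1) @ R @ np.linalg.pinv(G2).T
    G1 = R @ np.linalg.pinv(S @ G2.T)
    G2 = (np.linalg.pinv(G1 @ S) @ R).T
print(np.linalg.norm(G1 @ S @ G2.T - R))        # reconstruction error
```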
Increasing Reservation Attendance: Ganado's Approach.
ERIC Educational Resources Information Center
Foster, Carl; And Others
Based on recommendations of a District Attendance Task Force, in 1980 the Ganado School District (a Navajo Reservation District) formulated an Attendance Improvement Plan which decreased the primary school's absentee rate 37% over previous years and which dramatically increased Friday attendance. The primary school targeted "high risk"…
USDA-ARS?s Scientific Manuscript database
This study examined the sterol compositions of 102 dinoflagellates (including several previously unexamined species) using clustering techniques as a means of determining the relatedness of the organisms. In addition, dinoflagellate sterol-based relationships were compared statistically to dinoflag...
ENHANCING TEST SENSITIVITY IN TOXICITY TESTING BY USING A STATISTICAL PERFORMANCE STANDARD
Previous reports have shown that within-test sensitivity can vary markedly among laboratories. Experts have advocated an empirical approach to controlling test variability based on the MSD, control means, and other test acceptability criteria. (The MSD represents the smallest dif...
Quantifying the Sources of Intermodel Spread in Equilibrium Climate Sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caldwell, Peter M.; Zelinka, Mark D.; Taylor, Karl E.
This paper clarifies the causes of intermodel differences in the global-average temperature response to doubled CO2, commonly known as equilibrium climate sensitivity (ECS). The authors begin by noting several issues with the standard approach for decomposing ECS into a sum of forcing and feedback terms. This leads to a derivation of an alternative method based on linearizing the effect of the net feedback. Consistent with previous studies, the new method identifies shortwave cloud feedback as the dominant source of intermodel spread in ECS. This new approach also reveals that covariances between cloud feedback and forcing, between lapse rate and longwave cloud feedbacks, and between albedo and shortwave cloud feedbacks play an important and previously underappreciated role in determining model differences in ECS. Finally, defining feedbacks based on fixed relative rather than specific humidity (as suggested by Held and Shell) reduces the covariances between processes and leads to more straightforward interpretations of results.
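The linearization mentioned above can be sketched generically as follows (with λ the net feedback and F the forcing; this is a sketch of the standard energy-balance form, not the paper's exact notation):

```latex
% With net feedback \lambda < 0 and forcing F, the equilibrium response is
\[
  \mathrm{ECS} = -\frac{F}{\lambda},
  \qquad
  \delta(\mathrm{ECS}) \approx -\frac{\delta F}{\bar{\lambda}}
    + \frac{\bar{F}}{\bar{\lambda}^{2}}\,\delta\lambda,
  \qquad
  \lambda = \sum_i \lambda_i ,
\]
% so the intermodel variance of ECS contains covariance terms between \delta F
% and the individual feedbacks \delta\lambda_i, as emphasized above.
```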
Gazestani, Vahid H; Salavati, Reza
2015-01-01
Trypanosoma brucei is a vector-borne parasite with an intricate life cycle that can cause serious diseases in humans and animals. This pathogen relies on fine regulation of gene expression to respond and adapt to variable environments, with implications for transmission and infectivity. However, the regulatory elements involved and their mechanisms of action are largely unknown. Here, benefiting from a new graph-based approach for finding functional regulatory elements in RNA (GRAFFER), we have predicted 88 new RNA regulatory elements that are potentially involved in the gene regulatory network of T. brucei. We show that many of these newly predicted elements are responsive to both transcriptomic and proteomic changes during the life cycle of the parasite. Moreover, we found that 11 of the predicted elements strikingly resemble previously identified regulatory elements of the parasite. Additionally, comparison with previously predicted motifs for T. brucei suggested the superior performance of our approach given the currently limited knowledge of regulatory elements in T. brucei.
Quantifying the Sources of Intermodel Spread in Equilibrium Climate Sensitivity
Caldwell, Peter M.; Zelinka, Mark D.; Taylor, Karl E.; ...
2016-01-07
This paper clarifies the causes of intermodel differences in the global-average temperature response to doubled CO2, commonly known as equilibrium climate sensitivity (ECS). The authors begin by noting several issues with the standard approach for decomposing ECS into a sum of forcing and feedback terms. This leads to a derivation of an alternative method based on linearizing the effect of the net feedback. Consistent with previous studies, the new method identifies shortwave cloud feedback as the dominant source of intermodel spread in ECS. This new approach also reveals that covariances between cloud feedback and forcing, between lapse rate and longwave cloud feedbacks, and between albedo and shortwave cloud feedbacks play an important and previously underappreciated role in determining model differences in ECS. Finally, defining feedbacks based on fixed relative rather than specific humidity (as suggested by Held and Shell) reduces the covariances between processes and leads to more straightforward interpretations of results.
NASA Astrophysics Data System (ADS)
Moshtaghi, Mehrdad; Adla, Soham; Pande, Saket; Disse, Markus; Savenije, Hubert
2017-04-01
The concept of sustainability is central to smallholder agriculture, as subsistence farming is constantly impacted by livelihood insecurity and is constrained by access to capital, water technology and alternative employment opportunities. This study compares two approaches that aim at quantifying smallholder sustainability but differ in their underlying principles, methodologies for assessment and reporting, and applications. Yield-index-based insurance can protect smallholder agriculture and support its economic sustainability, because smallholder income depends on selling crops and this insurance scheme is based on crop yields. In this research, the insurance trigger is set on the basis of yields in previous years: crop yields are calculated every year through socio-hydrological modeling, and the smallholder receives an indemnity when the crop yield falls below the average of the previous five years (a crop failure). The FAO Sustainability Assessment of Food and Agriculture (SAFA) is an inclusive and comprehensive framework for sustainability assessment in the food and agricultural sector. It follows the UN definition of the four dimensions of sustainability (good governance, environmental integrity, economic resilience and social well-being) and includes 21 themes and 58 sub-themes with a multi-indicator approach. The direct sustainability corresponding to the FAO SAFA economic resilience dimension is compared with the indirect notion of sustainability derived from the yield-based index insurance. A semi-synthetic comparison is conducted to understand the differences in the underlying principles, methodologies and applications of the two approaches. Both approaches are applied to data from smallholder regions of Marathwada in Maharashtra (India), which experienced a severe rise in farmer suicides in the 2000s, attributed to a combination of socio-hydrological factors.
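A minimal sketch of the trigger rule described above (the shortfall-proportional payout is an added assumption; the abstract only specifies when the indemnity is triggered):

```python
# Yield-index trigger: indemnity when the current yield falls below the mean
# of the previous five years. Yield series is hypothetical (t/ha).
yields = [2.1, 2.3, 1.9, 2.2, 2.0, 1.4, 2.1]

for year in range(5, len(yields)):
    trigger = sum(yields[year - 5:year]) / 5.0    # mean of previous five years
    payout = max(0.0, trigger - yields[year])     # assumed proportional indemnity
    print(year, round(trigger, 2), round(payout, 2))
```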
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and a likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in the previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the data set uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model of the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best-fit reduced χ² of 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
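A generic sketch of the Gaussian band-power likelihood such analyses employ (the paper's covariance construction is more detailed):

```latex
% Generic band-power likelihood form (a sketch, not CAM-SPEC's exact recipe):
\[
  -2 \ln \mathcal{L}
  = \sum_{bb'} \left[ \hat{C}_b - C_b(\theta) \right]
    \mathsf{M}^{-1}_{bb'}
    \left[ \hat{C}_{b'} - C_{b'}(\theta) \right],
\]
% where \hat{C}_b collects measured auto- and cross-frequency band powers,
% C_b(\theta) is the CIB-plus-foreground model, and \mathsf{M} their covariance.
```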
Density matters: Review of approaches to setting organism-based ballast water discharge standards
Lee II; Frazier; Ruiz
2010-01-01
As part of their effort to develop national ballast water discharge standards under NPDES permitting, the Office of Water requested that WED scientists identify and review existing approaches to generating organism-based discharge standards for ballast water. Six potential approaches were identified, and the utility and uncertainties of each approach were evaluated. During the process of reviewing the existing approaches, the WED scientists, in conjunction with scientists at the USGS and the Smithsonian Institution, developed a new approach (per capita invasion probability, or "PCIP") that addresses many of the limitations of the previous methodologies. The PCIP approach allows risk managers to generate quantitative discharge standards using historical invasion rates, ballast water discharge volumes, and ballast water organism concentrations. The statistical power of sampling ballast water, both for the validation of ballast water treatment systems and for ship-board compliance monitoring, is limited with the existing methods, though it should be possible to obtain sufficient samples during treatment validation. The report will go to a National Academy of Sciences expert panel that will use it in their evaluation of approaches to developing ballast water discharge standards for the Office of Water.
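A minimal numerical sketch of the PCIP logic, with all numbers hypothetical: infer risk per discharged organism from history, then back out the organism concentration consistent with a target invasion rate.

```python
# Per capita invasion probability (PCIP) and an implied discharge standard.
invasions_per_year = 0.5           # historical invasion rate for a port
discharge_m3_per_year = 2.0e7      # annual ballast discharge volume
organisms_per_m3 = 1.0e4           # historical organism concentration

pcip = invasions_per_year / (discharge_m3_per_year * organisms_per_m3)

target_invasions_per_year = 0.01   # management-chosen acceptable risk
allowed = target_invasions_per_year / (pcip * discharge_m3_per_year)
print(f"PCIP = {pcip:.2e}; standard ~ {allowed:.1f} organisms/m^3")
```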
NASA Astrophysics Data System (ADS)
Asaithambi, Sasikumar; Rajappa, Muthaiah
2018-05-01
In this paper, an automatic design method based on a swarm intelligence approach for CMOS analog integrated circuit (IC) design is presented. The hybrid meta-heuristics optimization technique, namely, the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and the comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salp. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for an automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated through a simulation program for integrated circuit emphasis simulator. To investigate the efficiency of the proposed approach, comparisons have been carried out with other simulation-based circuit design methods. The performances of hybrid SSA based CMOS analog IC designs are better than the previously reported studies.
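A generic sketch of the SSA update rules applied to a toy sizing objective (the circuit-specific cost function and bounds are stand-ins; the update equations follow the commonly cited SSA formulation):

```python
# Salp swarm algorithm minimizing a toy objective over bounded design variables.
import numpy as np

def objective(x):                       # hypothetical cost: deviation from spec
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(9)
dim, n_salps, iters = 4, 20, 200
lb, ub = 0.0, 1.0
X = rng.uniform(lb, ub, (n_salps, dim))
food = X[np.argmin([objective(x) for x in X])].copy()   # best solution so far

for l in range(1, iters + 1):
    c1 = 2 * np.exp(-(4 * l / iters) ** 2)   # shrinks: exploration -> exploitation
    for i in range(n_salps):
        if i == 0:                            # leader moves around the food source
            c2, c3 = rng.random(dim), rng.random(dim)
            step = c1 * ((ub - lb) * c2 + lb)
            X[i] = np.where(c3 < 0.5, food + step, food - step)
        else:                                 # followers chain behind the leader
            X[i] = (X[i] + X[i - 1]) / 2
    X = np.clip(X, lb, ub)
    best = X[np.argmin([objective(x) for x in X])]
    if objective(best) < objective(food):
        food = best.copy()
print(food, objective(food))
```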
NASA Astrophysics Data System (ADS)
Sabol, Bruce M.
2005-09-01
There has been a longstanding need for an objective and cost-effective technique to detect, characterize, and quantify submersed aquatic vegetation at spatial scales between direct physical sampling and remote aerial-based imaging. Acoustic-based approaches for doing so are reviewed, and an explicit approach using a narrow, single-beam echosounder is described in detail. This heuristic algorithm is based on the spatial distribution of a thresholded signal generated from a high-frequency, narrow-beam echosounder operated in a vertical orientation from a survey boat. The physical basis, rationale, and implementation of this algorithm are described, and data documenting performance are presented. Using this technique, it is possible to generate orders of magnitude more data than would be available using previous techniques with a comparable level of effort. Thus, new analysis and interpretation approaches are called for that can make full use of these data. Several analysis examples are shown from environmental effects studies. Current operational window and performance limitations are identified, and thoughts on potential processing approaches to improve performance are discussed.
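A minimal sketch of the thresholding idea on synthetic echo envelopes (signal model and threshold are illustrative assumptions, not the published algorithm's parameters):

```python
# Threshold each vertical echo; the deepest strong return is the bottom, and
# thresholded returns above it are treated as plant signal.
import numpy as np

rng = np.random.default_rng(8)
n_pings, n_samples = 200, 400            # samples ~ depth bins per ping
echo = 0.05 * rng.random((n_pings, n_samples))
echo[:, 300] = 1.0                       # strong bottom return at bin 300
echo[50:120, 260:300] += 0.3             # weaker plant echoes above the bottom

threshold = 0.15
presence, heights = [], []
for ping in echo:
    above = np.flatnonzero(ping > threshold)
    bottom = above[-1]                         # deepest strong return = bottom
    plant = above[above < bottom - 5]          # returns clearly above the bottom
    presence.append(plant.size > 0)
    if plant.size:
        heights.append(bottom - plant.min())   # canopy height in depth bins
print(f"cover: {100 * np.mean(presence):.0f}%, "
      f"mean height: {np.mean(heights):.1f} bins")
```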
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
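A minimal sketch of the Kalman filter backbone of such trackers (a constant-velocity model; detection and the two-step association are assumed already done, and all values are synthetic):

```python
# Constant-velocity Kalman filter: predict, then update with each detection.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1.]])   # state transition for [x, y, vx, vy]
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])   # we observe position only
Q = 0.01 * np.eye(4)                          # process noise
R = 0.50 * np.eye(2)                          # detection (measurement) noise

x = np.array([0, 0, 1, 0.5])                  # initial state
P = np.eye(4)
for z in np.array([[1.1, 0.4], [2.0, 1.1], [2.9, 1.6]]):   # associated detections
    x, P = F @ x, F @ P @ F.T + Q                           # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
    x = x + K @ (z - H @ x)                                 # update
    P = (np.eye(4) - K @ H) @ P
print(x[:2])                                   # refined particle position
```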
Progressive retry for software error recovery in distributed systems
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.
1993-01-01
In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
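A minimal sketch of the progressive retry idea: each failed attempt widens the rollback and adds nondeterminism via message reordering. The order-dependent "bug" and helpers here are hypothetical stand-ins for a real message-processing system.

```python
# Replay logged messages after rollback; later retries shuffle the delivery
# order to bypass an order-dependent software error.
import random
random.seed(1)

def process(order) -> bool:
    return order[0] != "poison"        # stand-in for an order-dependent bug

log = ["poison", "m1", "m2"]           # messages since the last checkpoint
attempt = 1
order = log[:]                         # first retry: deterministic replay
while not process(order):
    attempt += 1
    order = log[:]
    random.shuffle(order)              # later retries add nondeterminism
print(f"recovered on retry {attempt}")
```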
ERIC Educational Resources Information Center
Lee, Guemin; Park, In-Yong
2012-01-01
Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…
ERIC Educational Resources Information Center
Wang, Hsiu-Ying; Huang, Iwen; Hwang, Gwo-Jen
2016-01-01
Concept mapping has been widely used in various fields to facilitate students' organization of knowledge. Previous studies have, however, pointed out that it is difficult for students to construct concept maps from the abundant searched data without appropriate scaffolding. Thus, researchers have suggested that students could produce high quality…
ERIC Educational Resources Information Center
Krain, Matthew
2016-01-01
This study revisits case learning's effects on student engagement and assesses student learning as a result of the use of case studies and problem-based learning. The author replicates a previous study that used indirect assessment techniques to get at case learning's impact, and then extends the analysis using a pre- and post-test experimental…
ERIC Educational Resources Information Center
Ashley, Martin
2008-01-01
The author has previously argued against "early closure"--the tendency to close down children's curiosity through an over-zealous approach to issues-based education. Indoctrination might be a result but "burn-out," a potentially permanent attitude change that sets in before puberty, is more likely. This article is based on the…
Transorbital and transnasal endoscopic repair of a meningoencephalocele.
Schaberg, Madeleine; Murchison, Ann P; Rosen, Marc R; Evans, James J; Bilyk, Jurij R
2011-10-01
A 71-year-old female with a history of thyroid eye disease (TED) presented for evaluation of a skull base mass noted on neuroimaging. She had previously undergone bilateral orbital decompressions and strabismus surgery and had no neurologic symptoms. Successful resection of the meningoencephalocele and repair of the skull base defect were performed through a combined transnasal endoscopic and transorbital approach, obviating the need for craniotomy.
Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases.
Neal, Maxwell L; Carlson, Brian E; Thompson, Christopher T; James, Ryan C; Kim, Karam G; Tran, Kenneth; Crampin, Edmund J; Cook, Daniel L; Gennari, John H
2015-01-01
Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen's semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the "Pandit-Hinch-Niederer" (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach.
Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases
Neal, Maxwell L.; Carlson, Brian E.; Thompson, Christopher T.; James, Ryan C.; Kim, Karam G.; Tran, Kenneth; Crampin, Edmund J.; Cook, Daniel L.; Gennari, John H.
2015-01-01
Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach. PMID:26716837
Formability prediction for AHSS materials using damage models
NASA Astrophysics Data System (ADS)
Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara
2017-05-01
Advanced high-strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, stiffness, and high work hardening at early stages of plastic deformation, pose many challenges for the sheet metal industry, chiefly their low formability and their different behaviour compared to traditional steels; this complicates both obtaining a successful component and using numerical simulation to predict material behaviour and fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which rely on stress states to predict failure during the forming process and can be classified as empirical, physics-based, or phenomenological. In the present paper a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of that classification. Finite element analysis is used with these damage mechanics approaches, and the results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability of these approaches to predict damage and formability limits.
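For orientation, the Johnson-Cook damage model mentioned above accumulates damage against a triaxiality- and rate-dependent fracture strain. The standard form is quoted below; the calibrated constants D1 through D5 for the AHSS grades studied are not given in the abstract.

```latex
\varepsilon_f \;=\; \bigl[D_1 + D_2\, e^{\,D_3 \eta}\bigr]
\bigl[1 + D_4 \ln \dot{\varepsilon}^{*}\bigr]
\bigl[1 + D_5\, T^{*}\bigr],
\qquad
D \;=\; \sum \frac{\Delta\varepsilon_p}{\varepsilon_f},
```

where eta is the stress triaxiality, epsilon-dot* the normalized plastic strain rate, T* the homologous temperature, and failure is predicted when the accumulated damage D reaches 1.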
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming, E-mail: liyanrong@mail.ihep.ac.cn
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
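A minimal sketch of the core modeling idea: build a transfer function from displaced Gaussian components and convolve it with a continuum light curve to predict the line response. The function names and the toy continuum are illustrative assumptions; the paper's damped-random-walk continuum model and fitting machinery are omitted.

```python
import numpy as np

def gaussian_sum_transfer(tau, centers, widths, weights):
    """Transfer function as a weighted sum of displaced Gaussians
    (the non-parametric form described in the abstract)."""
    psi = np.zeros_like(tau)
    for c, w, a in zip(centers, widths, weights):
        psi += a * np.exp(-0.5 * ((tau - c) / w) ** 2)
    return psi / np.trapz(psi, tau)   # normalize the response

def line_light_curve(t, continuum, tau, psi):
    """Line light curve as the convolution of the continuum with the
    transfer function: L(t) = integral of psi(tau) * C(t - tau) dtau."""
    dt = tau[1] - tau[0]
    c_interp = lambda x: np.interp(x, t, continuum)
    return np.array([np.sum(psi * c_interp(ti - tau)) * dt for ti in t])

# toy usage: three displaced Gaussian response components
t = np.linspace(0, 200, 400)
continuum = 1.0 + 0.3 * np.sin(2 * np.pi * t / 50.0)  # stand-in for a DRW realization
tau = np.linspace(0, 40, 200)
psi = gaussian_sum_transfer(tau, centers=[5, 15, 25], widths=[2, 3, 4], weights=[1, 2, 1])
line = line_light_curve(t, continuum, tau, psi)
```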
Pre-configured polyhedron based protection against multi-link failures in optical mesh networks.
Huang, Shanguo; Guo, Bingli; Li, Xin; Zhang, Jie; Zhao, Yongli; Gu, Wanyi
2014-02-10
This paper focuses on protection against random multi-link failures in optical mesh networks, instead of the single, dual, or sequential failures addressed in previous studies. Spare resource efficiency and failure robustness are major concerns in designing link protection strategies, and a k-regular, k-edge-connected structure is proved to be one of the optimal solutions for a link protection network. Based on this, a novel pre-configured polyhedron based protection structure is proposed; it provides protection against both simultaneous and sequential random link failures with improved spare resource efficiency. Its performance is evaluated in terms of spare resource consumption, recovery rate, and average recovery path length, and compared with ring-based and subgraph protection under probabilistic link failure scenarios. Results show that the proposed link protection approach outperforms previous works.
Command Center Training Tool (C2T2)
NASA Technical Reports Server (NTRS)
Jones, Phillip; Drucker, Nich; Mathews, Reejo; Stanton, Laura; Merkle, Ed
2012-01-01
This abstract presents the training approach taken to create a management-centered, experiential learning solution for the Virginia Port Authority's Port Command Center. The resultant tool, called the Command Center Training Tool (C2T2), follows a holistic approach integrated across the training management cycle and within a single environment. The approach allows a single training manager to progress from training design through execution and AAR. The approach starts with modeling the training organization, identifying the organizational elements and their individual and collective performance requirements, including organizational-specific performance scoring ontologies. Next, the developer specifies the conditions, problems, and constructs that compose exercises and drive experiential learning. These conditions are defined by incidents, which denote a single, multi-media datum, and scenarios, which are stories told by incidents. To these layered, modular components, previously developed meta-data is attached, including associated performance requirements. The components are then stored in a searchable library. An event developer can create a training event by searching the library based on metadata and then selecting and loading the resultant modular pieces. This loading process brings into the training event all the previously associated task and teamwork material as well as AAR preparation materials. The approach includes tools within an integrated management environment that places these materials at the fingertips of the event facilitator such that, in real time, the facilitator can track training audience performance and modify the training event accordingly. The approach also supports the concentrated knowledge management requirements for rapid preparation of an extensive AAR. This approach supports the integrated training cycle and allows a management-based perspective and advanced tools, through which a complex, thorough training event can be developed.
NASA Astrophysics Data System (ADS)
Qi, Wei
2017-11-01
Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation is based on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. This approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on the expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost, and design life periods. Two basins, with 54-year and 104-year flood records respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes of expected total cost and extreme floods in different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect increases in cost to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore only one design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' analysis selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
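A minimal sketch of the kind of calculation involved, assuming a GEV flood distribution whose location parameter drifts linearly in time and simple linear cost terms; the paper's actual distribution, trend form, and cost functions are not specified in the abstract, so every parameter below is illustrative.

```python
import numpy as np
from scipy.stats import genextreme

def expected_total_cost(q_design, years, loc0, trend, scale, shape,
                        build_cost, damage_cost):
    """Expected total cost of a design flood over a design life under a
    non-stationary GEV (sketch only)."""
    cost = build_cost(q_design)
    for t in range(years):
        gev = genextreme(shape, loc=loc0 + trend * t, scale=scale)
        p_exceed = gev.sf(q_design)          # annual exceedance probability
        cost += p_exceed * damage_cost(q_design)
    return cost

# toy usage: scan candidate design floods under a rising location parameter
candidates = np.linspace(500, 2000, 50)
costs = [expected_total_cost(q, years=50, loc0=800, trend=2.0, scale=150,
                             shape=-0.1,
                             build_cost=lambda q: 0.05 * q,
                             damage_cost=lambda q: 1000.0)
         for q in candidates]
q_best = candidates[int(np.argmin(costs))]
```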
NASA Astrophysics Data System (ADS)
Upton, Brianna; Evans, John; Morrow, Cherilynn; Thoms, Brian
2009-11-01
Previous studies have shown that many students have misconceptions about basic concepts in physics. Moreover, it has been concluded that one of the challenges lies in the teaching methodology. To address this, Georgia State University has begun teaching studio algebra-based physics. Although many institutions have implemented studio physics, most have done so in calculus-based sequences. The effectiveness of the studio approach in an algebra-based introductory physics course needs further investigation. A 3-semester study assessing the effectiveness of studio physics in an algebra-based physics sequence has been performed. This study compares the results of student pre- and post-tests using the Force Concept Inventory. Using the results from this assessment tool, we will discuss the effectiveness of the studio approach to teaching physics at GSU.
Mapping spatial patterns with morphological image processing
Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham
2006-01-01
We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
ERIC Educational Resources Information Center
Bird, Anne Marie; Ross, Diane
1984-01-01
A brief history of research in sport psychology based on Lander's (1982) analysis is presented. A systematic approach to theory building is offered. Previous methodological inadequacies are identified using examples of observational learning and anxiety. (Author/DF)
2018-04-01
empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano
2009-01-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345
Ozone Production and Control Strategies for Southern Taiwan
NASA Astrophysics Data System (ADS)
Shiu, C.; Liu, S.; Chang, C.; Chen, J.; Chou, C. C.; Lin, C.
2006-12-01
An observation-based modeling (OBM) approach is used to estimate the ozone production efficiency and production rate of O3 (P(O3)) in southern Taiwan. The approach can also provide an indirect estimate of the concentration of OH. Measured concentrations of two aromatic hydrocarbons, ethylbenzene and m,p-xylene, are used to estimate the degree of photochemical processing and the amounts of photochemically consumed NOx and NMHCs. In addition, a one-dimensional (1-D) photochemical model is used for comparison with the OBM results. The average ozone production efficiency during the field campaign in the Kaohsiung-Pingtung area in fall 2003 is found to be about 5, comparable to previous works. The relationship of P(O3) with NOx is examined in detail and compared to previous studies. The derived OH concentrations from this approach are in fair agreement with values calculated from the 1-D photochemical model. The relationship of total oxidants (e.g., O3+NO2) versus initial NOx and NMHCs suggests that reducing NMHCs is more effective in controlling total oxidants than reducing NOx. For O3 control, reducing NMHCs is even more effective than reducing NOx due to the NO titration effect. This observation-based approach provides a good alternative for understanding the production of ozone and formulating ozone control strategies in urban and suburban environments without measurements of peroxy radicals.
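The hydrocarbon-ratio pair used here acts as a photochemical clock. One common form, assuming both species are co-emitted at a fixed ratio and removed primarily by OH, is quoted below for orientation; the abstract does not give the exact formulation used.

```latex
\Delta t \;=\; \frac{1}{[\mathrm{OH}]\,(k_X - k_E)}
\ln\!\left(\frac{([X]/[E])_{0}}{([X]/[E])_{t}}\right),
```

where X is m,p-xylene (the faster-reacting tracer), E is ethylbenzene, and k_X, k_E are their respective OH rate constants.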
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Zhiling; Wei, Wei; Turlapaty, Anish
2012-07-01
At the United States Army's test sites, fired penetrators made of Depleted Uranium (DU) have been buried under ground and become hazardous waste. Previously, we developed techniques for detecting buried radioactive targets. We also developed approaches for locating buried paramagnetic metal objects by utilizing the electromagnetic induction (EMI) sensor data. In this paper, we apply data fusion techniques to combine results from both the radiation detection and the EMI detection, so that we can further distinguish among DU penetrators, DU oxide, and non-DU metal debris. We develop a two-step fusion approach for the task, and test it with survey data collected on simulation targets. In this work, we explored radiation and EMI data fusion for detecting DU, oxides, and non-DU metals. We developed a two-step fusion approach based on majority voting and a set of decision rules. With this approach, we fuse results from radiation detection based on the RX algorithm and EMI detection based on a 3-step analysis. Our fusion approach has been tested successfully with data collected on simulation targets. In the future, we will need to further verify the effectiveness of this fusion approach with field data.
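A toy illustration of rule-based fusion in the spirit of the abstract. The actual voting scheme, decision rules, and target classes of the paper are not public here, so the labels and branching below are illustrative assumptions only.

```python
def fuse_detections(radiation_hit, emi_hit, emi_class):
    """Toy decision-rule fusion of radiation and EMI detections
    (illustrative assumptions, not the paper's rule set)."""
    if radiation_hit and emi_hit:
        # radioactive and metallic: DU penetrator or DU oxide;
        # defer to the EMI classifier to separate the two
        return "DU penetrator" if emi_class == "compact" else "DU oxide"
    if radiation_hit:
        return "DU oxide"               # radioactive, weak EMI response
    if emi_hit:
        return "non-DU metal debris"    # metallic, not radioactive
    return "no target"

print(fuse_detections(True, True, "compact"))
```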
A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks
NASA Astrophysics Data System (ADS)
Mohan, Arvind; Gaitonde, Datta
2017-11-01
Reduced Order Models (ROMs) can be used as surrogates to prohibitively expensive simulations to model flow behavior for long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
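A minimal sketch of the pipeline described: POD of a snapshot matrix via SVD, then an LSTM trained to predict the next vector of POD time coefficients from a window of previous ones. The random snapshots, window length, hidden size, and class name are stand-in assumptions; PyTorch is used here simply as one common LSTM implementation.

```python
import numpy as np
import torch
import torch.nn as nn

# POD via SVD of the mean-subtracted snapshot matrix (space x time)
snapshots = np.random.randn(1024, 500).astype(np.float32)  # stand-in for CFD data
mean = snapshots.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 8                                    # number of retained POD modes
coeffs = (np.diag(S[:r]) @ Vt[:r]).T     # time coefficients, shape (time, r)

# frame as sequence learning: predict the next coefficient vector
# from a window of previous ones (window length is an assumption)
win = 20
X = np.stack([coeffs[i:i + win] for i in range(len(coeffs) - win)])
y = coeffs[win:]

class CoeffLSTM(nn.Module):
    def __init__(self, r, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=r, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, r)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # last hidden state -> next coefficients

model = CoeffLSTM(r)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):                       # training loop kept deliberately short
    pred = model(torch.from_numpy(X))
    loss = loss_fn(pred, torch.from_numpy(y))
    opt.zero_grad(); loss.backward(); opt.step()
```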
Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Zachary; Verduijn, Erik; Wood, Obert R.
2016-04-01
Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.
Link-Based Similarity Measures Using Reachability Vectors
Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin
2014-01-01
We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to all the objects in the given data, and the value of each element denotes the weight for the corresponding object. As for this weight value, we propose to utilize the probability of reaching from the target object to the specific object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
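A minimal sketch of the measure as described: compute each object's reachability vector by Random Walk with Restart, then take the cosine of the two vectors. Function names, the restart probability, and the toy graph are illustrative assumptions.

```python
import numpy as np

def rwr_vector(P, target, restart=0.15, iters=100):
    """Reachability vector via Random Walk with Restart:
    r = (1 - c) * P^T r + c * e_target, iterated to convergence.
    P is the row-stochastic transition matrix of the link graph."""
    n = P.shape[0]
    e = np.zeros(n); e[target] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1.0 - restart) * P.T @ r + restart * e
    return r

def link_similarity(P, a, b):
    """Cosine similarity of the two reachability vectors."""
    ra, rb = rwr_vector(P, a), rwr_vector(P, b)
    return ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb))

# toy 4-node link graph (adjacency rows normalized to transition probabilities)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
P = A / A.sum(axis=1, keepdims=True)
print(link_similarity(P, 0, 1))
```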
Conditioning of FRF measurements for use with frequency based substructuring
NASA Astrophysics Data System (ADS)
Nicgorski, Dana; Avitabile, Peter
2010-02-01
Frequency based substructuring approaches have been used for the generation of system models from component data. While numerical models show successful results, there have been many difficulties with actual measurements in many instances. Previous work identified some of these typical problems, using simulated data to incorporate specific measurement difficulties commonly observed, along with approaches to overcome some of them. This paper presents results using actual measured data for a laboratory structure subjected to both analytical and experimental studies. Various commonly used approaches are shown to illustrate some of the difficulties with measured data. A new approach to better condition the measured functions and purge commonly found measurement contaminants is utilized to provide dramatically improved results. Several cases are explored to show the difficulties commonly observed as well as the improved conditioning of the measured data to obtain acceptable results.
Proteomics approaches advance our understanding of plant self-incompatibility response.
Sankaranarayanan, Subramanian; Jamshed, Muhammad; Samuel, Marcus A
2013-11-01
Self-incompatibility (SI) in plants is a genetic mechanism that prevents self-fertilization and promotes out-crossing needed to maintain genetic diversity. SI has been classified into two broad categories: the gametophytic self-incompatibility (GSI) and the sporophytic self-incompatibility (SSI) based on the genetic mechanisms involved in 'self' pollen rejection. Recent proteomic approaches to identify potential candidates involved in SI have shed light onto a number of previously unidentified mechanisms required for SI response. SI proteome research has progressed from the use of isoelectric focusing in early days to the latest third-generation technique of comparative isobaric tag for relative and absolute quantitation (iTRAQ) used in recent times. We will focus on the proteome-based approaches used to study self-incompatibility (GSI and SSI), recent developments in the field of incompatibility research with emphasis on SSI and future prospects of using proteomic approaches to study self-incompatibility.
Identification of sea ice types in spaceborne synthetic aperture radar data
NASA Technical Reports Server (NTRS)
Kwok, Ronald; Rignot, Eric; Holt, Benjamin; Onstott, R.
1992-01-01
This study presents an approach for identification of sea ice types in spaceborne SAR image data. The unsupervised classification approach involves cluster analysis for segmentation of the image data followed by cluster labeling based on previously defined look-up tables containing the expected backscatter signatures of different ice types measured by a land-based scatterometer. Extensive scatterometer observations and experience accumulated in field campaigns during the last 10 yr were used to construct these look-up tables. The classification approach, its expected performance, the dependence of this performance on radar system performance, and expected ice scattering characteristics are discussed. Results using both aircraft and simulated ERS-1 SAR data are presented and compared to limited field ice property measurements and coincident passive microwave imagery. The importance of an integrated postlaunch program for the validation and improvement of this approach is discussed.
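A minimal sketch of the two-stage scheme described: unsupervised segmentation of the backscatter image, then labeling each cluster against a look-up table of expected signatures. The signature values and the one-dimensional feature are placeholder assumptions, not the paper's scatterometer-derived tables; scikit-learn's KMeans stands in for the paper's cluster analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_ice(backscatter, signatures):
    """Cluster pixels, then label each cluster with the ice type whose
    expected backscatter is closest to the cluster centroid."""
    pixels = backscatter.reshape(-1, 1)
    km = KMeans(n_clusters=len(signatures), n_init=10).fit(pixels)
    types, expected = zip(*signatures.items())
    labels = [types[np.argmin(np.abs(np.array(expected) - c))]
              for c in km.cluster_centers_.ravel()]
    return np.array(labels)[km.labels_].reshape(backscatter.shape)

# hypothetical look-up table: ice type -> expected backscatter (dB)
signatures = {"open water": -20.0, "first-year ice": -12.0, "multiyear ice": -8.0}
ice_map = classify_ice(np.random.uniform(-22, -6, (64, 64)), signatures)
```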
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematic model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous papers.
Self-organizing maps: a versatile tool for the automatic analysis of untargeted imaging datasets.
Franceschi, Pietro; Wehrens, Ron
2014-04-01
MS-based imaging approaches allow for location-specific identification of chemical components in biological samples, opening up possibilities of much more detailed understanding of biological processes and mechanisms. Data analysis, however, is challenging, mainly because of the sheer size of such datasets. This article presents a novel approach based on self-organizing maps, extending previous work in order to be able to handle the large number of variables present in high-resolution mass spectra. The key idea is to generate prototype images, representing spatial distributions of ions, rather than prototypical mass spectra. This allows for a two-stage approach, first generating typical spatial distributions and associated m/z bins, and later analyzing the interesting bins in more detail using accurate masses. The possibilities and advantages of the new approach are illustrated on an in-house dataset of apple slices. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
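A minimal sketch of the key idea, using the third-party minisom package as one common SOM implementation: the SOM is trained on flattened ion images (one vector per m/z bin), so each unit's weight vector is a prototype spatial distribution. Dataset dimensions and grid size are placeholder assumptions.

```python
import numpy as np
from minisom import MiniSom  # third-party SOM implementation

# each row is one ion image flattened to a vector: the SOM is trained on
# spatial distributions rather than on spectra, as the abstract describes
n_bins, h, w = 500, 40, 40           # placeholder dataset dimensions
ion_images = np.random.rand(n_bins, h * w)

som = MiniSom(6, 6, input_len=h * w, sigma=1.0, learning_rate=0.5)
som.random_weights_init(ion_images)
som.train_random(ion_images, 5000)

# prototype images: each SOM unit's weight vector reshaped back to an image
prototypes = som.get_weights().reshape(6, 6, h, w)
# m/z bins mapped to the same unit share a similar spatial distribution
unit_of_bin = [som.winner(img) for img in ion_images]
```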
NASA Technical Reports Server (NTRS)
Stecker, Floyd W.
2012-01-01
We calculate the intensity and photon spectrum of the intergalactic background light (IBL) as a function of redshift using an approach based on observational data obtained in different wavelength bands from local to deep galaxy surveys. Our empirically based approach allows us, for the first time, to obtain a completely model-independent determination of the IBL and to quantify its uncertainties. Using our results on the IBL, we then place upper and lower limits on the opacity of the universe to gamma-rays, independent of previous constraints.
nextPARS: parallel probing of RNA structures in Illumina
Saus, Ester; Willis, Jesse R.; Pryszcz, Leszek P.; Hafez, Ahmed; Llorens, Carlos; Himmelbauer, Heinz
2018-01-01
RNA molecules play important roles in virtually every cellular process. These functions are often mediated through the adoption of specific structures that enable RNAs to interact with other molecules. Thus, determining the secondary structures of RNAs is central to understanding their function and evolution. In recent years several sequencing-based approaches have been developed that allow probing structural features of thousands of RNA molecules present in a sample. Here, we describe nextPARS, a novel Illumina-based implementation of in vitro parallel probing of RNA structures. Our approach achieves comparable accuracy to previous implementations, while enabling higher throughput and sample multiplexing. PMID:29358234
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Spurious symptom reduction in fault monitoring
NASA Technical Reports Server (NTRS)
Shontz, William D.; Records, Roger M.; Choi, Jai J.
1993-01-01
Previous work accomplished on NASA's Faultfinder concept suggested that the concept was jeopardized by spurious symptoms generated in the monitoring phase. The purpose of the present research was to investigate methods of reducing the generation of spurious symptoms during in-flight engine monitoring. Two approaches for reducing spurious symptoms were investigated. A knowledge base of rules was constructed to filter known spurious symptoms and a neural net was developed to improve the expectation values used in the monitoring process. Both approaches were effective in reducing spurious symptoms individually. However, the best results were obtained using a hybrid system combining the neural net capability to improve expectation values with the rule-based logic filter.
A novel load balanced energy conservation approach in WSN using biogeography based optimization
NASA Astrophysics Data System (ADS)
Kaushik, Ajay; Indu, S.; Gupta, Daya
2017-09-01
Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks (WSNs). Balancing the load of the cluster heads is an important factor in the long-run operation of WSNs. In this paper we propose a novel load balancing approach using biogeography based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods. The proposed method shows better performance than all previous works on energy conservation in WSNs.
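A sketch of the generic BBO loop underlying such an approach (migration driven by rank-based immigration and emigration rates, plus mutation). This is BBO itself, not the paper's LB-BBO fitness functions, which are not given here; all parameters are illustrative.

```python
import numpy as np

def bbo_minimize(cost, dim, pop=30, gens=100, p_mut=0.02, lo=0.0, hi=1.0):
    """Generic biogeography-based optimization (minimization sketch)."""
    rng = np.random.default_rng(0)
    habitats = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fitness = np.array([cost(h) for h in habitats])
        habitats = habitats[np.argsort(fitness)]   # best (lowest cost) first
        rank = np.arange(pop)
        lam = rank / (pop - 1)                     # immigration: best imports least
        mu = 1.0 - lam                             # emigration: best exports most
        new = habitats.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:
                    # roulette-wheel selection of an emigrating habitat
                    j = rng.choice(pop, p=mu / mu.sum())
                    new[i, d] = habitats[j, d]
                if rng.random() < p_mut:
                    new[i, d] = rng.uniform(lo, hi)
        habitats = new
    fitness = np.array([cost(h) for h in habitats])
    return habitats[np.argmin(fitness)]

# toy usage: a sphere function stands in for a load-balance cost
best = bbo_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
```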
Alpha Matting with KL-Divergence Based Sparse Sampling.
Karacan, Levent; Erdem, Aykut; Erdem, Erkut
2017-06-22
In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
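A minimal sketch of a KL-divergence dissimilarity between two samples, assuming the feature distributions are modeled as Gaussians fitted to the features gathered around each sample; the paper's exact feature distributions are not specified in the abstract, so the Gaussian model is an assumption.

```python
import numpy as np

def gaussian_kl(x, y, eps=1e-6):
    """Symmetric KL divergence between Gaussians fitted to two sets of
    feature vectors (rows are observations, columns are features)."""
    mu0, mu1 = x.mean(0), y.mean(0)
    k = x.shape[1]
    S0 = np.cov(x, rowvar=False) + eps * np.eye(k)   # regularized covariances
    S1 = np.cov(y, rowvar=False) + eps * np.eye(k)
    def kl(m0, S0, m1, S1):
        S1inv = np.linalg.inv(S1)
        d = m1 - m0
        return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
    return kl(mu0, S0, mu1, S1) + kl(mu1, S1, mu0, S0)

# toy usage: features from the neighborhoods of two candidate samples
fg = np.random.randn(50, 3) + 1.0
bg = np.random.randn(50, 3)
print(gaussian_kl(fg, bg))
```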
Silberstein, Lev; Goncalves, Kevin A; Kharchenko, Peter V; Turcotte, Raphael; Kfoury, Youmna; Mercier, Francois; Baryawno, Ninib; Severe, Nicolas; Bachand, Jacqueline; Spencer, Joel A; Papazian, Ani; Lee, Dongjun; Chitteti, Brahmananda Reddy; Srour, Edward F; Hoggatt, Jonathan; Tate, Tiffany; Lo Celso, Cristina; Ono, Noriaki; Nutt, Stephen; Heino, Jyrki; Sipilä, Kalle; Shioda, Toshihiro; Osawa, Masatake; Lin, Charles P; Hu, Guo-Fu; Scadden, David T
2016-10-06
Physiological stem cell function is regulated by secreted factors produced by niche cells. In this study, we describe an unbiased approach based on the differential single-cell gene expression analysis of mesenchymal osteolineage cells close to, and further removed from, hematopoietic stem/progenitor cells (HSPCs) to identify candidate niche factors. Mesenchymal cells displayed distinct molecular profiles based on their relative location. We functionally examined, among the genes that were preferentially expressed in proximal cells, three secreted or cell-surface molecules not previously connected to HSPC biology-the secreted RNase angiogenin, the cytokine IL18, and the adhesion molecule Embigin-and discovered that all of these factors are HSPC quiescence regulators. Therefore, our proximity-based differential single-cell approach reveals molecular heterogeneity within niche cells and can be used to identify novel extrinsic stem/progenitor cell regulators. Similar approaches could also be applied to other stem cell/niche pairs to advance the understanding of microenvironmental regulation of stem cell function. Copyright © 2016 Elsevier Inc. All rights reserved.
Bengali-English Relevant Cross Lingual Information Access Using Finite Automata
NASA Astrophysics Data System (ADS)
Banerjee, Avishek; Bhattacharyya, Swapan; Hazra, Simanta; Mondal, Shatabdi
2010-10-01
CLIR techniques search unrestricted texts and typically extract terms and relationships from bilingual electronic dictionaries or bilingual text collections, using them to translate query and/or document representations into a compatible set of representations with a common feature set. In this paper, we focus on a dictionary-based approach that uses a bilingual data dictionary in combination with statistics-based methods to avoid the problem of ambiguity; developing the human-computer interface aspects of NLP (natural language processing) is also a goal of this paper. Intelligent web search with a regional language like Bengali depends upon two major aspects: CLIA (cross-language information access) and NLP. In our previous work with IIT Kharagpur, we developed content-based CLIA, where content-based searching is trained on Bengali corpora with the help of a Bengali data dictionary. Here we introduce intelligent search that aims to recognize the intended meaning of a sentence, offering a more realistic approach to human-computer interaction.
NASA Astrophysics Data System (ADS)
Boé, Julien; Terray, Laurent
2014-05-01
Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, a lack of rationale for choosing one particular climate model over others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple models' results taking into account their relative credibility as measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used, through a metric-based approach, to derive a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combining multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvement of metric-based approaches over the MMEM in terms of errors and uncertainties will be quantified.
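A minimal sketch of the perfect-model test described above. The Gaussian weighting kernel and its width sigma are illustrative assumptions, not the paper's prescription; the toy ensemble is synthetic.

```python
import numpy as np

def metric_weighted_estimate(metric, delta, metric_obs, sigma):
    """One simple metric-based combination: weight each model by the
    closeness of its present-day metric to the (synthetic) observation,
    then average the projected changes."""
    w = np.exp(-0.5 * ((metric - metric_obs) / sigma) ** 2)
    return np.sum(w * delta) / np.sum(w)

def perfect_model_test(metric, delta, sigma):
    """Leave-one-out perfect-model evaluation: each model in turn plays
    the role of truth, and the remaining models provide the estimate."""
    errors = []
    for i in range(len(metric)):
        keep = np.arange(len(metric)) != i
        est = metric_weighted_estimate(metric[keep], delta[keep],
                                       metric_obs=metric[i], sigma=sigma)
        errors.append(est - delta[i])    # compare to the "true" change
    return np.array(errors)

# toy ensemble: present-day metric values and projected summer warming
metric = np.random.normal(0.0, 1.0, 20)
delta = 3.0 + 0.8 * metric + np.random.normal(0.0, 0.3, 20)
rmse = np.sqrt(np.mean(perfect_model_test(metric, delta, sigma=0.5) ** 2))
```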
Oreshkov, Ognyan; Calsamiglia, John
2010-07-30
We propose a theory of adiabaticity in quantum Markovian dynamics based on a decomposition of the Hilbert space induced by the asymptotic behavior of the Lindblad semigroup. A central idea of our approach is that the natural generalization of the concept of eigenspace of the Hamiltonian in the case of Markovian dynamics is a noiseless subsystem with a minimal noisy cofactor. Unlike previous attempts to define adiabaticity for open systems, our approach deals exclusively with physical entities and provides a simple, intuitive picture at the Hilbert-space level, linking the notion of adiabaticity to the theory of noiseless subsystems. As two applications of our theory, we propose a general framework for decoherence-assisted computation in noiseless codes and a dissipation-driven approach to holonomic computation based on adiabatic dragging of subsystems that is generally not achievable by nondissipative means.
Dynamic sensor management of dispersed and disparate sensors for tracking resident space objects
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2008-04-01
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects (RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a previously developed and tested sensor management objective function, the Posterior Expected Number of Targets (PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible (SBV) sensor and ground based radars.
GRACE time-variable gravity field recovery using an improved energy balance approach
NASA Astrophysics Data System (ADS)
Shang, Kun; Guo, Junyi; Shum, C. K.; Dai, Chunli; Luo, Jia
2015-12-01
A new approach based on the energy conservation principle for satellite gravimetry missions has been developed; it yields more accurate estimation of in situ geopotential difference observables using K-band ranging (KBR) measurements from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. This new approach preserves more gravity information sensed by KBR range-rate measurements and reduces orbit error compared to previous energy balance methods. Results from analysis of 11 yr of GRACE data indicate that the resulting geopotential difference estimates agree well with predicted values from official Level 2 solutions, with a much higher correlation of 0.9, as compared to the 0.5-0.8 reported by previously published energy balance studies. We demonstrate that our approach produces a time-variable gravity solution comparable with the Level 2 solutions. The regional GRACE temporal gravity solutions over Greenland reveal that a substantially higher temporal resolution is achievable at 10-d sampling as compared to the official monthly solutions, but without compromising spatial resolution or requiring regularization or post-processing.
NASA Technical Reports Server (NTRS)
Jankovsky, Robert; Tverdokhlebov, Sergery; Manzella, David
1999-01-01
The development of Hall thrusters with powers ranging from tens of kilowatts to in excess of one hundred kilowatts is considered, based on renewed interest in high-power, high-thrust electric propulsion applications. An approach to develop such thrusters based on previous experience is discussed. It is shown that previous experimental data taken with thrusters of 10 kW input power and less can be used. Potential mass savings due to the design of high power Hall thrusters are discussed. Both xenon and alternative thruster propellants are considered, as are technological issues that will challenge the design of high power Hall thrusters. Finally, the implications of such a development effort with regard to ground testing and spacecraft integration issues are discussed.
Illeghems, Koen; De Vuyst, Luc; Papalexandratou, Zoi; Weckx, Stefan
2012-01-01
This is the first report on the phylogenetic analysis of the community diversity of a single spontaneous cocoa bean box fermentation sample through a metagenomic approach involving 454 pyrosequencing. Several sequence-based and composition-based taxonomic profiling tools were used and evaluated to avoid software-dependent results and their outcome was validated by comparison with previously obtained culture-dependent and culture-independent data. Overall, this approach revealed a wider bacterial (mainly γ-Proteobacteria) and fungal diversity than previously found. Further, the use of a combination of different classification methods, in a software-independent way, helped to understand the actual composition of the microbial ecosystem under study. In addition, bacteriophage-related sequences were found. The bacterial diversity depended partially on the methods used, as composition-based methods predicted a wider diversity than sequence-based methods, and as classification methods based solely on phylogenetic marker genes predicted a more restricted diversity compared with methods that took all reads into account. The metagenomic sequencing analysis identified Hanseniaspora uvarum, Hanseniaspora opuntiae, Saccharomyces cerevisiae, Lactobacillus fermentum, and Acetobacter pasteurianus as the prevailing species. Also, the presence of occasional members of the cocoa bean fermentation process was revealed (such as Erwinia tasmaniensis, Lactobacillus brevis, Lactobacillus casei, Lactobacillus rhamnosus, Lactococcus lactis, Leuconostoc mesenteroides, and Oenococcus oeni). Furthermore, the sequence reads associated with viral communities were of a restricted diversity, dominated by Myoviridae and Siphoviridae, and reflecting Lactobacillus as the dominant host. To conclude, an accurate overview of all members of a cocoa bean fermentation process sample was revealed, indicating the superiority of metagenomic sequencing over previously used techniques.
Geometric Lagrangian approach to the physical degree of freedom count in field theory
NASA Astrophysics Data System (ADS)
Díaz, Bogar; Montesinos, Merced
2018-05-01
To circumvent some technical difficulties faced by the geometric Lagrangian approach to the physical degree of freedom count presented in the work of Díaz, Higuita, and Montesinos [J. Math. Phys. 55, 122901 (2014)] that prevent its direct implementation to field theory, in this paper, we slightly modify the geometric Lagrangian approach in such a way that its resulting version works perfectly for field theory (and for particle systems, of course). As in previous work, the current approach also allows us to directly get the Lagrangian constraints, a new Lagrangian formula for the counting of the number of physical degrees of freedom, the gauge transformations, and the number of first- and second-class constraints for any action principle based on a Lagrangian depending on the fields and their first derivatives without performing any Dirac's canonical analysis. An advantage of this approach over the previous work is that it also allows us to handle the reducibility of the constraints and to get the off-shell gauge transformations. The theoretical framework is illustrated in 3-dimensional generalized general relativity (Palatini and Witten's exotic actions), Chern-Simons theory, 4-dimensional BF theory, and 4-dimensional general relativity given by Palatini's action with a cosmological constant.
Electrical resistivity and thermal conductivity of liquid aluminum in the two-temperature state
NASA Astrophysics Data System (ADS)
Petrov, Yu V.; Inogamov, N. A.; Mokshin, A. V.; Galimzyanov, B. N.
2018-01-01
The electrical resistivity and thermal conductivity of liquid aluminum in the two-temperature state are calculated by using the relaxation time approach and the structure factor of ions obtained by molecular dynamics simulation. Resistivity within the Ziman-Evans approach is also considered and turns out to be higher than that obtained from the conductivity previously calculated via the relaxation time. Calculations based on constructing the ion structure factor through classical molecular dynamics and a kinetic equation for electrons are more economical in terms of computing resources and give results close to Kubo-Greenwood calculations based on quantum molecular dynamics.
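For orientation, the Ziman picture referenced here ties the resistivity to the ion structure factor. Schematically, and up to a prefactor that depends on normalization conventions (the abstract does not give the working formula, so only the proportionality is quoted):

```latex
\rho \;\propto\; \int_{0}^{2k_F} S(q)\,\lvert u(q)\rvert^{2}\, q^{3}\, \mathrm{d}q,
```

where S(q) is the ion-ion structure factor (here obtained from classical molecular dynamics), u(q) the electron-ion (pseudo)potential form factor, and k_F the Fermi wavevector.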
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
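A toy 1-D illustration of diffusion-based sifting: a simple linear heat-equation smoothing stands in for the paper's nonlinear diffusion operator as the mean-envelope estimator, and each mode is peeled off by subtracting that estimate. Step sizes, boundary handling (periodic via np.roll), and function names are illustrative assumptions.

```python
import numpy as np

def diffusion_mean_envelope(s, dt=0.1, steps=200):
    """Toy linear-diffusion smoothing as a stand-in for the nonlinear
    PDE-based mean-envelope estimator described above."""
    m = s.copy()
    for _ in range(steps):
        lap = np.roll(m, -1) - 2 * m + np.roll(m, 1)  # 1-D Laplacian (periodic)
        m += dt * lap                                 # explicit heat step
    return m

def pde_sift(s, n_modes=3, **kw):
    """EMD-style decomposition: repeatedly subtract the estimated mean
    envelope to peel off one oscillatory mode at a time."""
    modes, residual = [], s.copy()
    for _ in range(n_modes):
        mean = diffusion_mean_envelope(residual, **kw)
        modes.append(residual - mean)                 # candidate IMF
        residual = mean
    return modes, residual

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
modes, trend = pde_sift(signal)
```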
Wikman, Anna; Kukkola, Laura; Börjesson, Helene; Cernvall, Martin; Woodford, Joanne; Grönqvist, Helena; von Essen, Louise
2018-04-18
Parenting a child through cancer is a distressing experience, and a subgroup of parents report negative long-term psychological consequences years after treatment completion. However, there is a lack of evidence-based psychological interventions for parents who experience distress in relation to a child's cancer disease after end of treatment. One aim of this study was to develop an internet-administered, cognitive behavior therapy-based, psychological, guided, self-help intervention (ENGAGE) for parents of children previously treated for cancer. Another aim was to identify acceptable procedures for future feasibility and efficacy studies testing and evaluating the intervention. Participatory action research methodology was used. The study included face-to-face workshops and related Web-based exercises. A total of 6 parents (4 mothers, 2 fathers) of children previously treated for cancer were involved as parent research partners. Moreover, 2 clinical psychologists were involved as expert research partners. Research partners and research group members worked collaboratively throughout the study. Data were analyzed iteratively using written summaries of the workshops and Web-based exercises parallel to data collection. A 10-week, internet-administered, cognitive behavior therapy-based, psychological, guided, self-help intervention (ENGAGE) was developed in collaboration with parent research partners and expert research partners. The content of the intervention, mode and frequency of e-therapist support, and the individualized approach for feedback were modified based on the research partner input. Shared solutions were reached regarding the type and timing of support from an e-therapist (eg, initial video or telephone call, multiple methods of e-therapist contact), duration and timing of intervention (eg, 10 weeks, 30-min assessments), and the removal of unnecessary support functions (eg, removal of chat and forum functions). Preferences for study procedures in future studies testing and evaluating the intervention were discussed; consensus was not reached for all aspects. To the best of our knowledge, this study is the first use of a participatory action research approach to develop a psychological intervention for parents of children previously treated for cancer and to identify acceptable study procedures. Involvement of parents with lived experience was vital in the development of a potentially relevant and acceptable intervention for this population. ©Anna Wikman, Laura Kukkola, Helene Börjesson, Martin Cernvall, Joanne Woodford, Helena Grönqvist, Louise von Essen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.04.2018.
Re-Storying an Entrepreneurial Identity: Education, Experience and Self-Narrative
ERIC Educational Resources Information Center
Harmeling, Susan S.
2011-01-01
Purpose: This paper aims to explore the ways in which entrepreneurship education may serve as an identity workspace. Design/methodology/approach: This is a conceptual/theoretical paper based on previously completed empirical work. Findings: The paper makes the connection between worldmaking, experience, action and identity. Practical implications:…
Embracing the Cloud: Six Ways to Look at the Shift to Cloud Computing
ERIC Educational Resources Information Center
Ullman, David F.; Haggerty, Blake
2010-01-01
Cloud computing is the latest paradigm shift for the delivery of IT services. Where previous paradigms (centralized, decentralized, distributed) were based on fairly straightforward approaches to technology and its management, cloud computing is radical in comparison. The literature on cloud computing, however, suffers from many divergent…
Child-Initiated Learning, the Outdoor Environment and the "Underachieving" Child
ERIC Educational Resources Information Center
Maynard, Trisha; Waters, Jane; Clement, Jennifer
2013-01-01
The Foundation Phase for Wales advocates an experiential, play-based approach to learning for children aged three to seven years that includes child-initiated activity within the outdoor environment. In previous research, Foundation Phase practitioners maintained that children perceived to be "underachieving" within the classroom came…
A Cognitive Apprenticeship Approach to Facilitating Web-Based Collaborative Problem Solving
ERIC Educational Resources Information Center
Kuo, Fan-Ray; Hwang, Gwo-Jen; Chen, Szu-Chuang; Chen, Sherry Y.
2012-01-01
Enhancing students' problem-solving abilities has been recognized as an important and challenging issue for technology-enhanced learning. Thus, previous research has attempted to address this issue by developing various mechanisms, among which a cognitive apprenticeship model can particularly enhance students' abilities. However, it is not clear…
ERIC Educational Resources Information Center
American Psychological Association (APA), 2008
2008-01-01
This report focuses on psychological practice with children and adolescents, concurring with a previous task force report that integrating science and practice must be a priority. In addition, the report advocates that developmental considerations and cultural/contextual factors warrant specific, distinctive attention by researchers and…
Human Figure Drawings: Abusing the Abused.
ERIC Educational Resources Information Center
Bardos, Achilles N.
1993-01-01
Responds to previous article (Motta, Little, and Tobin, this issue) which reviewed data-based studies on figure drawings and found little support for their validity or use in assessing personality, behavior, emotion, or intellectual functioning. Notes recent approaches to interpretation of human figure drawings and cites flaws in argument against…
Self-Explanation and Explanatory Feedback in Games: Individual Differences, Gameplay, and Learning
ERIC Educational Resources Information Center
Killingsworth, Stephen S.; Clark, Douglas B.; Adams, Deanne M.
2015-01-01
Previous research has demonstrated the efficacy of two explanation-based approaches for increasing learning in educational games. The first involves asking students to explain their answers (self-explanation) and the second involves providing correct explanations (explanatory feedback). This study (1) compared self-explanation and explanatory…
Book Selection, Collection Development, and Bounded Rationality.
ERIC Educational Resources Information Center
Schwartz, Charles A.
1989-01-01
Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The role of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…
Wheat mill stream properties for discrete element method modeling
USDA-ARS?s Scientific Manuscript database
A discrete phase approach based on individual wheat kernel characteristics is needed to overcome the limitations of previous statistical models and accurately predict the milling behavior of wheat. As a first step to develop a discrete element method (DEM) model for the wheat milling process, this s...
USDA-ARS?s Scientific Manuscript database
Current molecular methodologies, specifically DNA-based approaches, provide access to previously hidden soil biodiversity and are routinely employed in environmental studies of microbial ecology. Selection of cell lysis methodology is critical to community analyses due to the inability of any singul...
Espousing Democratic Leadership Practices: A Study of Values in Action
ERIC Educational Resources Information Center
Devereaux, Lorraine
2003-01-01
This article examines principals' espoused values and their values in action. It provides a reanalysis of previously collected data through a values lens. The original research study was an international quantitative and qualitative investigation of principals' leadership approaches that was based in 15 schools. This particular excerpt of the…
Clustering Students Based on Motivation to Learn: A Blended Learning Approach
ERIC Educational Resources Information Center
Rentroia-Bonito, Maria Alexandra; Gonçalves, Daniel; Jorge, Joaquim A.
2015-01-01
Technological advances during the last decade have provided huge possibilities to support e-learning. However, there are still concerns regarding Return-on-Investment (ROI) of e-learning, its sustainability within organizational boundaries and effectiveness across potential learner groups. Much previous research has concentrated on learners'…
The Role of Simulation Case Studies in Enterprise Education
ERIC Educational Resources Information Center
Tunstall, Richard; Lynch, Martin
2010-01-01
Purpose: This paper aims to explore the role of electronic simulation case studies in enterprise education, their effectiveness, and their relationship to traditional forms of classroom-based approaches to experiential learning. The paper seeks to build on previous work within the field of enterprise and management education, specifically in…
Investigating Storage and Retrieval Processes of Directed Forgetting: A Model-Based Approach
ERIC Educational Resources Information Center
Rummel, Jan; Marevic, Ivan; Kuhlmann, Beatrice G.
2016-01-01
Intentional forgetting of previously learned information is an adaptive cognitive capability of humans but its cognitive underpinnings are not yet well understood. It has been argued that it strongly depends on the presentation method whether forgetting instructions alter storage or retrieval stages (Basden, Basden, & Gargano, 1993). In…
The Math Master Level 2. Ages 6-8.
ERIC Educational Resources Information Center
Levy, Barbara W.
This booklet, designed for ages 6-8, is the second in a series designed to help teachers develop a more positive and creative approach to giving work on mathematics skills to children. It is based on objectives concerning creativity, fostering independent thinking, using experiences, using mastery, reinforcing previously learned skills,…
The Math Master Level 4. Ages 10-12.
ERIC Educational Resources Information Center
Levy, Barbara W.
This booklet, designed for ages 10-12, is the fourth in a series designed to help teachers develop a more positive and creative approach to giving work on mathematics skills to children. It is based on objectives concerning creativity, fostering independent thinking, using experiences, using mastery, reinforcing previously learned skills,…
The Math Master Level 1. Preschool-Age 6.
ERIC Educational Resources Information Center
Levy, Barbara W.
This booklet, designed for preschool through age 6, is the first in a series designed to help teachers develop a more positive and creative approach to giving work on mathematics skills to children. It is based on objectives concerning creativity, fostering independent thinking, using experiences, using mastery, reinforcing previously learned…
The Math Master Level 3. Ages 8-10.
ERIC Educational Resources Information Center
Levy, Barbara W.
This booklet, designed for ages 8-10, is the third in a series designed to help teachers develop a more positive and creative approach to giving work on mathematics skills to children. It is based on objectives concerning creativity, fostering independent thinking, using experiences, using mastery, reinforcing previously learned skills,…
Novel Approach to Facilitating Tradeoff Multi-Objective Grouping Optimization
ERIC Educational Resources Information Center
Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping
2016-01-01
The grouping problem is critical in collaborative learning (CL) because of the complexity and difficulty of adequate grouping, based on various grouping criteria and numerous learners. Previous studies have paid attention to certain research questions, and consideration of a number of learner characteristics has arisen. Such a multi-objective…
Decision-Making Accuracy of CBM Progress-Monitoring Data
ERIC Educational Resources Information Center
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
Assessing Quality of Critical Thought in Online Discussion
ERIC Educational Resources Information Center
Weltzer-Ward, Lisa; Baltes, Beate; Lynn, Laura Knight
2009-01-01
Purpose: The purpose of this paper is to describe a theoretically based coding framework for an integrated analysis and assessment of critical thinking in online discussion. Design/methodology/approach: The critical thinking assessment framework (TAF) is developed through review of theory and previous research, verified by comparing results to…
The colloquial approach: An active learning technique
NASA Astrophysics Data System (ADS)
Arce, Pedro
1994-09-01
This paper addresses the important problem of the effectiveness of teaching methodologies in fundamental engineering courses such as transport phenomena. An active learning strategy, termed the colloquial approach, is proposed in order to increase student involvement in the learning process. This methodology is a considerable departure from traditional methods that rely on solo lecturing. It is based on guided discussions, and it promotes student understanding of new concepts by directing the student to construct new ideas by building upon current knowledge and by focusing on key cases that capture the essential aspects of new concepts. The colloquial approach motivates the student to participate in discussions, to develop detailed notes, and to design (or construct) his or her own explanation for a given problem. This paper discusses the main features of the colloquial approach within the framework of other current and previous techniques. Problem-solving strategies and the need for new textbooks and for future investigations based on the colloquial approach are also outlined.
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
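To make the probabilistic step concrete, here is a minimal Python sketch of per-voxel probability maps from multi-channel intensities via multiclass LDA, with synthetic data standing in for real MR weightings; all names and numbers are illustrative, not from the paper.

```python
# Sketch: per-voxel tissue probability maps from multi-channel MR intensities
# via multiclass LDA, loosely following the probabilistic-framework idea.
# Synthetic data stands in for real MR weightings; all names are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_channels = 3                       # e.g. three differently weighted channels

# Labeled training voxels: class 0 = background, 1 = liver, 2 = fat
means = np.array([[0.2, 0.3, 0.1], [0.6, 0.4, 0.5], [0.8, 0.7, 0.9]])
X = np.vstack([means[c] + 0.05 * rng.standard_normal((500, n_channels))
               for c in range(3)])
y = np.repeat([0, 1, 2], 500)

lda = LinearDiscriminantAnalysis()   # fast supervised dimensionality reduction
lda.fit(X, y)

# Unlabeled volume (flattened to voxels x channels) -> liver probability map
volume = rng.random((32, 32, 8, n_channels))
probs = lda.predict_proba(volume.reshape(-1, n_channels))[:, 1]
prob_map = probs.reshape(32, 32, 8)

# A region-growing pass would start from the most confident voxel
seed = np.unravel_index(np.argmax(prob_map), prob_map.shape)
print("seed voxel:", seed, "p(liver) =", prob_map[seed])
```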
Groundwater remediation solutions at Hanford
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmore, T.J.; Truex, M.J.; Williams, M.D.
2007-07-01
In 2006, Congress provided funding to the U.S. Department of Energy (DOE) to study new technologies that could be used to treat contamination from the Hanford Site that might impact the Columbia River. DOE identified three high priority sites for remediation that had groundwater contamination migrating towards the Columbia River. The contaminants included strontium-90, uranium, and chromium. A natural systems approach was taken that uses a mass balance concept to frame the problem and determine the most appropriate remedial approach; this provides for a scientifically based remedial decision. The technologies selected to address these contaminants included an apatite adsorption barrier coupled with phyto-remediation to address the strontium-90 contamination, injection of polyphosphate into the subsurface to sequester uranium, and a bioremediation approach to reduce chromium contamination in the groundwater. The ability to provide scientifically based approaches at these sites was in large part due to work the Pacific Northwest National Laboratory developed under previous DOE Office of Science and Office of Environmental Management projects. (authors)
NASA Astrophysics Data System (ADS)
Antle, J. M.; Valdivia, R. O.; Jones, J.; Rosenzweig, C.; Ruane, A. C.
2013-12-01
This presentation provides an overview of the new methods developed by researchers in the Agricultural Model Inter-comparison and Improvement Project (AgMIP) for regional climate impact assessment and analysis of adaptation in agricultural systems. This approach represents a departure from approaches in the literature in several dimensions. First, the approach is based on the analysis of agricultural systems (not individual crops) and is inherently trans-disciplinary: it is based on a deep collaboration among a team of climate scientists, agricultural scientists and economists to design and implement regional integrated assessments of agricultural systems. Second, in contrast to previous approaches that have imposed future climate on models based on current socio-economic conditions, this approach combines bio-physical and economic models with a new type of pathway analysis (Representative Agricultural Pathways) to parameterize models consistent with a plausible future world in which climate change would be occurring. Third, adaptation packages for the agricultural systems in a region are designed by the research team with a level of detail that is useful to decision makers, such as research administrators and donors, who are making agricultural R&D investment decisions. The approach is illustrated with examples from AgMIP's projects currently being carried out in Africa and South Asia.
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are achieved in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
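As a rough illustration of the gridding step credited with the speed-up, the following Python sketch grids non-Cartesian k-space samples onto a Cartesian grid with a simple triangular kernel; a real pipeline (such as IMPATIENT's CUDA kernels) would add density compensation, a Kaiser-Bessel kernel, and deapodization, all omitted here.

```python
# Sketch: gridding non-Cartesian k-space samples onto a Cartesian grid with a
# small separable kernel, the basic operation behind the reported speed-up.
# This toy version omits density compensation and deapodization.
import numpy as np

def grid_samples(kx, ky, data, n, width=2):
    """Accumulate samples at fractional k-space coords (in grid units)."""
    grid = np.zeros((n, n), dtype=complex)
    for x, y, d in zip(kx, ky, data):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        for i in range(x0 - width + 1, x0 + width + 1):
            for j in range(y0 - width + 1, y0 + width + 1):
                if 0 <= i < n and 0 <= j < n:
                    # triangular kernel as a stand-in for Kaiser-Bessel
                    w = max(0, 1 - abs(i - x) / width) \
                        * max(0, 1 - abs(j - y) / width)
                    grid[i, j] += w * d
    return grid

rng = np.random.default_rng(1)
kx, ky = rng.uniform(0, 64, 2000), rng.uniform(0, 64, 2000)  # sample cloud
data = rng.standard_normal(2000) + 1j * rng.standard_normal(2000)
image = np.fft.ifft2(np.fft.ifftshift(grid_samples(kx, ky, data, 64)))
print(image.shape)
```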
Factors influencing the utilization of empirically supported treatments for eating disorders.
Simmons, Angela M; Milnes, Suzanne M; Anderson, Drew A
2008-01-01
This study expands upon previous research investigating the use of empirically supported treatments (ESTs) for eating disorders by surveying a large sample of clinicians who specialize in treating eating disorders. Surveys developed for this study were sent to 698 members of a large, professional, eating disorder organization who were listed as treatment providers on the organization's website. Despite clinicians reporting frequent use of cognitive-behavioral therapy (CBT) techniques, most identified something other than CBT or interpersonal psychotherapy (IPT) as their primary approach to treatment. In contrast with previous research, the majority had received prior training in the use of manual-based treatments. However, consistent with previous investigations, most denied regular use of such treatments. Although manual-based CBT and IPT are referred to as "treatments of choice," professional clinicians in the field are not consistently using them. Responses suggest several barriers to the utilization of ESTs in practice.
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixation in free-viewing scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvement to human attention prediction, CNN-based attention models still need to leverage multi-scale features more efficiently. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
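A compact PyTorch sketch of the skip-layer, deeply supervised idea follows: every stage emits its own full-resolution saliency map and receives its own loss, and a 1x1 fusion layer combines them. Layer sizes and the toy loss are illustrative assumptions, not the paper's architecture.

```python
# Sketch: skip-layer saliency network with deep supervision -- each stage
# emits a full-resolution prediction with its own loss; a fusion layer
# combines them. Sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                          nn.MaxPool2d(2))
            for c_in, c_out in [(3, 16), (16, 32), (32, 64)]])
        # 1x1 heads turn each stage's features into a 1-channel saliency map
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        size, maps, feat = x.shape[2:], [], x
        for stage, head in zip(self.stages, self.heads):
            feat = stage(feat)
            maps.append(F.interpolate(head(feat), size=size, mode='bilinear',
                                      align_corners=False))
        return maps, self.fuse(torch.cat(maps, dim=1))

model = SkipSaliency()
img, gt = torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
maps, fused = model(img)
# deep supervision: a loss on every intermediate map plus the fused output
loss = sum(F.binary_cross_entropy_with_logits(m, gt) for m in maps)
loss = loss + F.binary_cross_entropy_with_logits(fused, gt)
loss.backward()
print(fused.shape, float(loss))
```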
Activity Recognition on Streaming Sensor Data.
Krishnan, Narayanan C; Cook, Diane J
2014-02-01
Many real-world applications that focus on addressing the needs of a human require information about the activities being performed by the human in real time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either scripted or pre-segmented sequences of sensor events related to activities. In this paper we propose and evaluate a sliding-window-based approach to perform activity recognition in an online or streaming fashion, recognizing activities as and when new sensor events are recorded. To account for the fact that different activities can be best characterized by different window lengths of sensor events, we incorporate time decay and mutual-information-based weighting of sensor events within a window. Additional contextual information, in the form of the previous activity and the activity of the previous window, is also appended to the feature describing a sensor window. Experiments conducted to evaluate these techniques on real-world smart home datasets suggest that combining mutual-information-based weighting of sensor events and adding past contextual information into the feature leads to the best performance for streaming activity recognition.
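The following Python sketch illustrates the windowing idea under stated assumptions: a fixed-length sliding window of (timestamp, sensor) events, exponential time decay multiplied by precomputed mutual-information weights, and the previous window's label appended as context; the stream, weights, and "classifier" are all stand-ins.

```python
# Sketch: streaming activity recognition over a sliding window of sensor
# events, with exponential time decay times mutual-information-style sensor
# weights, plus the previous window's label as extra context. The weighting
# is a simplified stand-in for the scheme described in the abstract.
import numpy as np
from collections import deque

N_SENSORS, WINDOW = 8, 10
# mi[s] would be estimated offline as MI between sensor s and activity labels
mi = np.linspace(0.2, 1.0, N_SENSORS)

def featurize(events, prev_label, decay=0.5):
    """events: deque of (timestamp, sensor_id); returns a feature vector."""
    t_now = events[-1][0]
    counts = np.zeros(N_SENSORS)
    for t, s in events:
        counts[s] += np.exp(-decay * (t_now - t)) * mi[s]  # decay * MI weight
    return np.append(counts, prev_label)

rng = np.random.default_rng(2)
window, prev_label = deque(maxlen=WINDOW), 0
for t in range(50):                       # stand-in for a live sensor stream
    window.append((t, int(rng.integers(N_SENSORS))))
    if len(window) == WINDOW:
        x = featurize(window, prev_label)
        prev_label = int(x[:N_SENSORS].argmax() % 3)   # dummy "classifier"
        print(t, prev_label)
```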
Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions
Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas
2012-01-01
We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
Hsu, Yi-Yu; Chen, Hung-Yu; Kao, Hung-Yu
2013-01-01
Background Determining the semantic relatedness of two biomedical terms is an important task for many text-mining applications in the biomedical field. Previous studies, such as those using ontology-based and corpus-based approaches, measured semantic relatedness by using information from the structure of biomedical literature, but these methods are limited by the small size of training resources. To increase the size of training datasets, the outputs of search engines have been used extensively to analyze the lexical patterns of biomedical terms. Methodology/Principal Findings In this work, we propose the Mutually Reinforcing Lexical Pattern Ranking (ReLPR) algorithm for learning and exploring the lexical patterns of synonym pairs in biomedical text. ReLPR employs lexical patterns and their pattern containers to assess the semantic relatedness of biomedical terms. By combining sentence structures and the linking activities between containers and lexical patterns, our algorithm can explore the correlation between two biomedical terms. Conclusions/Significance The average correlation coefficient of the ReLPR algorithm was 0.82 for various datasets. The results of the ReLPR algorithm were significantly superior to those of previous methods. PMID:24348899
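A minimal sketch of the mutual-reinforcement idea, assuming a toy incidence matrix between containers and lexical patterns: scores propagate back and forth, HITS-style, until they stabilize. This illustrates the flavor of ReLPR rather than its exact update rules.

```python
# Sketch: HITS-style mutual reinforcement between lexical patterns and the
# "containers" (sentence structures) that hold them, in the spirit of ReLPR.
# Pattern and container scores update from each other to a fixed point; the
# toy incidence matrix below is illustrative.
import numpy as np

# A[i, j] = 1 if container i contains lexical pattern j
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 0, 1]], dtype=float)

containers = np.ones(A.shape[0])
patterns = np.ones(A.shape[1])
for _ in range(50):                    # power iteration to a fixed point
    patterns = A.T @ containers        # pattern score from its containers
    containers = A @ patterns          # container score from its patterns
    patterns /= np.linalg.norm(patterns)
    containers /= np.linalg.norm(containers)

print("pattern scores:", np.round(patterns, 3))
```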
Deng, Zhi-An; Wang, Guofeng; Hu, Ying; Cui, Yang
2016-01-01
This paper proposes a novel heading estimation approach for indoor pedestrian navigation using the built-in inertial sensors on a smartphone. Unlike previous approaches constraining the carrying position of a smartphone on the user’s body, our approach gives the user a larger freedom by implementing automatic recognition of the device carrying position and subsequent selection of an optimal strategy for heading estimation. We firstly predetermine the motion state by a decision tree using an accelerometer and a barometer. Then, to enable accurate and computational lightweight carrying position recognition, we combine a position classifier with a novel position transition detection algorithm, which may also be used to avoid the confusion between position transition and user turn during pedestrian walking. For a device placed in the trouser pockets or held in a swinging hand, the heading estimation is achieved by deploying a principal component analysis (PCA)-based approach. For a device held in the hand or against the ear during a phone call, user heading is directly estimated by adding the yaw angle of the device to the related heading offset. Experimental results show that our approach can automatically detect carrying positions with high accuracy, and outperforms previous heading estimation approaches in terms of accuracy and applicability. PMID:27187391
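For the pocket/swinging-hand case, the PCA step can be illustrated in a few lines of Python: the dominant eigenvector of the horizontal-acceleration covariance is taken as the walking axis (with the usual 180-degree ambiguity left unresolved here). The synthetic signal and numbers are assumptions.

```python
# Sketch: PCA-based walking-direction estimation from horizontal
# acceleration, the strategy used for pocket/swinging-hand positions.
# The dominant oscillation axis is taken as the heading axis.
import numpy as np

rng = np.random.default_rng(3)
true_heading = np.deg2rad(40.0)
t = np.linspace(0, 5, 500)
# synthetic horizontal acceleration: oscillation along the walking direction
a = np.outer(np.sin(2 * np.pi * 2 * t), [np.cos(true_heading),
                                          np.sin(true_heading)])
a += 0.1 * rng.standard_normal(a.shape)

cov = np.cov(a, rowvar=False)                 # 2x2 covariance of (ax, ay)
eigvals, eigvecs = np.linalg.eigh(cov)
principal = eigvecs[:, np.argmax(eigvals)]    # dominant oscillation axis
heading = np.arctan2(principal[1], principal[0])
print("estimated heading (deg):", round(np.degrees(heading) % 180, 1))
```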
Molgaard Nielsen, Anne; Hestbaek, Lise; Vach, Werner; Kent, Peter; Kongsted, Alice
2017-08-09
Heterogeneity in patients with low back pain is well recognised and different approaches to subgrouping have been proposed. One statistical technique that is increasingly being used is Latent Class Analysis, as it performs subgrouping based on pattern recognition with high accuracy. Previously, we developed two novel suggestions for subgrouping patients with low back pain based on Latent Class Analysis of patient baseline characteristics (patient history and physical examination), which resulted in 7 subgroups when using a single-stage analysis, and 9 subgroups when using a two-stage approach. However, their prognostic capacity was unexplored. This study (i) determined whether the subgrouping approaches were associated with the future outcomes of pain intensity, pain frequency and disability, (ii) assessed whether one of these two approaches was more strongly or more consistently associated with these outcomes, and (iii) assessed the performance of the novel subgroupings as compared to the following variables: two existing subgrouping tools (STarT Back Tool and Quebec Task Force classification), four baseline characteristics and a group of previously identified domain-specific patient categorisations (collectively, the 'comparator variables'). This was a longitudinal cohort study of 928 patients consulting for low back pain in primary care. The associations between each subgroup approach and outcomes at 2 weeks, 3 and 12 months, and with weekly SMS responses were tested in linear regression models, and their prognostic capacity (variance explained) was compared to that of the comparator variables listed above. The two previously identified subgroupings were similarly associated with all outcomes. The prognostic capacity of both subgroupings was better than that of the comparator variables, except for participants' recovery beliefs and the domain-specific categorisations, but was still limited. The explained variance ranged from 4.3% to 6.9% for pain intensity and from 6.8% to 20.3% for disability, and was highest at the 2-week follow-up. Latent Class-derived subgroups provided additional prognostic information when compared to a range of variables, but the improvements were not substantial enough to warrant further development into a new prognostic tool. Further research could investigate whether these novel subgrouping approaches may help to improve existing tools that subgroup low back pain patients.
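A sketch of the "variance explained" comparison, on synthetic data: regress the outcome on subgroup dummy variables and report R-squared, alongside a single baseline covariate for contrast. The sample size echoes the study's n = 928, but everything else is illustrative.

```python
# Sketch: comparing prognostic capacity (variance explained) of a subgroup
# assignment versus a single baseline covariate via linear regression.
# All data are synthetic; only n = 928 echoes the study.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(14)
n = 928
subgroup = rng.integers(0, 7, n)                  # 7 latent-class subgroups
baseline_pain = rng.uniform(0, 10, n)             # one baseline covariate
effects = np.linspace(-1.0, 1.0, 7)
outcome = 3 + effects[subgroup] + 0.2 * baseline_pain + rng.normal(0, 2, n)

X_sub = (subgroup[:, None] == np.arange(1, 7)).astype(float)  # dummy coding
r2_sub = LinearRegression().fit(X_sub, outcome).score(X_sub, outcome)

X_base = baseline_pain.reshape(-1, 1)
r2_base = LinearRegression().fit(X_base, outcome).score(X_base, outcome)
print(f"R^2 subgroups: {r2_sub:.3f}   baseline covariate: {r2_base:.3f}")
```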
Shukla, Nagesh; Wickramasuriya, Rohan; Miller, Andrew; Perez, Pascal
2015-11-01
This paper proposes an integrated modelling approach for location planning of radiotherapy treatment services based on cancer incidence and road-network-based accessibility. Previous research efforts have established travel distance/time barriers as a key factor affecting access to cancer treatment services, and epidemiological studies have shown that cancer incidence rates vary with population demography. Our study is built on the evidence that the travel distances to treatment centres and the demographic profiles of the accessible regions greatly influence the uptake of cancer radiotherapy (RT) services. An integrated service planning approach that combines spatially-explicit cancer incidence projections with the placement of new RT services based on road-network-based accessibility measures has never been attempted. This research presents a novel approach for the location planning of RT services and demonstrates its viability by modelling cancer incidence rates for different age-sex groups in New South Wales, Australia based on observed cancer incidence trends, together with estimations of the road-network-based access to current NSW treatment centres. Using three indices (General Efficiency, Service Availability and Equity), we show how the best location for a new RT centre may be chosen when there are multiple competing locations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
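A minimal Python sketch of the index-based site comparison, under assumed data: projected incidence per region and road travel times to candidate sites are combined into an incidence-weighted mean travel time and a 60-minute coverage share, loose stand-ins for the paper's General Efficiency and Service Availability indices.

```python
# Sketch: scoring candidate radiotherapy-centre locations by combining
# projected incidence with road travel times. The matrix and thresholds are
# illustrative stand-ins for real network and registry data.
import numpy as np

rng = np.random.default_rng(4)
n_regions, candidates = 12, [0, 5, 9]
incidence = rng.integers(50, 500, n_regions)       # projected cases per region
travel = rng.uniform(5, 120, (n_regions, len(candidates)))  # minutes by road

existing = rng.uniform(10, 150, n_regions)   # time to nearest current centre
for j, site in enumerate(candidates):
    new_time = np.minimum(existing, travel[:, j])  # best of old and new
    # efficiency proxy: incidence-weighted mean travel time (lower is better)
    efficiency = np.average(new_time, weights=incidence)
    # availability proxy: share of projected cases within 60 minutes
    availability = incidence[new_time <= 60].sum() / incidence.sum()
    print(f"site {site}: mean time {efficiency:5.1f} min, "
          f"covered {availability:.0%}")
```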
Learning from project experiences using a legacy-based approach
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Majchrzak, Ann; Faraj, Samer
2005-01-01
As project teams become more widely used, the question of how to capitalize on the knowledge learned in project teams remains an open issue. Using previous research on shared cognition in groups, an approach to promoting post-project learning was developed. This Legacy Review concept was tested on four intact project teams. The results from those test sessions were used to develop a model of team learning via group cognitive processes. The model and supporting propositions are presented.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms that perform explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempt to match the observations to the predicted brightness temperatures.
Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices
Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh
2015-01-01
In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268(2015), 164–168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work. PMID:26479495
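A small Python sketch of the descriptor construction, assuming a Randić-type weighting as one instance of a functional matrix: the characteristic-polynomial coefficients serve as the graph descriptor, and two graphs are discriminated when their coefficient vectors differ.

```python
# Sketch: a polynomial-based graph descriptor from a functional matrix -- a
# Randic-type weighting is used here as one instance. Graphs whose
# coefficient vectors differ are discriminated; ties signal degeneracy.
import numpy as np

def functional_matrix(adj):
    """Randic-type weighting: M[i,j] = 1/sqrt(d_i * d_j) on edges."""
    deg = adj.sum(axis=1)
    with np.errstate(divide='ignore'):
        w = 1.0 / np.sqrt(np.outer(deg, deg))
    return np.where(adj > 0, w, 0.0)

def descriptor(adj):
    return np.round(np.poly(functional_matrix(adj)), 10)  # char.-poly coeffs

path4 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
star4 = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)
print("P4 :", descriptor(path4))
print("K13:", descriptor(star4))
print("discriminated:",
      not np.allclose(descriptor(path4), descriptor(star4)))
```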
Assembly and diploid architecture of an individual human genome via single-molecule technologies
Pendleton, Matthew; Sebra, Robert; Pang, Andy Wing Chun; Ummat, Ajay; Franzen, Oscar; Rausch, Tobias; Stütz, Adrian M; Stedman, William; Anantharaman, Thomas; Hastie, Alex; Dai, Heng; Fritz, Markus Hsi-Yang; Cao, Han; Cohain, Ariella; Deikus, Gintaras; Durrett, Russell E; Blanchard, Scott C; Altman, Roger; Chin, Chen-Shan; Guo, Yan; Paxinos, Ellen E; Korbel, Jan O; Darnell, Robert B; McCombie, W Richard; Kwok, Pui-Yan; Mason, Christopher E; Schadt, Eric E; Bashir, Ali
2015-01-01
We present the first comprehensive analysis of a diploid human genome that combines single-molecule sequencing with single-molecule genome maps. Our hybrid assembly markedly improves upon the contiguity observed from traditional shotgun sequencing approaches, with scaffold N50 values approaching 30 Mb, and we identified complex structural variants (SVs) missed by other high-throughput approaches. Furthermore, by combining Illumina short-read data with long reads, we phased both single-nucleotide variants and SVs, generating haplotypes with over 99% consistency with previous trio-based studies. Our work shows that it is now possible to integrate single-molecule and high-throughput sequence data to generate de novo assembled genomes that approach reference quality. PMID:26121404
Magnetic coupling between liquid 3He and a solid state substrate: a new approach
NASA Astrophysics Data System (ADS)
Klochkov, Alexander V.; Naletov, Vladimir V.; Tayurskii, Dmitrii A.; Tagirov, Murat S.; Suzuki, Haruhiko
2000-07-01
We suggest a new approach for solving the long-standing problem of magnetic coupling between liquid 3He and a solid state substrate at temperatures above the Fermi temperature. The approach is based on our previous careful investigations of the physical state of the solid substrate by means of several experimental methods (EPR, NMR, conductometry, and magnetization measurements). The developed approach allows, first, more detailed information about the magnetic coupling phenomenon to be obtained by varying the repetition time in pulsed NMR investigations of liquid 3He in contact with the solid state substrate and, second, the obtained dependences to be compared with NMR-cryoporometry and AFM-microscopy data.
NASA Astrophysics Data System (ADS)
Kastor, David; Ray, Sourya; Traschen, Jennie
2017-10-01
We study the problem of finding brane-like solutions to Lovelock gravity, adopting a general approach to establish conditions that a lower dimensional base metric must satisfy in order that a solution to a given Lovelock theory can be constructed in one higher dimension. We find that for Lovelock theories with generic values of the coupling constants, the Lovelock tensors (higher curvature generalizations of the Einstein tensor) of the base metric must all be proportional to the metric. Hence, allowed base metrics form a subclass of Einstein metrics. This subclass includes so-called ‘universal metrics’, which have been previously investigated as solutions to quantum-corrected field equations. For specially tuned values of the Lovelock couplings, we find that the Lovelock tensors of the base metric need to satisfy fewer constraints. For example, for Lovelock theories with a unique vacuum there is only a single such constraint, a case previously identified in the literature, and brane solutions can be straightforwardly constructed.
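The base-metric condition can be stated compactly; the display below is a sketch, with the proportionality constants and tensor notation assumed here for illustration rather than taken from the paper:

```latex
% Sketch of the generic-coupling condition described above (notation assumed):
% each Lovelock tensor of the base metric g must be proportional to g itself,
\[
  \mathcal{G}^{(k)}_{ab}[g] \;=\; c_k\, g_{ab} \qquad \text{for every order } k,
\]
% the k = 1 case is the Einstein tensor, so the base is in particular Einstein.
```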
Bialas, Andrzej
2011-01-01
Intelligent sensors experience security problems very similar to those inherent to other kinds of IT products or systems. The assurance for these products or systems creation methodologies, like Common Criteria (ISO/IEC 15408) can be used to improve the robustness of the sensor systems in high risk environments. The paper presents the background and results of the previous research on patterns-based security specifications and introduces a new ontological approach. The elaborated ontology and knowledge base were validated on the IT security development process dealing with the sensor example. The contribution of the paper concerns the application of the knowledge engineering methodology to the previously developed Common Criteria compliant and pattern-based method for intelligent sensor security development. The issue presented in the paper has a broader significance in terms that it can solve information security problems in many application domains. PMID:22164064
A broadband vibro-impacting power harvester with symmetrical piezoelectric bimorph-stops
NASA Astrophysics Data System (ADS)
Moss, S.; Barry, A.; Powlesland, I.; Galea, S.; Carman, G. P.
2011-04-01
The certification of retrofitted structural health monitoring (SHM) systems for use on aircraft raises a number of challenges. One critical issue is determining the optimal means of supplying power to these systems, given that access to the existing aircraft power system is often problematic. Previously, the Australian Defence Science and Technology Organisation has shown that a structural strain-based energy harvesting approach can be used to power a device for SHM of aircraft structure. Acceleration-based power harvesting from airframes can be more demanding than a strain-based approach because the vibration spectrum of an aircraft structure can vary dynamically with flight conditions. A vibration spectrum with varying frequency may severely limit the energy harvested by a single-degree-of-freedom resonance-based device, and hence a frequency agile or (relatively) broadband device is often required to maximize the energy harvested. This paper reports on an investigation into the use of a vibro-impact approach to construct a piezoelectric-based kinetic power harvester that can operate in the approximate frequency range of 29-63 Hz.
Andasari, Vivi; Roper, Ryan T.; Swat, Maciej H.; Chaplain, Mark A. J.
2012-01-01
In this paper we present a multiscale, individual-based simulation environment that integrates CompuCell3D for lattice-based modelling on the cellular level and Bionetsolver for intracellular modelling. CompuCell3D or CC3D provides an implementation of the lattice-based Cellular Potts Model or CPM (also known as the Glazier-Graner-Hogeweg or GGH model) and a Monte Carlo method based on the Metropolis algorithm for system evolution. The integration of CC3D for cellular systems with Bionetsolver for subcellular systems enables us to develop a multiscale mathematical model and to study the evolution of cell behaviour due to the dynamics inside the cells, capturing aspects of cell behaviour and interaction that are not possible using continuum approaches. We then apply this multiscale modelling technique to a model of cancer growth and invasion, based on a previously published model of Ramis-Conde et al. (2008) where individual cell behaviour is driven by a molecular network describing the dynamics of E-cadherin and β-catenin. In this model, which we refer to as the centre-based model, an alternative individual-based modelling technique was used, namely, a lattice-free approach. In many respects, the GGH or CPM methodology and the approach of the centre-based model have the same overall goal, that is, to mimic behaviours and interactions of biological cells. Although the mathematical foundations and computational implementations of the two approaches are very different, the results of the presented simulations are compatible with each other, suggesting that by using individual-based approaches we can formulate a natural way of describing complex multi-cell, multiscale models. The ability to easily reproduce results of one modelling approach using an alternative approach is also essential from a model cross-validation standpoint and also helps to identify any modelling artefacts specific to a given computational approach. PMID:22461894
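For orientation, a minimal Metropolis step of a Cellular Potts (GGH) model is sketched below in Python, with only adhesion and a volume constraint in the effective energy; parameters are illustrative, and the intracellular (Bionetsolver) side that would modulate them per cell is omitted.

```python
# Sketch: Metropolis index-copy attempts of a minimal Cellular Potts (GGH)
# model on a small lattice. Only adhesion and a volume constraint enter the
# energy here; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
L, J, lam, V_target, T = 20, 2.0, 1.0, 40, 1.0
sigma = rng.integers(0, 3, (L, L))          # cell-ID field, 0 = medium

def adhesion_delta(s, i, j, cand):
    """Adhesion-energy change if site (i, j) copies value cand."""
    dE = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = s[(i + di) % L, (j + dj) % L]
        dE += J * ((cand != n) - (s[i, j] != n))
    return dE

for _ in range(10000):
    i, j = rng.integers(L, size=2)
    di, dj = rng.choice([-1, 0, 1], size=2)
    cand = sigma[(i + di) % L, (j + dj) % L]
    if cand == sigma[i, j]:
        continue
    vols = np.bincount(sigma.ravel(), minlength=3)
    dE = adhesion_delta(sigma, i, j, cand)
    for cid, dv in ((sigma[i, j], -1), (cand, +1)):   # volume constraint
        if cid != 0:
            dE += lam * ((vols[cid] + dv - V_target) ** 2
                         - (vols[cid] - V_target) ** 2)
    if dE <= 0 or rng.random() < np.exp(-dE / T):     # Metropolis acceptance
        sigma[i, j] = cand

print(np.bincount(sigma.ravel(), minlength=3))
```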
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous research overcomes this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method, called Clustering-Merging CBR (CM-CBR), which produces a level of predictive performance similar to that of conventional CBR at significantly lower computational cost.
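The cluster-then-retrieve mechanism can be sketched as follows, with an assumed boundary-merging rule (the actual CM-CBR merging criterion may differ): cluster the case base offline, then at query time search the nearest cluster and fold in the runner-up when the query sits near a boundary.

```python
# Sketch: cluster-then-retrieve CBR with boundary merging. Thresholds and
# data are illustrative; the real CM-CBR merging rule may differ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
cases = rng.random((5000, 8))              # case base: 5000 cases, 8 features
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(cases)

def retrieve(query, k=5, merge_margin=1.2):
    d = np.linalg.norm(km.cluster_centers_ - query, axis=1)
    order = np.argsort(d)
    members = km.labels_ == order[0]
    if d[order[1]] < merge_margin * d[order[0]]:  # near a cluster boundary:
        members |= km.labels_ == order[1]         # merge the runner-up cluster
    pool = cases[members]
    nn = NearestNeighbors(n_neighbors=min(k, len(pool))).fit(pool)
    return nn.kneighbors([query])                 # distances, indices in pool

dist, idx = retrieve(rng.random(8))
print(np.round(dist, 3))
```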
[Maxillary swing approach in the management of tumors in the central and lateral cranial base].
Liao, Hua; Hua, Qing-quan; Wu, Zhan-yuan
2006-04-01
A retrospective review of seventeen patients who were operated on through the maxillary swing approach was carried out to assess the efficacy of this approach in the management of tumors of the central and lateral cranial base. From May 2000 to January 2005, 17 patients with primary or recurrent neoplasms involving the central or lateral cranial base underwent surgical resection via the maxillary swing approach. Ten patients were male and seven were female, with an age range of 7 to 58 years and a mean age of 42.6 years. Eight patients had tumors originally involving the lateral cranial base, and nine had tumors originating from the central cranial base. The pathology spectrum was very wide: five patients suffered from chordoma, two had rhabdomyosarcoma, two had squamous cell carcinoma, one had malignant fibrous histiocytoma, one had malignant melanoma, one had esthesioneuroblastoma, one had invasive hypophysoma, two had schwannoma, one had pleomorphic adenoma, and one had angiofibroma. Three patients had received previous surgery, two had previous radiation therapy, and nine received postoperative radiotherapy. Sixteen of the seventeen patients had oncologically complete resection; one had near-total resection. This group of patients was followed up from 10 to 60 months, with a median follow-up time of 28 months. Two patients died 14 and 26 months after surgery, respectively, as a result of local recurrence and metastasis. One patient defaulted follow-up at 12 months after operation, and the other 14 patients were alive at the time of analysis. Of the 12 malignant tumors, the 1- and 2-year survival rates were 91.67% and 72.92%, respectively. The facial wounds of all patients healed primarily, and there was no necrosis of the maxilla, damage to the temporal branch of the facial nerve, lower-lid ectropion, or facial deformity. Epiphora and facial hypoesthesia were detected in all patients. Four patients (23.5%) developed palatal fistula, ten developed serous otitis media (58.8%), and four developed a certain degree of trismus (23.5%). Cerebrospinal fluid leak occurred in two patients; both subsequently healed with conservative management. The maxillary swing approach is a proven method for access to the central and lateral skull base with good exposure and acceptable morbidity. Complications and sequelae associated with this approach include facial scarring, transection of the infraorbital nerve, impaired lacrimal drainage, eustachian tube dysfunction and serous otitis, palatal fistula, trismus, etc. Some procedures should be performed to reduce the incidence and severity of complications in the maxillary swing approach.
'Vague Oviedo': autonomy, culture and the case of previously competent patients.
Pascalev, Assya; Vidalis, Takis
2010-03-01
The paper examines the ethical and legal challenges of making decisions for previously competent patients and the role of advance directives and legal representatives in light of the Oviedo Convention. The paper identifies gaps in the Convention that result in conflicting instructions in cases of a disagreement between the expressed prior wishes of a patient, and the legal representative. The authors also examine the legal and moral status of informally expressed prior wishes of patients unable to consent. The authors argue that positivist legal reasoning is insufficient for a consistent interpretation of the relevant provisions of the Convention and argue that ethical argumentation is needed to provide guidance in such cases. Based on the ethical arguments, the authors propose a way of reconciling the apparent inconsistencies in the Oviedo Convention. They advance a culturally sensitive approach to the application of the Convention at the national level. This approach understands autonomy as a broader, relational consent and emphasizes the social and cultural embeddedness of the individual. Based on their approach, the authors argue that there exists a moral obligation to respect the prior wishes of the patient even in countries without advance directives. Yet it should be left to the national legislations to determine the extent of this obligation and its concrete forms.
Directly solar-pumped iodine laser for beamed power transmission in space
NASA Technical Reports Server (NTRS)
Choi, S. H.; Meador, W. E.; Lee, J. H.
1992-01-01
A new approach to the development of a 50-kW directly solar-pumped iodine laser (DSPIL) system as a space-based power station was made using a confocal unstable resonator (CUR). The CUR-based DSPIL has advantages such as performance enhancement, reduction of total mass, and simplicity, which alleviates the complexities inherent in the previous master oscillator/power amplifier (MOPA) configurations. In this design, a single CUR-based DSPIL with 50-kW output power was defined and compared to the MOPA-based DSPIL. Integration of multiple modules for power requirements of more than 50 kW is physically and structurally a sounder approach than building a single large system. An integrated system of multiple modules can respond to various mission power requirements by combining and aiming the coherent beams at the user's receiver.
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach rests on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration after applying just one perturbation to the deformable mirror (one correction per perturbation); the modes are reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO correction under various random and dynamic aberrations were implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N perturbations of the deformable mirror for each aberration correction.
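The relation the method rests on can be checked numerically; the Python sketch below compares the mean square of the pupil-phase gradients with the second moment of the far-field intensity for random low-order aberrations (polynomial phases standing in for Zernike modes). It illustrates the underlying model only, not the one-perturbation correction loop.

```python
# Sketch: numerically probing the model's premise -- that the far-field
# intensity second moment grows roughly linearly with the mean square of
# the pupil-phase gradients. Polynomial phases stand in for Zernike modes.
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0

def second_moment(phase):
    field = pupil * np.exp(1j * phase)
    I = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    u = np.arange(N) - N / 2
    U, V = np.meshgrid(u, u)
    return ((U**2 + V**2) * I).sum() / I.sum()

rng = np.random.default_rng(7)
for _ in range(5):
    a = rng.standard_normal(3)                # random low-order mode weights
    phase = a[0] * X * Y + a[1] * (X**2 - Y**2) + a[2] * (X**2 + Y**2)
    gy, gx = np.gradient(phase, x, x)
    msg = ((gx**2 + gy**2)[pupil]).mean()     # mean square phase gradient
    print(f"mean-sq gradient {msg:8.3f} -> far-field 2nd moment "
          f"{second_moment(phase):10.1f}")
```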
Der, Bryan S.; Kluwe, Christien; Miklos, Aleksandr E.; Jacak, Ron; Lyskov, Sergey; Gray, Jeffrey J.; Georgiou, George; Ellington, Andrew D.; Kuhlman, Brian
2013-01-01
Reengineering protein surfaces to exhibit high net charge, referred to as “supercharging”, can improve reversibility of unfolding by preventing aggregation of partially unfolded states. Incorporation of charged side chains should be optimized while considering structural and energetic consequences, as numerous mutations and accumulation of like-charges can also destabilize the native state. A previously demonstrated approach deterministically mutates flexible polar residues (amino acids DERKNQ) with the fewest average neighboring atoms per side chain atom (AvNAPSA). Our approach uses Rosetta-based energy calculations to choose the surface mutations. Both protocols are available for use through the ROSIE web server. The automated Rosetta and AvNAPSA approaches for supercharging choose dissimilar mutations, raising an interesting division in surface charging strategy. Rosetta-supercharged variants of GFP (RscG) ranging from −11 to −61 and +7 to +58 were experimentally tested, and for comparison, we re-tested the previously developed AvNAPSA-supercharged variants of GFP (AscG) with +36 and −30 net charge. Mid-charge variants demonstrated ∼3-fold improvement in refolding with retention of stability. However, as we pushed to higher net charges, expression and soluble yield decreased, indicating that net charge or mutational load may be limiting factors. Interestingly, the two different approaches resulted in GFP variants with similar refolding properties. Our results show that there are multiple sets of residues that can be mutated to successfully supercharge a protein, and combining alternative supercharge protocols with experimental testing can be an effective approach for charge-based improvement to refolding. PMID:23741319
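For contrast with the Rosetta protocol, the AvNAPSA ranking itself is simple enough to sketch: average the number of neighboring atoms around each side-chain atom of each polar surface residue and mutate the least-buried first. Random coordinates stand in for a parsed structure; cutoffs and sizes are assumptions.

```python
# Sketch: AvNAPSA ranking -- average neighboring atoms per side-chain atom
# for each polar residue; low scores (most exposed) are mutated first.
# Random coordinates stand in for a parsed PDB structure.
import numpy as np

rng = np.random.default_rng(8)
POLAR = set("DERKNQ")
n_atoms = 400
coords = rng.random((n_atoms, 3)) * 30.0               # fake atom coordinates
residue_of = rng.integers(0, 50, n_atoms)              # atom -> residue index
res_types = rng.choice(list("ACDEFGHIKLMNPQRSTVWY"), 50)
is_sidechain = rng.random(n_atoms) > 0.3

def avnapsa(res, cutoff=10.0):
    """Average number of atoms within cutoff of each side-chain atom."""
    sc = coords[(residue_of == res) & is_sidechain]
    if len(sc) == 0:
        return np.inf
    d = np.linalg.norm(coords[None, :, :] - sc[:, None, :], axis=2)
    return ((d < cutoff).sum(axis=1) - 1).mean()       # exclude self-count

scores = {r: avnapsa(r) for r in range(50) if res_types[r] in POLAR}
for r in sorted(scores, key=scores.get)[:5]:           # most exposed first
    print(f"residue {r:2d} ({res_types[r]}): AvNAPSA = {scores[r]:.1f}")
```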
Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E
2016-08-12
Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby variables with F-ratio values below the threshold can be ignored as not class-distinguishing, which gives the analyst confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while all but one of the nineteen benchmarked false-positive metabolites previously identified were consistently excluded. Copyright © 2016 Elsevier B.V. All rights reserved.
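A minimal sketch of the two key ingredients, on synthetic two-class data: a per-tile one-way ANOVA F-ratio, and a permutation-based null distribution from which the threshold is read. Tile extraction and all GC×GC-TOFMS specifics are omitted.

```python
# Sketch: per-tile F-ratio (between-class over within-class variance) plus a
# permutation-based null threshold. Synthetic signals stand in for tiles.
import numpy as np

rng = np.random.default_rng(9)
n_per_class, n_tiles = 6, 500
repressed = rng.standard_normal((n_per_class, n_tiles))
derepressed = rng.standard_normal((n_per_class, n_tiles))
derepressed[:, :20] += 3.0          # 20 tiles with a real class difference

def f_ratio(a, b):
    grand = np.vstack([a, b]).mean(axis=0)
    between = len(a) * (a.mean(0) - grand)**2 + len(b) * (b.mean(0) - grand)**2
    within = ((a - a.mean(0))**2).sum(0) + ((b - b.mean(0))**2).sum(0)
    dfb, dfw = 1, len(a) + len(b) - 2
    return (between / dfb) / (within / dfw)

F = f_ratio(repressed, derepressed)

# Null distribution: recompute F-ratios under shuffled class labels
pooled = np.vstack([repressed, derepressed])
null = []
for _ in range(200):
    perm = rng.permutation(len(pooled))
    null.append(f_ratio(pooled[perm[:n_per_class]],
                        pooled[perm[n_per_class:]]))
threshold = np.percentile(np.concatenate(null), 99.9)
print("hits:", np.flatnonzero(F > threshold))
```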
NASA Astrophysics Data System (ADS)
Solovyev, Alexander S.; Igashov, Sergey Yu.
2017-12-01
A microscopic approach to the description of radiative capture reactions based on a multiscale algebraic version of the resonating group model is developed. The main idea of the approach is to expand the wave functions of the discrete spectrum and continuum for a nuclear system over different bases of the algebraic version of the resonating group model. These bases differ from each other by the value of the oscillator radius, which plays the role of a scale parameter. This allows us in a unified way to calculate total and partial cross sections (astrophysical S factors) as well as the branching ratio for the radiative capture reaction, to describe phase shifts for the colliding nuclei in the initial channel of the reaction, and at the same time to reproduce breakup thresholds of the final nucleus. The approach is applied to the theoretical study of the mirror 3H(α,γ)7Li and 3He(α,γ)7Be reactions, which are of great interest to nuclear astrophysics. The calculated results are compared with existing experimental data and with our previous calculations in the framework of the single-scale algebraic version of the resonating group model.
Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858
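The combined geometry-plus-orientation seed distance can be sketched as below, assuming a log-Euclidean term for tensor shape and the angle between principal eigenvectors for orientation; the equal weighting is an assumption, since in the paper the metric is learned.

```python
# Sketch: a combined geometry + orientation distance between two diffusion
# tensors -- log-Euclidean term for shape/size plus the sign-invariant angle
# between principal eigenvectors. Weights are illustrative (learned in the
# paper's framework).
import numpy as np
from scipy.linalg import logm

def tensor_distance(D1, D2, w_geom=1.0, w_orient=1.0):
    geom = np.linalg.norm(logm(D1) - logm(D2), 'fro')   # log-Euclidean part
    v1 = np.linalg.eigh(D1)[1][:, -1]                   # principal directions
    v2 = np.linalg.eigh(D2)[1][:, -1]
    orient = np.arccos(np.clip(abs(v1 @ v2), 0, 1))     # sign-invariant angle
    return w_geom * geom + w_orient * orient

D1 = np.diag([1.5, 0.3, 0.3])                           # fiber along x
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])      # 90-degree rotation
D2 = R @ D1 @ R.T                                       # same shape, along y
print("distance:", round(tensor_distance(D1, D2), 3))
```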
MATRIX FACTORIZATION-BASED DATA FUSION FOR GENE FUNCTION PREDICTION IN BAKER’S YEAST AND SLIME MOLD
ŽITNIK, MARINKA; ZUPAN, BLAŽ
2014-01-01
The development of effective methods for the characterization of gene functions that are able to combine diverse data sources in a sound and easily-extendible way is an important goal in computational biology. We have previously developed a general matrix factorization-based data fusion approach for gene function prediction. In this manuscript, we show that this data fusion approach can be applied to gene function prediction and that it can fuse various heterogeneous data sources, such as gene expression profiles, known protein annotations, interaction and literature data. The fusion is achieved by simultaneous matrix tri-factorization that shares matrix factors between sources. We demonstrate the effectiveness of the approach by evaluating its performance on predicting ontological annotations in slime mold D. discoideum and on recognizing proteins of baker’s yeast S. cerevisiae that participate in the ribosome or are located in the cell membrane. Our approach achieves predictive performance comparable to that of the state-of-the-art kernel-based data fusion, but requires fewer data preprocessing steps. PMID:24297565
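A single-relation sketch of non-negative matrix tri-factorization with multiplicative updates follows; in the fusion setting each source would contribute its own middle factor S while sharing G. The update rules follow the standard Ding-et-al. style and the data are synthetic.

```python
# Sketch: non-negative tri-factorization R ~ G S G^T with multiplicative
# updates -- the factor-sharing device behind the fusion approach (each
# source would get its own S while sharing G). Data are synthetic.
import numpy as np

rng = np.random.default_rng(10)
n, k = 60, 4
G_true = rng.random((n, k))
R = G_true @ rng.random((k, k)) @ G_true.T      # synthetic relation matrix

G, S = rng.random((n, k)) + 0.1, rng.random((k, k)) + 0.1
eps = 1e-9
for it in range(300):
    # multiplicative updates keep both factors non-negative
    G *= (R @ G @ S.T + R.T @ G @ S) / (
        G @ (S @ G.T @ G @ S.T + S.T @ G.T @ G @ S) + eps)
    S *= (G.T @ R @ G) / (G.T @ G @ S @ G.T @ G + eps)

err = np.linalg.norm(R - G @ S @ G.T) / np.linalg.norm(R)
print(f"relative reconstruction error: {err:.3f}")
```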
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
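The extrapolation idea can be sketched in a few lines: compute a richness estimate (Chao1 here, as one of the non-parametric options) at increasing subsample sizes, fit a saturating curve, and read off its asymptote as the size-unbiased value. The community model and curve form are assumptions.

```python
# Sketch: a sample-size-unbiased richness estimate -- Chao1 at increasing
# subsample sizes, then a saturating-curve fit whose asymptote is read off.
# The skewed community and the curve form are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
community = rng.zipf(1.7, 200000) % 30000        # skewed species abundances

def chao1(sample):
    _, counts = np.unique(sample, return_counts=True)
    s_obs = len(counts)
    f1, f2 = (counts == 1).sum(), (counts == 2).sum()
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected Chao1

sizes = np.array([1000, 2000, 5000, 10000, 20000, 50000])
est = np.array([chao1(rng.choice(community, n)) for n in sizes])

sat = lambda n, smax, h: smax * n / (h + n)      # saturating fit S(n)
(smax, h), _ = curve_fit(sat, sizes, est, p0=[est[-1], 5000], maxfev=10000)
print("richness estimates by size:", est.astype(int))
print("asymptotic (size-unbiased) estimate:", int(smax))
```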
Lemoine, E; Merceron, D; Sallantin, J; Nguifo, E M
1999-01-01
This paper describes a new approach to problem solving by splitting problem component parts between software and hardware. Our main idea arises from the combination of two previously published works. The first proposed a conceptual environment for concept modelling in which the machine and the human expert interact. The second reported an algorithm based on a reconfigurable hardware system that outperforms any previously published genetic database scanning hardware or algorithms. Here we show how efficient the interaction between the machine and the expert is when the concept modelling is based on a reconfigurable hardware system. Their cooperation is thus achieved at real-time interaction speed. The designed system has been partially applied to the recognition of primate splice junction sites in genetic sequences.
Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.
Akkus, Zeynettin; Galimzianova, Alfiia; Hoogi, Assaf; Rubin, Daniel L; Erickson, Bradley J
2017-08-01
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
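The wait-for-graph core of graph-based deadlock detection is easy to sketch; real tools such as MUST use richer graph semantics to cover wildcard receives and collectives, which this plain point-to-point version ignores.

```python
# Sketch: wait-for-graph deadlock detection -- each blocked rank points at
# the rank(s) it waits on; a cycle in that graph is reported as a deadlock.
# Only plain point-to-point waits are modeled here.
from collections import defaultdict

def find_cycle(waits):
    """waits: dict rank -> set of ranks it is blocked on; returns a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color, stack = defaultdict(int), []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in waits.get(u, ()):
            if color[v] == GRAY:                # back edge -> deadlock cycle
                return stack[stack.index(v):] + [v]
            if color[v] == WHITE:
                cycle = dfs(v)
                if cycle:
                    return cycle
        color[u] = BLACK
        stack.pop()
        return None

    for r in list(waits):
        if color[r] == WHITE:
            cycle = dfs(r)
            if cycle:
                return cycle
    return None

# rank 0 waits on 1 (Recv), 1 on 2, 2 on 0: a classic send/recv cycle
print("deadlock:", find_cycle({0: {1}, 1: {2}, 2: {0}}))
```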
Fast Object Motion Estimation Based on Dynamic Stixels.
Morales, Néstor; Morell, Antonio; Toledo, Jonay; Acosta, Leopoldo
2016-07-28
The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction.
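The first-level matching can be sketched as a bipartite assignment over a stixel-attribute cost matrix, solved with the Hungarian method; the attributes, scaling, and gating threshold below are illustrative assumptions.

```python
# Sketch: first-level stixel matching as a bipartite assignment over a cost
# matrix of (column, depth, height) differences, solved with the Hungarian
# method; a maximum-cost gate rejects implausible pairs. Data are synthetic.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(12)
prev = rng.random((30, 3)) * [640, 50, 3]     # stixels at t-1: (col, depth, h)
curr = prev + rng.normal(0, [5, 1, 0.1], prev.shape)  # slightly moved at t

scale = np.array([640, 50, 3.0])
cost = np.linalg.norm((prev[:, None, :] - curr[None, :, :]) / scale, axis=2)
rows, cols = linear_sum_assignment(cost)

MAX_COST = 0.1                                # gate for implausible pairs
matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < MAX_COST]
print(f"matched {len(matches)} of {len(prev)} stixels")
```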
Restoring Proprioception via a Cortical Prosthesis: A Novel Learning Based Approach
2016-10-01
microstimulation from the neural recordings used for BMI control. This allows us to move to a much more efficient paradigm with continuous brain "read out" for... microstimulation; movement control 3. ACCOMPLISHMENTS: What were the major goals of the project? Specific Aim 1: Determine whether animals... animals. However, that signal will correlate on a millisecond timescale with visual feedback of the virtual limb. Based on the previous work (Dadarlat
ERIC Educational Resources Information Center
Nyasulu, Frazier; Barlag, Rebecca
2010-01-01
The reaction kinetics of the iodide-catalyzed decomposition of H[subscript 2]O[subscript 2] using the integrated-rate method is described. The method is based on the measurement of the total gas pressure using a datalogger and pressure sensor. This is a modification of a previously reported experiment based on the initial-rate approach. (Contains 2…
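For a first-order decomposition monitored by evolved O2 pressure, the integrated rate law gives ln(P_inf - P(t)) linear in t with slope -k, so k falls out of a straight-line fit. The sketch below uses made-up pressure readings and an assumed plateau pressure P_inf; it is not the published experiment's data.

```python
import numpy as np

t = np.array([0, 60, 120, 180, 240, 300.0])     # time, s
P = np.array([0.0, 3.9, 7.0, 9.4, 11.3, 12.8])  # evolved O2 pressure, kPa
P_inf = 18.0                                    # plateau pressure (assumed)

# Integrated first-order law: ln(P_inf - P) = ln(P_inf) - k * t
slope, intercept = np.polyfit(t, np.log(P_inf - P), 1)
print(f"k = {-slope:.2e} s^-1")                 # ~4.1e-03 s^-1 for these data
```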
Data Warehouse Design from HL7 Clinical Document Architecture Schema.
Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L
2015-01-01
This paper proposes a semi-automatic approach to extract clinical information structured in a HL7 Clinical Document Architecture (CDA) and transform it in a data warehouse dimensional model schema. It is based on a conceptual framework published in a previous work that maps the dimensional model primitives with CDA elements. Its feasibility is demonstrated providing a case study based on the analysis of vital signs gathered during laboratory tests.
Assessing the Effectiveness of "Wise Guys": A Mixed-Methods Approach
ERIC Educational Resources Information Center
Herrman, Judith W.; Gordon, Mellissa; Rahmer, Brian; Moore, Christopher C.; Habermann, Barbara; Haigh, Katherine M.
2017-01-01
Previous research raised questions on the validity of survey studies with the teen population. As one response, our team implemented a mixed-methods study to evaluate an evidence-based, interactive curriculum, "Wise Guys," which is designed to promote healthy relationships and sexual behavior in young men ages 4-17. The current study…
MAP-Motivated Carrier Synchronization of GMSK Based on the Laurent AMP Representation
NASA Technical Reports Server (NTRS)
Simon, M. K.
1998-01-01
Using the MAP estimation approach to carrier synchronization of digital modulations containing ISI together with a two pulse stream AMP representation of GMSK, it is possible to obtain an optimum closed loop configuration in the same manner as has been previously proposed for other conventional modulations with ISI.
The Role of Reading Comprehension in Large-Scale Subject-Matter Assessments
ERIC Educational Resources Information Center
Zhang, Ting
2013-01-01
This study was designed with the overall goal of understanding how difficulties in reading comprehension are associated with early adolescents' performance in large-scale assessments in subject domains including science and civic-related social studies. The current study extended previous research by taking a cognition-centered approach based on…
Neural Networks for the Beginner.
ERIC Educational Resources Information Center
Snyder, Robin M.
Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…
ERIC Educational Resources Information Center
Crisp, Victoria; Novakovic, Nadezda
2009-01-01
The consistency of assessment demands is important to validity. This research investigated the comparability of the demands of college-assessed units within a vocationally related qualification, drawing on methodological approaches that have previously been used to compare assessments. Assessment materials from five colleges were obtained. After…
Innovation Labs: A Professional Approach to Honors
ERIC Educational Resources Information Center
Bormans, Ron
2015-01-01
Honors education at Rotterdam University of Applied Sciences (RUAS) focuses on students who are willing to invest more in their study than the average student. Selection criteria are the students' willingness to develop beyond what is offered in the regular curriculum and are not based on previous accomplishments. The additional challenge of the…
Preventing Tobacco and Alcohol Use among Elementary School Students through Life Skills Training.
ERIC Educational Resources Information Center
Botvin, Gilbert J.; Griffin, Kenneth W.; Paul, Elizabeth; Macaulay, Araxi P.
2003-01-01
Study examined effectiveness of a substance abuse prevention program in preventing tobacco and alcohol use among elementary school students in grades 3 through 6. Program teaches social resistance skills and general personal and social competence skills. Findings indicate a school-based substance abuse prevention approach previously found to be…
ERIC Educational Resources Information Center
Hopfer, S.; Davis, D.; Kam, J. A.; Shin, Y.; Elek, E.; Hecht, M. L.
2010-01-01
This article takes a systematic approach to reviewing substance use prevention programs introduced in elementary school (K-6th grade). Previous studies evaluating such programs among elementary school students showed mixed effects on subsequent substance use and related psychosocial factors. Thirty published evaluation studies of 24 elementary…
Ontology-Based Annotation of Learning Object Content
ERIC Educational Resources Information Center
Gasevic, Dragan; Jovanovic, Jelena; Devedzic, Vladan
2007-01-01
The paper proposes a framework for building ontology-aware learning object (LO) content. Previously ontologies were exclusively employed for enriching LOs' metadata. Although such an approach is useful, as it improves retrieval of relevant LOs from LO repositories, it does not enable one to reuse components of a LO, nor to incorporate an explicit…
An E-Assessment Approach for Evaluation in Engineering Overcrowded Groups
ERIC Educational Resources Information Center
Mora, M. C.; Sancho-Bru, J. L.; Iserte, J. L.; Sanchez, F. T.
2012-01-01
The construction of the European Higher Education Area has been an adaptation challenge for Spanish universities. New methodologies require a more active role on the students' part and come into conflict with the previous educational model characterised by a high student/professor ratio, a lecture-based teaching methodology and a summative…
Migration, Remittances and Educational Outcomes: The Case of Haiti
ERIC Educational Resources Information Center
Bredl, Sebastian
2011-01-01
This paper empirically investigates how migration and the receipt of remittances affect educational outcomes in Haiti. Based on a theoretical approach it tries to disentangle the effects of both phenomena that have mostly been jointly modeled in previous literature. The results suggest that remittances play an important role for poor households in…
Flexible, Carbon-Based Ohmic Contacts for Organic Transistors
NASA Technical Reports Server (NTRS)
Brandon, Erik
2005-01-01
A low-temperature process for fabricating flexible, ohmic contacts for use in organic thin-film transistors (OTFTs) has been developed. Typical drain-source contact materials used previously for OTFTs include (1) vacuum-deposited noble-metal contacts and (2) solution-deposited intrinsically conducting molecular or polymeric contacts. Both of these approaches, however, have serious drawbacks.
Mapping of Supply Chain Learning: A Framework for SMEs
ERIC Educational Resources Information Center
Thakkar, Jitesh; Kanda, Arun; Deshmukh, S. G.
2011-01-01
Purpose: The aim of this paper is to propose a mapping framework for evaluating supply chain learning potential in the context of small- to medium-sized enterprises (SMEs). Design/methodology/approach: The extracts of recently completed case-based research for ten manufacturing SME units and facts reported in the previous research are utilized…
Creativity in Men and Women: Threat, Other-Interest, and Self-Assessment
ERIC Educational Resources Information Center
Kemmelmeier, Markus; Walton, Andre P.
2016-01-01
Previous research into gender and creativity has provided little evidence for consistent differences between men and women in creative performance. This research revisits this topic by proposing a person × situation approach, arguing that gender differences in creative performance only occur in certain contexts, but not others. Based on the…
Meadow management and treatment options [chapter 8
Jeanne C. Chambers; Jerry R. Miller
2011-01-01
Restoration and management objectives and approaches are most effective when based on an understanding of ecosystem processes and the long- and short-term causes of disturbance (Wohl and others 2005). As detailed in previous chapters, several factors are critical in developing effective management strategies for streams and their associated meadow ecosystems in the...
Introducing Forum Theatre to Elicit and Advocate Children's Views
ERIC Educational Resources Information Center
Hammond, Nick
2013-01-01
Eliciting and advocating the voice of the child remains at the heart of international political agenda and also remains a central role for educational psychologists (EPs). Previous research indicates that EPs tend to use language-based methods for eliciting and advocating views of children. However, these approaches are often limited. Taking a…
Aesthetic Pursuits: Windows, Frames, Words, Images. Part I
ERIC Educational Resources Information Center
Burke, Ken
2005-01-01
In his previous articles (1997, 1998, 1999), the author developed a theoretical and applied approach to analyzing interactions between the uses of constructive design elements in a wide range of images and the anticipated responses by their viewers. This Image Presentation Theory--IPT--is based in the traditional cinematic concepts of "window" and…
Patterns of functional vision loss in glaucoma determined with archetypal analysis
Elze, Tobias; Pasquale, Louis R.; Shen, Lucy Q.; Chen, Teresa C.; Wiggs, Janey L.; Bex, Peter J.
2015-01-01
Glaucoma is an optic neuropathy accompanied by vision loss which can be mapped by visual field (VF) testing revealing characteristic patterns related to the retinal nerve fibre layer anatomy. While detailed knowledge about these patterns is important to understand the anatomic and genetic aspects of glaucoma, current classification schemes are typically predominantly derived qualitatively. Here, we classify glaucomatous vision loss quantitatively by statistically learning prototypical patterns on the convex hull of the data space. In contrast to component-based approaches, this method emphasizes distinct aspects of the data and provides patterns that are easier to interpret for clinicians. Based on 13 231 reliable Humphrey VFs from a large clinical glaucoma practice, we identify an optimal solution with 17 glaucomatous vision loss prototypes which fit well with previously described qualitative patterns from a large clinical study. We illustrate relations of our patterns to retinal structure by a previously developed mathematical model. In contrast to the qualitative clinical approaches, our results can serve as a framework to quantify the various subtypes of glaucomatous visual field loss. PMID:25505132
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
In this paper, a modeling method based on data reduction is investigated that includes pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules, including CO2. This approach represents an extension of our previously developed lidar modeling framework and allows effective on- and offline wavelength optimizations and weighting function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and the entire Earth coverage, which was achieved owing to the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to the mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including the temperature and water vapor interferences and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction of total errors in the retrieved XCO2 values.
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, with consideration of object motion. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Hernández González, Jorge Enrique; Hernández Alvarez, Lilian; Pascutti, Pedro Geraldo; Valiente, Pedro A
2017-09-01
Falcipain-2 (FP-2) is a major hemoglobinase of Plasmodium falciparum, considered an important drug target for the development of antimalarials. A previous study reported a novel series of 20 reversible peptide-based inhibitors of FP-2. However, the lack of tridimensional structures of the complexes hinders further optimization strategies to enhance the inhibitory activity of the compounds. Here we report the prediction of the binding modes of the aforementioned inhibitors to FP-2. A computational approach combining previous knowledge on the determinants of binding to the enzyme, docking, and postdocking refinement steps, is employed. The latter steps comprise molecular dynamics simulations and free energy calculations. Remarkably, this approach leads to the identification of near-native ligand conformations when applied to a validation set of protein-ligand structures. Overall, we proposed substrate-like binding modes of the studied compounds fulfilling the structural requirements for FP-2 binding and yielding free energy values that correlated well with the experimental data. Proteins 2017; 85:1666-1683. © 2017 Wiley Periodicals, Inc.
Potential standards support for activity-based GeoINT
NASA Astrophysics Data System (ADS)
Antonisse, Jim
2012-06-01
The Motion Imagery Standards Board (MISB) is engaged in multiple initiatives that may provide support for Activity-Based GeoINT (ABG). This paper describes a suite of approaches based on previous MISB work on a standards-based architecture for tracking. It focuses on ABG in the context of standardized tracker results, and shows how the MISB tracker formulation can formalize important components of the ABG problem. The paper proposes a grammar-based formalism for the reporting of activities within a stream of FMV or wide-area surveillance data. Such a grammar would potentially provide an extensible descriptive language for ABG across the community.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
An iterative approach for the optimization of pavement maintenance management at the network level.
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.
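To make the comparison concrete, here is a toy example of the simplest method class mentioned above, selection based on ranking: order sections by a condition index and treat them greedily until the budget runs out. Section names, costs, and the index are invented.

```python
def allocate(sections, budget):
    """sections: list of (name, condition_index, cost); lower condition = worse."""
    ranked = sorted(sections, key=lambda s: s[1])   # worst sections first
    plan, spent = [], 0.0
    for name, condition, cost in ranked:
        if spent + cost <= budget:                  # fund it if budget allows
            plan.append(name)
            spent += cost
    return plan, spent

sections = [("A1", 42, 120e3), ("B7", 65, 80e3), ("C3", 30, 200e3)]
print(allocate(sections, budget=300e3))             # (['C3', 'B7'], 280000.0)
```

Ranking of this kind ignores interactions between sections and future deterioration, which is exactly why the holistic and iterative approaches discussed above can find better network-level solutions.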
Risk analysis theory applied to fishing operations: A new approach on the decision-making problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunha, J.C.S.
1994-12-31
In the past the decisions concerning whether to continue or interrupt a fishing operation were based primarily on the operator's previous experience. This procedure often led to wrong decisions and unnecessary loss of money and time. This paper describes a decision-making method based on risk analysis theory and previous operation results from a field under study. The method leads to more accurate decisions on a daily basis, allowing the operator to verify each day of the operation whether the decision being carried out is the one with the highest probability of leading to the best economical result. An example of the method application is provided at the end of the paper.
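In expected-monetary-value terms, the daily decision rule can be illustrated as follows: continue the fishing operation for one more day only while its expected value beats stopping now. The probabilities and costs below are invented, standing in for estimates derived from previous operations in the field under study.

```python
def continue_one_more_day(p_success, gain_success, day_cost, stop_value):
    """True if one more day has higher expected value than stopping now."""
    ev_continue = p_success * gain_success - day_cost
    return ev_continue > stop_value

# A given day of an operation: success odds estimated from past wells in this field.
print(continue_one_more_day(p_success=0.25, gain_success=400e3,
                            day_cost=60e3, stop_value=20e3))  # True
```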
NASA Astrophysics Data System (ADS)
Demezhko, Dmitry; Gornostaeva, Anastasia; Majorowicz, Jacek; Šafanda, Jan
2018-01-01
Using a previously published temperature log of the 2363-m-deep borehole Hunt well (Alberta, Canada) and the results of its previous interpretation, new reconstructions of ground surface temperature and surface heat flux histories for the last 30 ka have been obtained. Two ways to adjust the timescale of geothermal reconstructions are discussed: the traditional method based on a priori data on the thermal diffusivity value, and an alternative one involving orbital tuning of the surface heat flux and the Earth's insolation changes. It is shown that the second approach provides better agreement between geothermal reconstructions and proxy evidence of the deglaciation chronology in the studied region.
Evolving rule-based systems in two medical domains using genetic programming.
Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan; Axer, Hubertus; Bjerregaard, Beth; von Keyserlingk, Diedrich Graf
2004-11-01
To demonstrate and compare the application of different genetic programming (GP) based intelligent methodologies for the construction of rule-based systems in two medical domains: the diagnosis of aphasia's subtypes and the classification of pap-smear examinations. Past data representing (a) successful diagnosis of aphasia's subtypes from collaborating medical experts through a free interview per patient, and (b) correctly classified smears (images of cells) by cyto-technologists, previously stained using the Papanicolaou method. Initially a hybrid approach is proposed, which combines standard genetic programming and heuristic hierarchical crisp rule-base construction. Then, genetic programming for the production of crisp rule based systems is attempted. Finally, another hybrid intelligent model is composed by a grammar driven genetic programming system for the generation of fuzzy rule-based systems. Results denote the effectiveness of the proposed systems, while they are also compared for their efficiency, accuracy and comprehensibility, to those of an inductive machine learning approach as well as to those of a standard genetic programming symbolic expression approach. The proposed GP-based intelligent methodologies are able to produce accurate and comprehensible results for medical experts performing competitive to other intelligent approaches. The aim of the authors was the production of accurate but also sensible decision rules that could potentially help medical doctors to extract conclusions, even at the expense of a higher classification score achievement.
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Exploring Mouse Protein Function via Multiple Approaches
Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315
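The "sequential combination" can be pictured as a fallback chain: use the similarity-based prediction when a homologue exists, otherwise fall through to the interaction-based and finally the composition-based predictor. The predictors below are toy callables, not the study's actual models.

```python
def predict_function(protein, by_similarity, by_interaction, by_composition):
    """Return the first non-abstaining prediction; None means 'abstain'."""
    for predictor in (by_similarity, by_interaction, by_composition):
        result = predictor(protein)
        if result is not None:
            return result
    return "unknown"

# Toy predictors: the similarity step abstains for a protein with no homologue.
sim = lambda p: "kinase" if p != "orphan" else None
inter = lambda p: None
comp = lambda p: "transporter"
print(predict_function("orphan", sim, inter, comp))  # transporter
```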
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
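The underlying mapping in any histogram equalization of features is the same: push each test value through the test CDF and then through the inverse reference CDF. The sketch below uses the raw empirical CDF where the method above substitutes a Bayesian estimate; data are synthetic.

```python
import numpy as np

def equalize(test, reference):
    """Map test features so their distribution matches the reference."""
    ranks = np.searchsorted(np.sort(test), test, side="right") / len(test)
    return np.quantile(reference, np.clip(ranks, 0.0, 1.0))  # inverse ref. CDF

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # features from clean training data
test = rng.normal(2.0, 3.0, 500)           # noise-shifted test features
print(equalize(test, reference).mean())    # ~0: pulled back onto the reference
```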
A Semantic Approach with Decision Support for Safety Service in Smart Home Management
Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli
2016-01-01
Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. However, the functions of SHs, particularly safety services, are seldom discussed in previous studies. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context-awareness and reasoning approach for risk recognition in SHs that enables proper decision support for flexible safety service provision. The framework of an SH based on a wireless sensor network is described from the perspective of neighbourhood management. This approach is based on the integration of semantic knowledge in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for an SH and relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, time response and extendibility of the approach. The evaluation of our approach shows that it is more effective in daily risk event recognition. The decisions for service provision are shown to be accurate. PMID:27527170
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both temporal and spectral analysis, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of the MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different structures of classifiers. They are evaluated by comparing with baseline methods using sparse representation of features or without feature selection. The statistical analysis, by applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated by using the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performances, with significant reduction in the number of features that need to be computed.
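One wrapper evaluation step can be sketched as scoring a candidate feature subset (an individual in the evolutionary search) by two objectives: the Cohen kappa of a classifier trained on those features, and the subset size. The data here are synthetic stand-ins for MRA-derived EEG features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=60, n_informative=8,
                           random_state=0)

def objectives(mask):
    """(kappa to maximize, feature count to minimize) for a boolean mask."""
    pred = cross_val_predict(SVC(), X[:, mask], y, cv=5)
    return cohen_kappa_score(y, pred), int(mask.sum())

mask = np.zeros(60, dtype=bool)
mask[:10] = True                    # one candidate subset of 10 features
print(objectives(mask))
```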
A Semantic Approach with Decision Support for Safety Service in Smart Home Management.
Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli
2016-08-03
Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. However, the functions of SHs, particularly safety services, are seldom discussed in previous studies. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context-awareness and reasoning approach for risk recognition in SHs that enables proper decision support for flexible safety service provision. The framework of an SH based on a wireless sensor network is described from the perspective of neighbourhood management. This approach is based on the integration of semantic knowledge in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for an SH and relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, time response and extendibility of the approach. The evaluation of our approach shows that it is more effective in daily risk event recognition. The decisions for service provision are shown to be accurate.
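A minimal sketch of rule-based risk recognition over context readings, in the spirit of the reasoning method described above: each rule maps a context pattern to a recognized risk and a service action. Sensor names, thresholds, and actions are all invented for illustration.

```python
def assess(context):
    """Return (risk, action) pairs for every rule whose condition fires."""
    rules = [
        (lambda c: c["gas_ppm"] > 300,
         ("gas_leak", "close valve and alert resident")),
        (lambda c: c["stove_on"] and not c["presence"],
         ("unattended_stove", "notify emergency contact")),
    ]
    return [decision for condition, decision in rules if condition(context)]

print(assess({"gas_ppm": 450, "stove_on": True, "presence": False}))
```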
Observability of nonlinear dynamics: normalized results and a time-series approach.
Aguirre, Luis A; Bastos, Saulo B; Alves, Marcela A; Letellier, Christophe
2008-03-01
This paper investigates the observability of nonlinear dynamical systems. Two difficulties associated with previous studies are dealt with. First, a normalized degree of observability is defined. This permits the comparison of different systems, which was not generally possible before. Second, a time-series approach is proposed, based on omnidirectional nonlinear correlation functions, to rank a set of time series of a system in terms of their potential use to reconstruct the original dynamics without requiring knowledge of the system equations. The two approaches proposed in this paper and a former method were applied to five benchmark systems and an overall agreement of over 92% was found.
Learning polynomial feedforward neural networks by genetic programming and backpropagation.
Nikolaev, N Y; Iba, H
2003-01-01
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjustment of the best discovered network weights by an especially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNN which outperform considerably some previous constructive polynomial network algorithms on processing benchmark time series.
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.
Interactively Open Autonomy Unifies Two Approaches to Function
NASA Astrophysics Data System (ADS)
Collier, John
2004-08-01
Functionality is essential to any form of anticipation beyond simple directedness at an end. In the literature on function in biology, there are two distinct approaches. One, the etiological view, places the origin of function in selection, while the other, the organizational view, individuates function by organizational role. Both approaches have well-known advantages and disadvantages. I propose a reconciliation of the two approaches, based in an interactivist approach to the individuation and stability of organisms. The approach was suggested by Kant in the Critique of Judgment, but since it requires, on his account, the identification of a new form of causation, it has not been accessible by analytical techniques. I proceed by construction of the required concept to fit certain design requirements. This construction builds on concepts introduced in my previous four talks to these meetings.
A Neuro-Fuzzy Approach in the Classification of Students' Academic Performance
2013-01-01
Classifying the student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. The purpose of this paper is to present a neuro-fuzzy approach for classifying students into different groups. The neuro-fuzzy classifier used previous exam results and other related factors as input variables and labeled students based on their expected academic performance. The results showed that the proposed approach achieved a high accuracy. The results were also compared with those obtained from other well-known classification approaches, including support vector machine, Naive Bayes, neural network, and decision tree approaches. The comparative analysis indicated that the neuro-fuzzy approach performed better than the others. It is expected that this work may be used to support student admission procedures and to strengthen the services of educational institutions. PMID:24302928
A neuro-fuzzy approach in the classification of students' academic performance.
Do, Quang Hung; Chen, Jeng-Fung
2013-01-01
Classifying the student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. The purpose of this paper is to present a neuro-fuzzy approach for classifying students into different groups. The neuro-fuzzy classifier used previous exam results and other related factors as input variables and labeled students based on their expected academic performance. The results showed that the proposed approach achieved a high accuracy. The results were also compared with those obtained from other well-known classification approaches, including support vector machine, Naive Bayes, neural network, and decision tree approaches. The comparative analysis indicated that the neuro-fuzzy approach performed better than the others. It is expected that this work may be used to support student admission procedures and to strengthen the services of educational institutions.
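The comparison baselines named above are all standard and easy to reproduce; the neuro-fuzzy classifier itself is not part of scikit-learn, so the sketch below only shows the baseline side, on synthetic stand-ins for "previous exam results and other related factors".

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic student records: 6 input factors, 3 performance groups.
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)

for name, clf in [("SVM", SVC()), ("NaiveBayes", GaussianNB()),
                  ("NeuralNet", MLPClassifier(max_iter=2000, random_state=0)),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```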
Zhao, Qiang; Lv, Qin; Wang, Hailin
2015-08-15
We previously reported a fluorescence anisotropy (FA) approach for small molecules using tetramethylrhodamine (TMR)-labeled aptamers. It relies on the target-binding-induced change of the intramolecular interaction between TMR and guanine (G) bases. TMR-labeling sites are crucial for this approach. Only terminal ends and thymine (T) bases could be tested for TMR labeling in our previous work, possibly limiting the analysis of different targets with this FA strategy. Here, taking the analysis of adenosine triphosphate (ATP) as an example, we demonstrated successful conjugation of TMR to other bases of the aptamer, adenine (A) or cytosine (C), and achieved a full mapping of the various labeling sites of the aptamer. We successfully constructed aptamer FA sensors for ATP. We conjugated a single TMR to adenine (A), cytosine (C), or thymine (T) bases or the terminals of a 25-mer aptamer against ATP and tested the FA responses of 14 TMR-labeled aptamers to ATP. The aptamers having TMR labeled on the 16th base C or the 23rd base A were screened out and exhibited significant FA-decreasing or FA-increasing responses upon ATP binding, respectively. These two favorable TMR-labeled aptamers enabled direct FA sensing of ATP with a detection limit of 1 µM and the analysis of ATP in diluted serum. The comprehensive screening of various TMR labeling sites of aptamers facilitates the successful construction of FA sensors using TMR-labeled aptamers. It will expand the application of the TMR-G interaction-based aptamer FA strategy to a variety of targets. Copyright © 2015 Elsevier B.V. All rights reserved.
Chen, Liang-Hsuan; Hsueh, Chan-Ching
2007-06-01
Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
McCoy, Allison B; Wright, Adam; Rogith, Deevakar; Fathiamini, Safa; Ottenbacher, Allison J; Sittig, Dean F
2014-04-01
Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements. We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review. We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of accurate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing links made by those physicians identified as having appropriate links to a previously manually validated subset of problem-medication pairs. Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric that predicted the percentage of appropriate links greater than or equal to 95%. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases. A reputation metric may be a valuable measure for identifying high quality clinician-entered, crowdsourced data. Copyright © 2013 Elsevier Inc. All rights reserved.
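The reputation metric itself reduces to a supervised model from clinician-level measures to the probability that at least 95% of a clinician's links are appropriate. The sketch below shows the shape of that computation with invented features and labels; the study's actual measures and thresholding differ in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-clinician measures, e.g. [links asserted, years of e-prescribing]:
X = np.array([[500, 10], [30, 1], [1200, 15], [80, 3], [950, 8]])
y = np.array([1, 0, 1, 0, 1])  # 1 = >=95% of reviewed links were appropriate

model = LogisticRegression().fit(X, y)
trusted = model.predict_proba(X)[:, 1] >= 0.5   # reputation cutoff (assumed)
print(trusted)  # crowdsourced links from these clinicians enter the knowledge base
```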
D:L-AMINO Acids and the Turnover of Microbial Biomass
NASA Astrophysics Data System (ADS)
Lomstein, B. A.; Braun, S.; Mhatre, S. S.; Jørgensen, B. B.
2015-12-01
Decades of ocean drilling have demonstrated widespread microbial life in deep sub-seafloor sediment, and surprisingly high microbial cell numbers. Despite the ubiquity of life in the deep biosphere, the large community sizes and the low energy fluxes in this vast buried ecosystem are still poorly understood. It is not known whether organisms of the deep biosphere are specifically adapted to extremely low energy fluxes or whether most of the observed cells are in a maintenance state. Recently we developed and applied a new culture-independent approach - the D:L-amino acid model - to quantify the turnover times of living microbial biomass, microbial necromass and mean metabolic rates. This approach is based on the built-in molecular clock in amino acids, which very slowly undergo chemical racemization until they reach an even mixture of L- and D-forms, unless microorganisms spend energy to keep them in the L-form that dominates in living organisms. The approach combines sensitive analyses of amino acids and the unique bacterial endospore marker (dipicolinic acid) with the racemization dynamics of stereo-isomeric amino acids. Based on a heating experiment, we recently reported kinetic parameters for racemization of aspartic acid, glutamic acid, serine and alanine in bulk sediment from Aarhus Bay, Denmark. The obtained racemization rate constants were faster than those of free amino acids, which we had previously applied in Holocene sediment from Aarhus Bay and in up to 10 million year old sediment from ODP Leg 201. Another important input parameter for the D:L-amino acid model is the cellular carbon content, which has recently been suggested to be lower than previously thought. In recognition of these new findings, previously published data based on the D:L-amino acid model were recalculated and will be presented together with new data from an Arctic Holocene setting with constant sub-zero temperatures.
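The molecular clock in the model is ordinary reversible first-order kinetics: with equal forward and backward rate constants k, ln((1 + D/L)/(1 - D/L)) grows as 2kt, so a measured D:L ratio converts directly to an age or turnover time. The numbers below are illustrative, not the reported kinetic parameters.

```python
import math

def racemization_age(d_over_l, k_per_year):
    """Years implied by a D:L ratio under reversible first-order racemization."""
    return math.log((1 + d_over_l) / (1 - d_over_l)) / (2 * k_per_year)

# A D:L of 0.05 with an (assumed) in situ rate constant of 1e-5 per year:
print(f"{racemization_age(d_over_l=0.05, k_per_year=1e-5):,.0f} years")  # ~5,000
```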
Closing the loop: from paper to protein annotation using supervised Gene Ontology classification.
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2014-01-01
Gene function curation of the literature with Gene Ontology (GO) concepts is one particularly time-consuming task in genomics, and help from bioinformatics is much needed to keep up with the flow of publications. In 2004, the first BioCreative challenge already designed a task of automatic GO concept assignment from a full text. At that time, results were judged far from reaching the performance required by real curation workflows. In particular, supervised approaches produced the most disappointing results because of a lack of training data. Ten years later, the available curation data have massively grown. In 2013, the BioCreative IV GO task revisited the automatic GO assignment task. For this issue, we investigated the power of our supervised classifier, GOCat. GOCat computes similarities between an input text and already curated instances contained in a knowledge base to infer GO concepts. Subtask A consisted in selecting GO evidence sentences for a relevant gene in a full text. For this, we designed a state-of-the-art supervised statistical approach, using a naïve Bayes classifier and the official training set, and obtained fair results. Subtask B consisted in predicting GO concepts from the previous output. For this, we applied GOCat and reached leading results, up to 65% for hierarchical recall in the top 20 outputted concepts. Contrary to previous competitions, machine learning has this time outperformed standard dictionary-based approaches. Thanks to BioCreative IV, we were able to design a complete workflow for curation: given a gene name and a full text, this system is able to select evidence sentences for curation and to deliver highly relevant GO concepts. Observed performances are sufficient for use in a real semiautomatic curation workflow. GOCat is available at http://eagl.unige.ch/GOCat/. http://eagl.unige.ch/GOCat4FT/. © The Author(s) 2014. Published by Oxford University Press.
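GOCat's core move, comparing an input text against already curated instances and transferring their GO terms, can be pictured with a generic TF-IDF nearest-neighbour stand-in (this is not GOCat's actual engine, and the curated instances below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated = [("kinase activity", "phosphorylates serine residues in vitro"),
           ("DNA repair", "resolves double-strand breaks after irradiation"),
           ("ion transport", "conducts potassium across the membrane")]

def infer_go(text, k=2):
    """Transfer GO terms from the k most similar curated instances."""
    docs = [description for _, description in curated] + [text]
    tfidf = TfidfVectorizer().fit_transform(docs)
    sims = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    return [curated[i][0] for i in sims.argsort()[::-1][:k]]

print(infer_go("the mutant fails to repair breaks in its DNA"))
```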
Ellman, Matthew S; Fortin, Auguste H
2012-06-01
Innovative approaches are needed to teach medical students effective and compassionate communication with seriously ill patients. We describe two such educational experiences in the Yale Medical School curriculum for third-year medical students: 1) Communicating Difficult News Workshop and 2) Ward-Based End-of-Life Care Assignment. These two programs address educational needs to teach important clinical communication and assessment skills to medical students that previously were not consistently or explicitly addressed in the curriculum. The two learning programs share a number of educational approaches driven by the learning objectives, the students' development, and clinical realities. Common educational features include: experiential learning, the Biopsychosocial Model, patient-centered communication, integration into clinical clerkships, structured skill-based learning, self-reflection, and self-care. These shared features - as well as some differences - are explored in this paper in order to illustrate key issues in designing and implementing medical student education in these areas.
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
NASA Astrophysics Data System (ADS)
Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.
2017-02-01
We present a three-step approach based on the commercial VIAPIX® module for road traffic sign recognition and identification. First, all objects in a scene having characteristics of traffic signs are detected. This is followed by a first-level recognition based on correlation, which consists in comparing each detected object with a set of reference images from a database. Finally, a second level of identification allows us to confirm or correct the previous identification. In this study, we perform a correlation-based analysis by combining and adapting the Vander Lugt correlator with the nonlinear joint transform correlator (JTC). Of particular significance, this approach permits a reliable decision on road traffic sign identification. We further discuss a robust scheme allowing us to track a detected road traffic sign in a video sequence for the purpose of increasing the decision performance of our system. This approach can have broad practical applications in the maintenance and rehabilitation of transportation infrastructure, or for driver assistance.
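The first-level recognition reduces to picking the reference whose correlation with the detected patch peaks highest. The sketch below does this with plain FFT-based correlation on zero-mean images, standing in for the optical Vander Lugt / JTC correlators; the images are random stand-ins.

```python
import numpy as np

def best_match(patch, references):
    """Index of the reference with the strongest correlation peak."""
    p = patch - patch.mean()
    scores = []
    for ref in references:
        r = ref - ref.mean()
        corr = np.fft.ifft2(np.fft.fft2(p) * np.conj(np.fft.fft2(r)))
        scores.append(np.abs(corr).max())   # peak height, shift-invariant
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
refs = [rng.random((32, 32)) for _ in range(3)]
patch = np.roll(refs[2], shift=(3, 5), axis=(0, 1))  # shifted copy of ref 2
print(best_match(patch, refs))  # 2
```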
Intolerance for approach of ambiguity in social anxiety disorder.
Kuckertz, Jennie M; Strege, Marlene V; Amir, Nader
2017-06-01
Previous research has utilised the approach-avoidance task (AAT) to measure approach and avoidance action tendencies in socially anxious individuals. "Neutral" social stimuli may be perceived as ambiguous and hence threatening to socially anxious individuals, however it is unclear whether this results in difficulty approaching ambiguous ("neutral") versus unambiguous threat (e.g. disgust) faces (i.e. intolerance of ambiguity). Thirty participants with social anxiety disorder (SADs) and 29 non-anxious controls completed an implicit AAT in which they were instructed to approach or avoid neutral and disgust faces (i.e. pull or push a joystick) based on colour of the picture border. Results indicated that SADs demonstrated greater difficulty approaching neutral relative to disgust faces. Moreover, intolerance for approach of ambiguity predicted social anxiety severity while controlling for the effects of trait anxiety and depression. Our results provide further support for the role of intolerance of ambiguity in SAD.
Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E; Kayser, Manfred
2017-02-27
Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing.
Neural mechanisms of cue-approach training
Bakkour, Akram; Lewis-Peacock, Jarrod A.; Poldrack, Russell A.; Schonberg, Tom
2016-01-01
Biasing choices may prove a useful way to implement behavior change. Previous work has shown that a simple training task (the cue-approach task), which does not rely on external reinforcement, can robustly influence choice behavior by biasing choice toward items that were targeted during training. In the current study, we replicate previous behavioral findings and explore the neural mechanisms underlying the shift in preferences following cue-approach training. Given recent successes in the development and application of machine learning techniques to task-based fMRI data, which have advanced understanding of the neural substrates of cognition, we sought to leverage the power of these techniques to better understand neural changes during cue-approach training that subsequently led to a shift in choice behavior. Contrary to our expectations, we found that machine learning techniques applied to fMRI data during non-reinforced training were unsuccessful in elucidating the neural mechanism underlying the behavioral effect. However, univariate analyses during training revealed that the relationship between BOLD and choices for Go items increases as training progresses compared to choices of NoGo items primarily in lateral prefrontal cortical areas. This new imaging finding suggests that preferences are shifted via differential engagement of task control networks that interact with value networks during cue-approach training. PMID:27677231
Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H.; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R.; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E.; Kayser, Manfred
2017-01-01
Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing. PMID:28240252
Shakeri, Abolfazl; Masullo, Milena; D'Urso, Gilda; Iranshahi, Mehrdad; Montoro, Paola; Pizza, Cosimo; Piacente, Sonia
2018-05-15
Chemical investigations on Glycyrrhiza spp. have mostly been focused on G. glabra (typically cultivated in Europe, henceforth called European licorice), G. uralensis and G. inflata (known as Chinese licorice) with little information on the constituents of other Glycyrrhiza species. According to the growing interest in further Glycyrrhiza spp. to be used as sweeteners, the roots of G. triphylla have been investigated. The LC-ESI/LTQOrbitrap/MS profile of the methanolic extract of G. triphylla roots guided the isolation of 21 compounds, of which the structures were elucidated by 1D- and 2D-NMR experiments. Based on this approach, 6 previously unreported compounds including two isoflavones 7,5'-dihydroxy-6,3'-dimethoxy-isoflavone-7-O-β-d-glucopyranoside (4) and 7,5'-dihydroxy-6,3'-dimethoxy-isoflavone-7-O-(7,8-dihydro-p-hydroxycinnamoyl)-β-d-glucopyranoside (7) and four saponins, named licoricesaponins M3 (13), N2 (14), O2 (16) and P2 (18), have been characterized. It is to be noted that the accurate masses of some compounds here reported for the first time corresponded to those of compounds previously described in Glycyrrhiza spp. Thus an approach based only on MS analysis could be misleading; only isolation followed by NMR analysis allowed us to unambiguously assign the structures of these previously unreported compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
Della-Torre, E; Berti, A; Yacoub, M R; Guglielmi, B; Tombetti, E; Sabbadini, M G; Voltolini, S; Colombo, G
2015-05-01
The purpose of the present work is to evaluate the efficacy of an approach combining clinical history, skin test results, and premedication in preventing recurrent hypersensitivity reactions to iodinated contrast media (ICM). Skin prick, intradermal, and patch tests were performed in 36 patients with a previous reaction to ICM. All patients underwent a second contrast-enhanced radiological procedure with an alternative ICM selected on the basis of the proposed approach. After alternative ICM re-injection, only one patient presented a mild non-immediate reaction (NIR). The proposed algorithm, validated in clinical settings where repeated radiological exams are needed, offers a safe and practical approach for protecting patients from recurrent hypersensitivity reactions to ICM.
Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction
Cruz-Cano, Raul; Chew, David S.H.; Kwok-Pui, Choi; Ming-Ying, Leung
2010-01-01
Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach based on least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped identify the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications. PMID:20729987
Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction.
Cruz-Cano, Raul; Chew, David S H; Kwok-Pui, Choi; Ming-Ying, Leung
2010-06-01
Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach based on least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped identify the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications.
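Because LS-SVMs replace the standard SVM's inequality constraints with equalities, training reduces to solving a single linear system rather than a quadratic program. A minimal sketch of the function-estimation form with an RBF kernel follows; the hyperparameters gamma and sigma are placeholders, and the sequence-derived features used in the paper are not reproduced here:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM linear system: one (n+1)x(n+1) solve, no QP.
    X: (n, p) feature matrix; y: labels in {-1, +1}."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma     # ridge term from the least-squares loss
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                # alpha, bias b

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    return np.sign(rbf_kernel(Xte, Xtr, sigma) @ alpha + b)

# usage: alpha, b = lssvm_train(X_train, y_train)
#        yhat = lssvm_predict(X_train, alpha, b, X_test)
```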
Towards aspect-oriented functional--structural plant modelling.
Cieslak, Mikolaj; Seleznyova, Alla N; Prusinkiewicz, Przemyslaw; Hanan, Jim
2011-10-01
Functional-structural plant models (FSPMs) are used to integrate knowledge and test hypotheses of plant behaviour, and to aid in the development of decision support systems. A significant amount of effort is being put into providing a sound methodology for building them. Standard techniques, such as procedural or object-oriented programming, are not suited for clearly separating aspects of plant function that criss-cross between different components of plant structure, which makes it difficult to reuse and share their implementations. The aim of this paper is to present an aspect-oriented programming approach that helps to overcome this difficulty. The L-system-based plant modelling language L+C was used to develop an aspect-oriented approach to plant modelling based on multi-modules. Each element of the plant structure was represented by a sequence of L-system modules (rather than a single module), with each module representing an aspect of the element's function. Separate sets of productions were used for modelling each aspect, with context-sensitive rules facilitated by local lists of modules to consider/ignore. Aspect weaving or communication between aspects was made possible through the use of pseudo-L-systems, where the strict-predecessor of a production rule was specified as a multi-module. The new approach was used to integrate previously modelled aspects of carbon dynamics, apical dominance and biomechanics with a model of a developing kiwifruit shoot. These aspects were specified independently and their implementation was based on source code provided by the original authors without major changes. This new aspect-oriented approach to plant modelling is well suited for studying complex phenomena in plant science, because it can be used to integrate separate models of individual aspects of plant development and function, both previously constructed and new, into clearly organized, comprehensive FSPMs. In future work, this approach could be further extended into an aspect-oriented programming language for FSPMs.
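The multi-module idea can be caricatured outside L+C: each structural element carries one module per aspect, each aspect contributes its own production set that rewrites only its modules, and weaving happens where one aspect reads another's state. A toy Python sketch, with aspects and update rules invented purely for illustration:

```python
# Toy multi-module L-system: each element is a dict of per-aspect modules.

def grow(element):            # "structure" aspect: apical growth
    element["length"] += element["growth_rate"]

def carbon(element):          # "carbon dynamics" aspect (invented rule)
    element["carbon"] += 0.1 * element["length"]          # supply scales with size
    element["growth_rate"] = min(1.0, element["carbon"])  # weaving: aspects interact

ASPECTS = [grow, carbon]      # separate production sets, applied in sequence

shoot = [{"length": 1.0, "growth_rate": 0.5, "carbon": 0.2}]
for step in range(5):
    for element in shoot:
        for production in ASPECTS:   # each aspect sees the whole multi-module
            production(element)
print(shoot)
```

The point of the separation is the same as in the paper: the carbon rules can be swapped out or reused without touching the structural rules.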
NASA Astrophysics Data System (ADS)
Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh
2017-08-01
In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for the optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), the Department of Environment (DOE), and the National Disaster Management Organization (NDMO) are the three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared, based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered, following previous similar studies. In the second approach, all possible points in the study area are investigated to find the optimal locations, arrangement, and flow rates of injection and extraction wells. The involvement of the stakeholders, the investigation of all possible points instead of a limited number of well locations, and the minimization of contaminant plume fragmentation during bioremediation are the innovations of this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® software is utilized to calculate the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches by the third objective function. Considering all possible points in the study area when determining the optimal well locations in the second approach leads to more desirable results, i.e. decreasing the contaminant concentrations to the standard level and a 20% to 40% cost reduction.
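At the heart of NSGA-II is non-dominated sorting of candidate well designs by their objective vectors (here cost, standard-violating concentration sum, and plume fragmentation). A minimal sketch of that sorting step for minimization follows; the objective values are placeholders, where in the study they would come from BIOPLUME III runs:

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def nondominated_sort(F):
    """F: (n, m) array of objective vectors (minimization).
    Returns the Pareto front index of each solution (0 = best front)."""
    n = len(F)
    fronts = np.zeros(n, dtype=int)
    remaining = set(range(n))
    front = 0
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(F[j], F[i]) for j in remaining if j != i)}
        for i in current:
            fronts[i] = front
        remaining -= current
        front += 1
    return fronts

# e.g. objectives = [cost, violation_sum, fragmentation] per candidate design
F = np.array([[1.0, 3.0, 2.0], [2.0, 1.0, 1.0], [3.0, 3.0, 3.0]])
print(nondominated_sort(F))   # -> [0 0 1]
```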
Towards aspect-oriented functional–structural plant modelling
Cieslak, Mikolaj; Seleznyova, Alla N.; Prusinkiewicz, Przemyslaw; Hanan, Jim
2011-01-01
Background and Aims Functional–structural plant models (FSPMs) are used to integrate knowledge and test hypotheses of plant behaviour, and to aid in the development of decision support systems. A significant amount of effort is being put into providing a sound methodology for building them. Standard techniques, such as procedural or object-oriented programming, are not suited for clearly separating aspects of plant function that criss-cross between different components of plant structure, which makes it difficult to reuse and share their implementations. The aim of this paper is to present an aspect-oriented programming approach that helps to overcome this difficulty. Methods The L-system-based plant modelling language L+C was used to develop an aspect-oriented approach to plant modelling based on multi-modules. Each element of the plant structure was represented by a sequence of L-system modules (rather than a single module), with each module representing an aspect of the element's function. Separate sets of productions were used for modelling each aspect, with context-sensitive rules facilitated by local lists of modules to consider/ignore. Aspect weaving or communication between aspects was made possible through the use of pseudo-L-systems, where the strict-predecessor of a production rule was specified as a multi-module. Key Results The new approach was used to integrate previously modelled aspects of carbon dynamics, apical dominance and biomechanics with a model of a developing kiwifruit shoot. These aspects were specified independently and their implementation was based on source code provided by the original authors without major changes. Conclusions This new aspect-oriented approach to plant modelling is well suited for studying complex phenomena in plant science, because it can be used to integrate separate models of individual aspects of plant development and function, both previously constructed and new, into clearly organized, comprehensive FSPMs. In future work, this approach could be further extended into an aspect-oriented programming language for FSPMs. PMID:21724653
Murakami, Nozomu; Tanabe, Kouichi; Morita, Tatsuya; Fujikawa, Yasunaga; Koseki, Shiro; Kajiura, Shinya; Nakajima, Kazunori; Hayashi, Ryuji
2018-05-03
To examine the clinical outcomes of a project to enhance the awareness of community-based palliative care (awareness-enhancing project), focusing on home death and care rates in communities. A single-center study on community-based intervention was conducted. The awareness-enhancing project, consisting of three intervention approaches (outreach, palliative care education for community-based medical professionals, and information-sharing tool use), was executed, and changes in the home death rate in the community were examined. The home death rate markedly exceeded the national mean from 2010. In 2012-2013, it was as high as 19.9%, greater than the previous 5.9% (p = 0.001). Through multivariate analysis, the participation of home care physicians and visiting nurses in a palliative care education program, and patients' Palliative Prognostic Index values, were identified as factors significantly influencing the home death rate. The three intervention approaches increased the home death rate, as a clinical outcome in the community, in a time-dependent manner, although they targeted limited areas. These approaches may aid in increasing the number of individuals who die in their homes.
Kukhareva, Polina V; Kawamoto, Kensaku; Shields, David E; Barfuss, Darryl T; Halley, Anne M; Tippetts, Tyler J; Warner, Phillip B; Bray, Bruce E; Staes, Catherine J
2014-01-01
Electronic quality measurement (QM) and clinical decision support (CDS) are closely related but are typically implemented independently, resulting in significant duplication of effort. While it seems intuitive that technical approaches could be re-used across these two related use cases, such reuse is seldom reported in the literature, especially for standards-based approaches. Therefore, we evaluated the feasibility of using a standards-based CDS framework aligned with anticipated EHR certification criteria to implement electronic QM. The CDS-QM framework was used to automate a complex national quality measure (SCIP-VTE-2) at an academic healthcare system which had previously relied on time-consuming manual chart abstractions. Compared with 305 manually-reviewed reference cases, the recall of automated measurement was 100%. The precision was 96.3% (CI:92.6%-98.5%) for ascertaining the denominator and 96.2% (CI:92.3%-98.4%) for the numerator. We therefore validated that a standards-based CDS-QM framework can successfully enable automated QM, and we identified benefits and challenges with this approach. PMID:25954389
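The reported recall, precision, and confidence intervals follow directly from confusion counts against the manually reviewed reference set. A minimal sketch using a Wilson score interval; the counts below are placeholders, not the study's raw numbers:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

tp, fp, fn = 280, 11, 0               # placeholder confusion counts
recall = tp / (tp + fn)               # sensitivity against manual abstraction
precision = tp / (tp + fp)            # positive predictive value
print(recall, precision, wilson_ci(tp, tp + fp))
```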
Granacher, Urs; Muehlbauer, Thomas; Gollhofer, Albert; Kressig, Reto W; Zahner, Lukas
2011-01-01
The risk of sustaining a fall and fall-related injuries is particularly high in children and seniors, which is why there is a need to develop fall-preventive intervention programs. An intergenerational approach to balance and strength promotion appears to have great potential because it is specifically tailored to the physical, social and behavioural needs of children and seniors. Burtscher and Kopp [Gerontology, DOI: 10.1159/000322930] raised the question whether our previously published mini-review is evidence-based or evidence-inspired. These authors postulate that we did not follow a 4-stage conceptual model for the development of injury and/or fall-preventive intervention programs. In response to this criticism, we present information from the mini-review that complies with the 4-stage model, incorporating evidence-based and evidence-inspired components. We additionally provide information on how to implement an intergenerational balance and resistance training approach in a school setting, based on a study that is currently being conducted. Copyright © 2010 S. Karger AG, Basel.
Artificial intelligence systems based on texture descriptors for vaccine development.
Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra
2011-02-01
The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine the texture descriptors with standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.
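A minimal sketch of the texture step: compute 8-neighbour local binary pattern codes over a matrix representation of the peptide and use their histogram as the feature vector for the downstream SVM. The matrix encoding and LBP variant here are generic stand-ins, not the exact variants compared in the paper:

```python
import numpy as np

def lbp_histogram(M):
    """Basic 8-neighbour LBP codes over a 2D matrix, as a 256-bin histogram."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    core = M[1:-1, 1:-1]
    codes = np.zeros_like(core, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = M[1 + dy:M.shape[0] - 1 + dy, 1 + dx:M.shape[1] - 1 + dx]
        codes |= (neighbour >= core).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# e.g. a peptide encoded as a matrix of per-position amino-acid property values
M = np.random.rand(9, 20)        # placeholder encoding
features = lbp_histogram(M)      # would then be fed to an SVM classifier
```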
Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles
Mou, Xiaozheng; Wang, Han
2018-01-01
This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig in our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of USVs. Integrating a monocular camera with GPS and compass information in this proposed system, the world locations of the detected static obstacles are reconstructed while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293
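The geometric core, triangulating an obstacle from two viewpoints separated by the vessel's own motion, with camera positions from GPS and bearings from compass plus pixel offset, can be sketched with the classic ray-midpoint construction. This is a generic triangulation sketch; the paper's matching and multi-pair weighting are not shown:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays x = c1 + t*d1 and x = c2 + s*d2.
    c*: camera centres (e.g. from GPS); d*: unit bearing vectors."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # ~0 only for (near-)parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

# two viewpoints ~50 m apart along the USV track (illustrative numbers)
p = triangulate_midpoint(np.array([0., 0., 0.]), np.array([0.6, 0.8, 0.]),
                         np.array([50., 0., 0.]), np.array([-0.6, 0.8, 0.]))
print(p)   # -> approx. [25, 33.3, 0], the obstacle's world location
```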
Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.
Xiao Yang; Jianjiang Feng; Jie Zhou
2014-05-01
Dictionary-based orientation field estimation has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Recognizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch at a location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint has to be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about the fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.
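The lookup step can be sketched compactly: at each pose-aligned location, the noisy patch from the initial estimator is replaced by the closest patch in that location's dictionary under an angular metric folded to [0, π). The patch size and dictionary contents below are placeholders:

```python
import numpy as np

def angular_dist(p, q):
    """Mean distance between orientation patches; orientations live in [0, pi)."""
    d = np.abs(p - q) % np.pi
    return np.minimum(d, np.pi - d).mean()

def denoise_patch(noisy, local_dictionary):
    """Replace a noisy orientation patch with its nearest patch from
    the dictionary attached to this fingerprint location."""
    dists = [angular_dist(noisy, entry) for entry in local_dictionary]
    return local_dictionary[int(np.argmin(dists))]

# placeholder: a 3-entry dictionary of 8x8 orientation patches at one location
local_dict = [np.full((8, 8), a) for a in (0.3, 0.9, 1.5)]
noisy = np.full((8, 8), 1.0) + 0.2 * np.random.randn(8, 8)
print(denoise_patch(noisy, local_dict)[0, 0])   # -> 0.9
```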
BcL-xL Conformational Changes upon Fragment Binding Revealed by NMR
Aguirre, Clémentine; ten Brink, Tim; Walker, Olivier; Guillière, Florence; Davesne, Dany; Krimm, Isabelle
2013-01-01
Protein-protein interactions represent difficult but increasingly important targets for the design of therapeutic compounds able to interfere with biological processes. Recently, fragment-based strategies have been proposed as attractive approaches for the elaboration of protein-protein surface inhibitors from fragment-like molecules. One major challenge in targeting protein-protein interactions is related to the structural adaptation of the protein surface upon molecular recognition. Methods capable of identifying subtle conformational changes of proteins upon fragment binding are therefore required at the early steps of the drug design process. In this report we present a fast NMR method able to probe subtle conformational changes upon fragment binding. The approach relies on the comparison of experimental fragment-induced Chemical Shift Perturbation (CSP) of amine protons to CSP simulated for a set of docked fragment poses, considering the ring-current effect from fragment binding. We illustrate the method by the retrospective analysis of the complex between the anti-apoptotic Bcl-xL protein and the fragment 4′-fluoro-[1,1′-biphenyl]-4-carboxylic acid that was previously shown to bind one of the Bcl-xL hot spots. The CSP-based approach shows that the protein undergoes a subtle conformational rearrangement upon interaction, for residues located in helices 2, 3 and the very beginning of 5. Our observations are corroborated by residual dipolar coupling measurements performed on the free and fragment-bound forms of the Bcl-xL protein. These NMR-based results are in total agreement with previous molecular dynamic calculations that evidenced a high flexibility of Bcl-xL around the binding site. Here we show that CSP of protein amine protons are useful and reliable structural probes. Therefore, we propose to use CSP simulation to assess protein conformational changes upon ligand binding in the fragment-based drug design approach. PMID:23717610
Gene genealogies for genetic association mapping, with application to Crohn's disease
Burkett, Kelly M.; Greenwood, Celia M. T.; McNeney, Brad; Graham, Jinko
2013-01-01
A gene genealogy describes relationships among haplotypes sampled from a population. Knowledge of the gene genealogy for a set of haplotypes is useful for estimation of population genetic parameters and it also has potential application in finding disease-predisposing genetic variants. As the true gene genealogy is unknown, Markov chain Monte Carlo (MCMC) approaches have been used to sample genealogies conditional on data at multiple genetic markers. We previously implemented an MCMC algorithm to sample from an approximation to the distribution of the gene genealogy conditional on haplotype data. Our approach samples ancestral trees, recombination and mutation rates at a genomic focal point. In this work, we describe how our sampler can be used to find disease-predisposing genetic variants in samples of cases and controls. We use a tree-based association statistic that quantifies the degree to which case haplotypes are more closely related to each other around the focal point than control haplotypes, without relying on a disease model. As the ancestral tree is a latent variable, so is the tree-based association statistic. We show how the sampler can be used to estimate the posterior distribution of the latent test statistic and corresponding latent p-values, which together comprise a fuzzy p-value. We illustrate the approach on a publicly available dataset from a study of Crohn's disease that consists of genotypes at multiple SNP markers in a small genomic region. We estimate the posterior distribution of the tree-based association statistic and the recombination rate at multiple focal points in the region. Reassuringly, the posterior mean recombination rates estimated at the different focal points are consistent with previously published estimates. The tree-based association approach finds multiple sub-regions where the case haplotypes are more genetically related than the control haplotypes, suggesting that there may be one or multiple disease-predisposing loci. PMID:24348515
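Since the statistic is a function of the latent genealogy, each MCMC draw yields one statistic value and hence one latent p-value; the collection across draws forms the fuzzy p-value. A minimal sketch, under our assumption that a null sample of the statistic is available (e.g. from permuting case/control labels):

```python
import numpy as np

def fuzzy_p_value(posterior_stats, null_stats):
    """One latent p-value per posterior draw of the tree-based statistic:
    the proportion of the null sample at least as extreme (larger) as the draw."""
    null_stats = np.sort(null_stats)
    n = len(null_stats)
    # P(null >= s) for each posterior draw s
    return 1.0 - np.searchsorted(null_stats, posterior_stats, side="left") / n

posterior = np.random.normal(2.0, 0.3, size=1000)   # placeholder posterior draws
null = np.random.normal(0.0, 1.0, size=5000)        # placeholder null sample
p = fuzzy_p_value(posterior, null)
print(p.mean(), np.percentile(p, [2.5, 97.5]))      # summary of the fuzzy p-value
```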
PoMo: An Allele Frequency-Based Approach for Species Tree Estimation
De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin
2015-01-01
Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
Automated construction of arterial and venous trees in retinal images.
Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K
2015-10-01
While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input.
A nonlinear viscoelastic approach to durability predictions for polymer based composite structures
NASA Technical Reports Server (NTRS)
Brinson, Hal F.
1991-01-01
Current industry approaches for the durability assessment of metallic structures are briefly reviewed. For polymer based composite structures, it is suggested that new approaches must be adopted to include memory or viscoelastic effects which could lead to delayed failures that might not be predicted using current techniques. A durability or accelerated life assessment plan for fiber reinforced plastics (FRP) developed and documented over the last decade or so is reviewed and discussed. Limitations to the plan are outlined and suggestions to remove the limitations are given. These include the development of a finite element code to replace the previously used lamination theory code and the development of new specimen geometries to evaluate delamination failures. The new DCB model is reviewed and results are presented. Finally, it is pointed out that new procedures are needed to determine interfacial properties and current efforts underway to determine such properties are reviewed. Suggestions for additional efforts to develop a consistent and accurate durability predictive approach for FRP structures are outlined.
A nonlinear viscoelastic approach to durability predictions for polymer based composite structures
NASA Technical Reports Server (NTRS)
Brinson, Hal F.; Hiel, C. C.
1990-01-01
Current industry approaches for the durability assessment of metallic structures are briefly reviewed. For polymer based composite structures, it is suggested that new approaches must be adopted to include memory or viscoelastic effects which could lead to delayed failures that might not be predicted using current techniques. A durability or accelerated life assessment plan for fiber reinforced plastics (FRP) developed and documented over the last decade or so is reviewed and discussed. Limitations to the plan are outlined and suggestions to remove the limitations are given. These include the development of a finite element code to replace the previously used lamination theory code and the development of new specimen geometries to evaluate delamination failures. The new DCB model is reviewed and results are presented. Finally, it is pointed out that new procedures are needed to determine interfacial properties and current efforts underway to determine such properties are reviewed. Suggestions for additional efforts to develop a consistent and accurate durability predictive approach for FRP structures are outlined.
Aboriginal Suicidal Behaviour Research: From Risk Factors to Culturally-Sensitive Interventions
Katz, Laurence Y.; Elias, Brenda; O’Neil, John; Enns, Murray; Cox, Brian J.; Belik, Shay-Lee; Sareen, Jitender
2006-01-01
Introduction There is a significant amount of research demonstrating that the rate of completed suicide among Aboriginal populations is much higher than in the general population. Unfortunately, there is a paucity of research evaluating the risk factors for completed suicide and suicidal behavior in this population. There is an even greater shortage of research on evidence-based interventions for suicidal behaviour. Method A literature review was conducted to facilitate the development of an approach to the study of this complex problem. Results An approach to developing a research program that informs each step of the process with evidence from the previous steps was developed. The study of risk factors and interventions is described. Conclusions Research into the risk factors and evidence-based interventions for Aboriginal suicidal behavior is required. A programmatic approach is described in detail in this paper. It is hoped that this informed approach will systematically address this important public health issue that afflicts a significant proportion of the Canadian population. PMID:18392204
Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y
1992-01-01
An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation with the use of a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analysed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of such exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.
Manthous, Constantine A; Jackson, William L
2007-03-01
The successful management of mass casualties arising from detonation of a nuclear device (NDD) would require significant preparation at all levels of the healthcare system. This article briefly outlines previously published models of destruction and casualties, details approaches to on-site triage and medical evacuation, and offers pathophysiology-based suggestions for treatment of the critically injured. Documentation from previous bomb blasts and nuclear accidents is reviewed to assist in forecasting needs of both systems and patients in the event of an NDD in a major metropolitan area. This review extracts data from previously published models of destruction and casualties projected from an NDD, the primary literature detailing observations of patients' pathophysiology following NDDs in Japan and relevant nuclear accidents, and available contemporary resources for first responders and healthcare providers. The blast and radiation exposures that accompany an NDD will significantly affect local and regional public resources. Morbidity and mortality likely to arise in the setting of dose-dependent organ dysfunction may be minimized by rigorous a priori planning/training for field triage decisions, coordination of medical and civil responses to effect rapid responses and medical evacuation routes, radiation-specific interventions, and modern intensive care. Although the responses of emergency and healthcare systems following NDD will vary depending on the exact mechanism, magnitude, and location of the event, dose exposures and individual pathophysiology evolution are reasonably predictable. Triage decisions, resource requirements, and bedside therapeutic plans can be evidence-based and can be developed rapidly with appropriate preparation and planning.
Landform partitioning and estimates of deep storage of soil organic matter in Zackenberg, Greenland
NASA Astrophysics Data System (ADS)
Palmtag, Juri; Cable, Stefanie; Christiansen, Hanne H.; Hugelius, Gustaf; Kuhry, Peter
2018-05-01
Soils in the northern high latitudes are a key component in the global carbon cycle, with potential feedback on climate. This study aims to improve the previous soil organic carbon (SOC) and total nitrogen (TN) storage estimates for the Zackenberg area (NE Greenland), which were based on a land cover classification (LCC) approach, by using geomorphological upscaling. In addition, novel organic carbon (OC) estimates for deeper alluvial and deltaic deposits (down to 300 cm depth) are presented. We hypothesise that landforms will better represent the long-term slope and depositional processes that result in deep SOC burial in this type of mountain permafrost environment. The updated mean SOC storage for the 0-100 cm soil depth is 4.8 kg C m-2, which is 42 % lower than the previous estimate of 8.3 kg C m-2 based on land cover upscaling. Similarly, the mean soil TN storage in the 0-100 cm depth decreased by 44 %, from 0.50 (±0.1 CI) to 0.28 (±0.1 CI) kg TN m-2. We ascribe the differences to a previous areal overestimate of SOC- and TN-rich vegetated land cover classes. The landform-based approach more correctly constrains the depositional areas in alluvial fans and deltas with high SOC and TN storage. These are also areas of deep carbon storage, with an additional 2.4 kg C m-2 in the 100-300 cm depth interval. This research emphasises the need to consider geomorphology when assessing SOC pools in mountain permafrost landscapes.
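The upscaling itself is an area-weighted mean of per-class storage densities, whether the classes are land cover types or landforms. A minimal sketch with hypothetical landform fractions and SOC densities (not the study's values):

```python
# Area-weighted landscape mean: sum_i (area_fraction_i * SOC_density_i).
# Numbers below are hypothetical, for illustration only.
landforms = {             # (area fraction, mean SOC 0-100 cm in kg C m-2)
    "alluvial fan":   (0.10, 12.0),
    "delta":          (0.05, 15.0),
    "slope deposits": (0.35,  4.0),
    "bedrock/ridge":  (0.50,  2.0),
}
mean_soc = sum(frac * soc for frac, soc in landforms.values())
print(f"landscape mean SOC: {mean_soc:.2f} kg C m-2")
```

A biased areal estimate for any one class propagates directly into the landscape mean, which is why the landform-based partitioning changed the totals.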
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.
In this paper, a cyclic-plasticity based, fully mechanistic fatigue modeling approach is presented. It is based on the time-dependent stress-strain evolution of the material over the entire fatigue life, rather than only on the end-of-life information typically used in empirical S-N curve based fatigue evaluation approaches. Previously, we presented constant amplitude fatigue test based material models for 316 SS base, 508 LAS base, and 316 SS-316 SS weld metals, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper we present a more advanced approach that can be used to model the cyclic stress-strain evolution and fatigue life not only under constant amplitude but also under arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (one based on time/cycle and one based on accumulated plastic strain energy) for tracking the material parameters at a given time/cycle are discussed, and the associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.
Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.; ...
2017-12-05
In this paper, a cyclic-plasticity based, fully mechanistic fatigue modeling approach is presented. It is based on the time-dependent stress-strain evolution of the material over the entire fatigue life, rather than only on the end-of-life information typically used in empirical S-N curve based fatigue evaluation approaches. Previously, we presented constant amplitude fatigue test based material models for 316 SS base, 508 LAS base, and 316 SS-316 SS weld metals, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper we present a more advanced approach that can be used to model the cyclic stress-strain evolution and fatigue life not only under constant amplitude but also under arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (one based on time/cycle and one based on accumulated plastic strain energy) for tracking the material parameters at a given time/cycle are discussed, and the associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.
Effects of Loading and Doping on Iron-Based CO2 Hydrogenation Catalysts
2009-08-24
This memorandum report (Naval Research Laboratory, Code 6183) examines the effect that metal loading and dopants have on the overall activity and product distribution of iron-based CO2 hydrogenation catalysts, an approach for producing a greater yield of hydrocarbon (HC) products above methane, in comparison with the use of traditional Fischer-Tropsch synthesis (FTS) cobalt catalysts; from previous work done by our group [14], it is apparent that this concerns direct hydrogenation of CO2 over a general cobalt-based FTS catalyst (namely Co-Pt/Al2O3).
NASA Astrophysics Data System (ADS)
Argyropoulou, Evangelia
2015-04-01
The current study focused on the seafloor morphology of the North Aegean Basin in Greece, through Object Based Image Analysis (OBIA) using a Digital Elevation Model. The goal was the automatic extraction of morphologic and morphotectonic features, resulting in fault surface extraction. An Object Based Image Analysis approach was developed based on the bathymetric data, and the features extracted on morphological criteria were compared with the corresponding landforms derived through tectonic analysis. A digital elevation model of 150 m spatial resolution was used. First, slope, profile curvature, and percentile were extracted from this bathymetry grid. The OBIA approach was developed within the eCognition environment. Four segmentation levels were created, with "level 4" as the target. At level 4, the final classes of geomorphological features were classified: discontinuities, fault-like features and fault surfaces. At previous levels, additional landforms were also classified, such as the continental platform and continental slope. The results of the developed approach were evaluated by two methods. First, classification stability measures were computed within eCognition. Then, a qualitative and quantitative comparison of the results was made with a reference tectonic map that had been created manually from the analysis of seismic profiles. The results of this comparison were satisfactory, which confirms the correctness of the developed OBIA approach.
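The derivative layers feeding the segmentation are standard raster products. A minimal numpy sketch of slope and a profile-curvature proxy from a bathymetric grid; the DEM below is synthetic, the 150 m cell size matches the study, and the finite-difference scheme is a generic choice rather than eCognition's exact operator:

```python
import numpy as np

def slope_and_profile_curvature(dem, cell=150.0):
    """Finite-difference slope (degrees) and a profile-curvature proxy:
    the second derivative of elevation along the local gradient direction."""
    dzdy, dzdx = np.gradient(dem, cell)            # axis 0 = y, axis 1 = x
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    d2y, _ = np.gradient(dzdy, cell)               # z_yy
    _, d2x = np.gradient(dzdx, cell)               # z_xx
    dxy = np.gradient(dzdx, cell, axis=0)          # cross derivative z_xy
    g2 = dzdx ** 2 + dzdy ** 2 + 1e-12             # guard against flat cells
    prof = (d2x * dzdx ** 2 + 2 * dxy * dzdx * dzdy + d2y * dzdy ** 2) / g2
    return slope, prof

dem = np.cumsum(np.random.randn(100, 100), axis=0) * 5.0   # synthetic bathymetry
slope, curv = slope_and_profile_curvature(dem)
```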
Dolci, Ricardo Landini Lutaif; Todeschini, Alexandre Bossi; Santos, Américo Rubens Leite Dos; Lazarini, Paulo Roberto
2018-04-19
One of the main concerns in endoscopic endonasal approaches to the skull base has been the high incidence and morbidity of cerebrospinal fluid leaks. The introduction and routine use of vascularized flaps allowed a marked decrease in this complication, followed by a great expansion in the indications and techniques used in endoscopic endonasal approaches, extending to defects from large tumours and previously inaccessible areas of the skull base. To describe the technique of performing endoscopic double-flap multi-layered reconstruction of the anterior skull base without craniotomy. Step-by-step description of the endoscopic double-flap technique (nasoseptal and pericranial vascularized flaps plus a fascia lata free graft), as used and illustrated in two patients with an olfactory groove meningioma who underwent an endoscopic approach. Both patients achieved a gross total resection; subsequent reconstruction of the anterior skull base was performed with the nasoseptal and pericranial flaps onlay and a fascia lata free graft inlay. Both patients showed an excellent recovery, with no signs of cerebrospinal fluid leak, meningitis, flap necrosis, chronic meningeal or sinonasal inflammation, or cerebral herniation. The endoscopic double-flap technique we have described is a viable, versatile and safe option for anterior skull base reconstruction, decreasing the incidence of complications in endoscopic endonasal approaches. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Chen, Jinyuan; Liu, Zhoujie; Peng, Huaping; Zheng, Yanjie; Lin, Zhen; Liu, Ailin; Chen, Wei; Lin, Xinhua
2017-12-15
Previously reported electrochemical DNA biosensors based on an in-situ polymerization approach reveal that terminal deoxynucleoside transferase (TdTase) has good amplifying performance and promising application in the design of electrochemical DNA biosensors. However, this method, in which the background is significantly affected by the amount of TdTase, suffers from a tendency to produce false-positive results and from poor stability. Herein, we first present a novel electrochemical DNA biosensor based on a grafting-to mode of TdTase-mediated extension, in which DNA targets are polymerized in homogeneous solution and then hybridized with DNA probes on a BSA-based DNA carrier platform. Surprisingly, the background in the grafting-to mode of the TdTase-based electrochemical DNA biosensor shows little interference from the employed TdTase. Most importantly, the proposed electrochemical DNA biosensor shows greatly improved detection performance over the in-situ polymerization approach-based electrochemical DNA biosensor. Copyright © 2017 Elsevier B.V. All rights reserved.
Next-generation libraries for robust RNA interference-based genome-wide screens
Kampmann, Martin; Horlbeck, Max A.; Chen, Yuwen; Tsai, Jordan C.; Bassik, Michael C.; Gilbert, Luke A.; Villalta, Jacqueline E.; Kwon, S. Chul; Chang, Hyeshik; Kim, V. Narry; Weissman, Jonathan S.
2015-01-01
Genetic screening based on loss-of-function phenotypes is a powerful discovery tool in biology. Although the recent development of clustered regularly interspaced short palindromic repeats (CRISPR)-based screening approaches in mammalian cell culture has enormous potential, RNA interference (RNAi)-based screening remains the method of choice in several biological contexts. We previously demonstrated that ultracomplex pooled short-hairpin RNA (shRNA) libraries can largely overcome the problem of RNAi off-target effects in genome-wide screens. Here, we systematically optimize several aspects of our shRNA library, including the promoter and microRNA context for shRNA expression, selection of guide strands, and features relevant for postscreen sample preparation for deep sequencing. We present next-generation high-complexity libraries targeting human and mouse protein-coding genes, which we grouped into 12 sublibraries based on biological function. A pilot screen suggests that our next-generation RNAi library performs comparably to current CRISPR interference (CRISPRi)-based approaches and can yield complementary results with high sensitivity and high specificity. PMID:26080438
Diagnostic devices for isothermal nucleic acid amplification.
Chang, Chia-Chen; Chen, Chien-Cheng; Wei, Shih-Chung; Lu, Hui-Hsin; Liang, Yang-Hung; Lin, Chii-Wann
2012-01-01
Since the development of the polymerase chain reaction (PCR) technique, genomic information has been retrievable from lesser amounts of DNA than previously possible. PCR-based amplifications require high-precision instruments to perform temperature cycling reactions; further, they are cumbersome for routine clinical use. However, the use of isothermal approaches can eliminate many complications associated with thermocycling. The application of diagnostic devices for isothermal DNA amplification has recently been studied extensively. In this paper, we describe the basic concepts of several isothermal amplification approaches and review recent progress in diagnostic device development.
Diagnostic Devices for Isothermal Nucleic Acid Amplification
Chang, Chia-Chen; Chen, Chien-Cheng; Wei, Shih-Chung; Lu, Hui-Hsin; Liang, Yang-Hung; Lin, Chii-Wann
2012-01-01
Since the development of the polymerase chain reaction (PCR) technique, genomic information has been retrievable from lesser amounts of DNA than previously possible. PCR-based amplifications require high-precision instruments to perform temperature cycling reactions; further, they are cumbersome for routine clinical use. However, the use of isothermal approaches can eliminate many complications associated with thermocycling. The application of diagnostic devices for isothermal DNA amplification has recently been studied extensively. In this paper, we describe the basic concepts of several isothermal amplification approaches and review recent progress in diagnostic device development. PMID:22969402
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparison to shape- and texture-based query results on a previously gathered real-life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).
Chigerwe, Munashe; Ilkiw, Jan E; Boudreaux, Karen A
2011-01-01
The objectives of the present study were to evaluate first-, second-, third-, and fourth-year veterinary medical students' approaches to studying and learning as well as the factors within the curriculum that may influence these approaches. A questionnaire consisting of the short version of the Approaches and Study Skills Inventory for Students (ASSIST) was completed by 405 students, and it included questions relating to conceptions about learning, approaches to studying, and preferences for different types of courses and teaching. Descriptive statistics, factor analysis, Cronbach's alpha analysis, and log-linear analysis were performed on the data. Deep, strategic, and surface learning approaches emerged. There were a few differences between our findings and those presented in previous studies in terms of the correlation of the subscale monitoring effectiveness, which showed loading with both the deep and strategic learning approaches. In addition, the subscale alertness to assessment demands showed correlation with the surface learning approach. The perception of high workloads, the use of previous test files as a method for studying, and examinations that are based only on material provided in lecture notes were positively associated with the surface learning approach. Focusing on improving specific teaching and assessment methods that enhance deep learning is anticipated to enhance students' positive learning experience. These teaching methods include instructors who encourage students to be critical thinkers, the integration of course material in other disciplines, courses that encourage thinking and reading about the learning material, and books and articles that challenge students while providing explanations beyond lecture material.
A multi-frequency receiver function inversion approach for crustal velocity structure
NASA Astrophysics Data System (ADS)
Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian
2017-05-01
In order to constrain crustal velocity structures better, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Based on synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both long- and short-wavelength characteristics of the crustal velocity structures simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB on the Indian continent, where crustal structures have been well studied by former researchers. Comparisons of the velocity models inverted in previous studies and in ours show good consistency, but our proposed approach achieves better waveform fits with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed inversion approach with multi-frequency receiver functions is effective and robust in inverting crustal velocity structures.
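The inversion loop can be sketched around SciPy's differential evolution, under the assumption that a forward modeller synth_rf(model, freq) exists; that function is a placeholder here, since computing synthetic receiver functions is the substantive part not shown:

```python
import numpy as np
from scipy.optimize import differential_evolution

FREQ_BANDS = [0.5, 1.2, 2.4]          # Gaussian filter widths (illustrative)

def synth_rf(model, freq):
    """Placeholder forward modeller: synthetic receiver function for a layered
    velocity model at a given frequency band (not implemented here)."""
    raise NotImplementedError

def misfit(model, observed):
    # Sum of L2 waveform misfits across all bands, so low frequencies constrain
    # the large-scale structure and high frequencies the fine structure.
    return sum(np.sum((synth_rf(model, f) - observed[f]) ** 2) for f in FREQ_BANDS)

# model = layer thicknesses and Vs values; bounds define the DE search space
bounds = [(1.0, 20.0)] * 4 + [(2.0, 5.0)] * 4   # 4 layers: thickness (km), Vs (km/s)
# result = differential_evolution(misfit, bounds, args=(observed_rfs,), seed=1)
```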
Bellebaum, Christian; Kuchinke, Lars; Roser, Patrik
2017-02-01
Modafinil is becoming increasingly popular as a cognitive enhancer. Research on the effects of modafinil on cognitive function has yielded mixed results, with negative findings for simple memory and attention tasks and enhancing effects for more complex tasks. In the present study we examined whether modafinil, due to its known effect on the dopamine level in the striatum, alters feedback-related choice behaviour. We applied a task that separately tests the choice of previously rewarded behaviours (approach) and the avoidance of previously punished behaviours. Eighteen participants received a single dose of 200 mg modafinil. Their performance was compared to a group of 22 participants who received placebo in a double-blind design. Modafinil but not placebo induced a significant bias towards approach behaviour as compared to the frequency of avoidance behaviour. General attention, overall feedback-based acquisition of choice behaviour, and reaction times in high- vs low-conflict choices were not significantly affected by modafinil. This finding suggests that modafinil has a specific effect on dopamine-mediated choice behaviour based on the history of feedback, while a contribution of noradrenaline is also conceivable. The described change in decision making cannot be considered cognitive enhancement, but might rather have detrimental effects on decisions in everyday life.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm, on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
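The two initial designs differ only in where each stratum's point sits. A minimal numpy sketch of both (generic LHS, not the paper's OLHS code):

```python
import numpy as np

def lhs(n, d, midpoint=False, seed=0):
    """Latin hypercube sample of n points in [0, 1]^d.
    midpoint=True places each point at its stratum centre (midpoint LHS);
    midpoint=False draws it uniformly within the stratum (random LHS)."""
    rng = np.random.default_rng(seed)
    u = 0.5 * np.ones((n, d)) if midpoint else rng.random((n, d))
    strata = np.empty((n, d))
    for j in range(d):
        strata[:, j] = rng.permutation(n)   # one point per row/column stratum
    return (strata + u) / n

random_design = lhs(10, 2)                   # initial design, random LHS
midpoint_design = lhs(10, 2, midpoint=True)  # initial design, midpoint LHS
```

Either design would then be handed to the space-filling optimizer; the paper's finding is that starting from the midpoint variant yields better-optimized designs.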
z'-BAND GROUND-BASED DETECTION OF THE SECONDARY ECLIPSE OF WASP-19b
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burton, J. R.; Watson, C. A.; Pollacco, D.
2012-08-01
We present the ground-based detection of the secondary eclipse of the transiting exoplanet WASP-19b. The observations were made in the Sloan z' band using the ULTRACAM triple-beam CCD camera mounted on the New Technology Telescope. The measurement shows a 0.088% ± 0.019% eclipse depth, matching previous predictions based on H- and K-band measurements. We discuss in detail our approach to the removal of errors arising due to systematics in the data set, in addition to fitting a model transit to our data. This fit returns an eclipse center, T_0, of 2455578.7676 HJD, consistent with a circular orbit. Our measurement of the secondary eclipse depth is also compared to model atmospheres of WASP-19b and is found to be consistent with previous measurements at longer wavelengths for the model atmospheres we investigated.
ERIC Educational Resources Information Center
Poza-Lujan, Jose-Luis; Calafate, Carlos T.; Posadas-Yagüe, Juan-Luis; Cano, Juan-Carlos
2016-01-01
Current opinion on undergraduate studies has led to a reformulation of teaching methodologies to base them not just on learning, but also on skills and competencies. In this approach, the teaching/learning process should accomplish both knowledge assimilation and skill development. Previous works demonstrated that a strategy that uses continuous…
USDA-ARS?s Scientific Manuscript database
Irrigation is a widely used water management practice that is often poorly parameterized in land surface and climate models. Previous studies have addressed this issue via use of irrigation area, applied water inventory data, or soil moisture content. These approaches have a variety of drawbacks i...
USDA-ARS?s Scientific Manuscript database
To accurately measure gene expression using PCR-based approaches, there is the need for reference genes that have low variance in expression (housekeeping genes) to normalise the data for RNA quantity and quality. For non-model species such as Malus x domestica (apples), previously, the selection of...
Early Prediction of Student Profiles Based on Performance and Gaming Preferences
ERIC Educational Resources Information Center
Barata, Gabriel; Gama, Sandra; Jorge, Joaquim; Gonçalves, Daniel
2016-01-01
State of the art research shows that gamified learning can be used to engage students and help them perform better. However, most studies use a one-size-fits-all approach to gamification, where individual differences and needs are ignored. In a previous study, we identified four types of students attending a gamified college course, characterized…
The Multisyllabic Word Dilemma: Helping Students Build Meaning, Spell, and Read "Big" Words.
ERIC Educational Resources Information Center
Cunningham, Patricia M.
1998-01-01
Looks at what is known about multisyllabic words, which is a lot more than educators knew when the previous generation of multisyllabic word instruction was created. Reviews the few studies that have carried out instructional approaches to increase students' ability to decode big words. Outlines a program of instruction, based on what is currently…
Inferring ancestral distribution area and survival vegetation of Caragana (Fabaceae) in Tertiary
Mingli Zhang; Juanjuan Xue; Qiang Zhang; Stewart C. Sanderson
2015-01-01
Caragana, a leguminous genus mainly restricted to temperate Central and East Asia, occurs in arid, semiarid, and humid belts, and has forest, grassland, and desert ecotypes. Based on the previous molecular phylogenetic tree and dating, biogeographical analyses of extant species area and ecotype were conducted by means of four ancestral optimization approaches: S-DIVA,...
Researcher Creations? The Positioning of Policy Texts in Higher Education Research
ERIC Educational Resources Information Center
Ashwin, Paul; Smith, Karen
2015-01-01
In this article we explore the way in which policy texts are positioned in a selection of higher education journal articles. Previous research has suggested that policy implementation studies have taken an uncritical approach to researching policies. Based on an analysis of articles published in higher education and policy journals in 2011, we…
Brain-Based Aspects of Cognitive Learning Approaches in Second Language Learning
ERIC Educational Resources Information Center
Moghaddam, Alireza Navid; Araghi, Seyed Mahdi
2013-01-01
Language learning is one of the most complex of human behaviors, and it has attracted the attention of many scholars and experts, especially since the advent of cognitive psychology in the middle of the last century and its subsequent application to education. Unlike previous schools of thought, cognitive psychology deals with the way in which…
Method and apparatus for modeling interactions
Xavier, Patrick G.
2002-01-01
The present invention provides a method and apparatus for modeling interactions that overcomes the drawbacks of previous approaches. The method of the present invention comprises representing two bodies undergoing translations by two swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention is more robust and allows faster modeling than previous methods.
ERIC Educational Resources Information Center
Moodley, P.; Kritzinger, A.; Vinck, B.
2016-01-01
In a previous study Moodley, Kritzinger and Vinck (2014) found that formal English Additional Language (EAL) instruction contributed significantly better to listening and speaking skills in Grade R learners, than did a play-based approach. The finding in multilingual rural Mpumalanga schools was in agreement with numerous studies elsewhere.…
Introduction of Communicative Language Teaching in Tourism in Cuba.
ERIC Educational Resources Information Center
Valdes, Antonio Irizar; Jhones, Ada Chiappy
1991-01-01
Describes experimental program based on the ideas of the communicative approach to teaching English as a foreign language that was implemented at the Centre for Studies in Tourism in Havana in 1987. Special emphasis is on the difficulties encountered by teachers in a foreign language setting who had been previously used to teaching prescribed,…
Juvenile tree growth on some volcanic ash soils disturbed by prior forest harvest.
J. Michael Geist; John W. Hazard; Kenneth W. Seidel
2008-01-01
The effects of mechanical disturbance from traditional ground-based logging and site preparation on volcanic ash soil and associated tree growth were investigated by using two study approaches in a retrospective study. This research was conducted on volcanic ash soils within previously harvested units in the Blue Mountains of northeast Oregon and southwest Washington....
A Crisis of Confidence: Women Coaches' Responses to Their Engagement in Resistance
ERIC Educational Resources Information Center
Norman, Leanne
2014-01-01
This study centres upon the accounts of master women coaches based in the UK, exploring how they have individually experienced such acts of resistance as reaching the top of such a male dominated profession. By going beyond previous positivist feminist approaches to this focus of inquiry, I employ a feminist cultural studies framework to…
Planning and Scheduling of Software Manufacturing Projects
1991-03-01
Based on previous results in the social analysis of computing, operations research in manufacturing, artificial intelligence in manufacturing planning and scheduling, and traditional approaches to planning in artificial intelligence, this work extends the techniques that have been developed in those fields.
Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.
Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P
2016-04-01
We aimed to reduce animal usage for discovery-stage PK studies in biologics programs by using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period, with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach produced data comparable to a previous study that used liquid samples from single mice per time point, while reducing animal and compound requirements by 14-fold. This reduction in animals and drug material is enabled by the use of automated serial DBS microsampling in discovery-stage mouse studies of protein therapeutics.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models usually operate as a second-pass decision procedure on hypotheses from multiple systems, using extra features rather than exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used, based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated directly into a current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
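The generalization step can be pictured as a weighted combination of feature values across the main and auxiliary phrase tables. A hedged sketch of such a combination (the linear-interpolation form, weights, and names are our illustration, not the paper's exact update rule):

```python
def generalize_phrase_table(main_table, aux_tables, weights):
    """Blend translation probabilities of the main model with auxiliary models:
    p'(e|f) = w0 * p_main(e|f) + sum_i w_i * p_aux_i(e|f).
    Tables map (source_phrase, target_phrase) -> probability."""
    assert abs(sum(weights) - 1.0) < 1e-9
    updated = {}
    for pair, p_main in main_table.items():
        p = weights[0] * p_main
        for w, aux in zip(weights[1:], aux_tables):
            p += w * aux.get(pair, 0.0)  # pairs unseen by an auxiliary model add 0
        updated[pair] = p
    return updated

main = {("maison", "house"): 0.7, ("maison", "home"): 0.3}
aux = [{("maison", "house"): 0.5, ("maison", "home"): 0.5}]
print(generalize_phrase_table(main, aux, weights=[0.8, 0.2]))
# {('maison', 'house'): 0.66, ('maison', 'home'): 0.34}
```

Smoothing over-confident estimates toward the auxiliary models is one concrete reading of how averaging alleviates the over-estimation problem before first-pass decoding.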
Postgenomic strategies in antibacterial drug discovery.
Brötz-Oesterhelt, Heike; Sass, Peter
2010-10-01
During the last decade the field of antibacterial drug discovery has changed in many aspects including bacterial organisms of primary interest, discovery strategies applied and pharmaceutical companies involved. Target-based high-throughput screening had been disappointingly unsuccessful for antibiotic research. Understanding of this lack of success has increased substantially and the lessons learned refer to characteristics of targets, screening libraries and screening strategies. The 'genomics' approach was replaced by a diverse array of discovery strategies, for example, searching for new natural product leads among previously abandoned compounds or new microbial sources, screening for synthetic inhibitors by targeted approaches including structure-based design and analyses of focused libraries and designing resistance-breaking properties into antibiotics of established classes. Furthermore, alternative treatment options are being pursued including anti-virulence strategies and immunotherapeutic approaches. This article summarizes the lessons learned from the genomics era and describes discovery strategies resulting from that knowledge.
NASA Technical Reports Server (NTRS)
Halbig, Michael C.; Singh, Mrityunjay
2015-01-01
Advanced silicon carbide-based ceramics and composites are being developed for a wide variety of high temperature extreme environment applications. Robust high temperature joining and integration technologies are enabling for the fabrication and manufacturing of large and complex shaped components. The development of a new joining approach called SET (Single-step Elevated Temperature) joining will be described, along with an overview of previously developed joining approaches including high temperature brazing, ARCJoinT (Affordable, Robust Ceramic Joining Technology), diffusion bonding, and REABOND (Refractory Eutectic Assisted Bonding). Unlike other approaches, SET joining does not have any lower temperature phases and will therefore have a use temperature above 1315 °C. Optimization of the composition for full conversion to silicon carbide will be discussed. The goal is to find a composition with no remaining carbon or free silicon. Green tape interlayers were developed for joining. Microstructural analysis and preliminary mechanical tests of the joints will be presented.
SLS Navigation Model-Based Design Approach
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center, where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms, which are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, it takes advantage of the overlap between the requirements development and management process and the design and analysis process by efficiently combining the control mechanism (i.e., the requirement) and the design mechanism. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, whereas a requirement is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed, which allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design are described from the perspective of the SLS Navigation Team, along with the format of the models and the requirements. The Model-based Design approach has many benefits but is not without potential complications; key lessons learned from implementing the approach and process from infancy to verification and certification are discussed.
Static and Dynamic Frequency Scaling on Multicore CPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Wenlei; Hong, Changwan; Chunduri, Sudheer
2016-12-28
Dynamic voltage and frequency scaling (DVFS) adapts CPU power consumption by modifying a processor's operating frequency (and the associated voltage). Typical approaches employing DVFS involve default strategies such as running at the lowest or the highest frequency, or observing the CPU's runtime behavior and dynamically adapting the voltage/frequency configuration based on CPU usage. In this paper, we argue that many previous approaches suffer from inherent limitations, such as not accounting for the processor-specific impact of frequency changes on energy for different workload types. We first propose a lightweight runtime-based approach to automatically adapt the frequency based on the CPU workload that is agnostic of the processor characteristics. We then show that further improvements can be achieved for affine kernels in the application, using a compile-time characterization instead of run-time monitoring to select the frequency and number of CPU cores to use. Our framework relies on a one-time energy characterization of CPU-specific DVFS profiles followed by a compile-time categorization of loop-based code segments in the application. These are combined to determine a priori the frequency and the number of cores to use to execute the application so as to optimize energy or energy-delay product, outperforming the runtime approach. Extensive evaluation on 60 benchmarks and five multi-core CPUs shows that our approach systematically outperforms the powersave Linux governor while improving overall performance.
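A hedged sketch of the compile-time selection: given a one-time characterization table, pick the frequency/core configuration minimizing energy or energy-delay product (EDP) for each kernel category (the table values and category names below are invented placeholders, not measurements from the paper):

```python
# (kernel_category, frequency_GHz, n_cores) -> (energy_J, time_s); placeholders.
profile = {
    ("memory_bound", 1.2, 8): (40.0, 2.0),
    ("memory_bound", 2.4, 8): (55.0, 1.9),
    ("compute_bound", 1.2, 8): (80.0, 4.0),
    ("compute_bound", 2.4, 8): (70.0, 2.1),
}

def best_config(category, objective="edp"):
    """A priori frequency/core choice for a loop-based code segment."""
    candidates = {k: v for k, v in profile.items() if k[0] == category}
    def cost(item):
        energy, time = item[1]
        return energy * time if objective == "edp" else energy
    _, freq, cores = min(candidates.items(), key=cost)[0]
    return freq, cores

print(best_config("memory_bound"))   # low frequency wins for memory-bound code
print(best_config("compute_bound"))  # high frequency wins for compute-bound code
```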
Carter, Cindy L; Onicescu, Georgiana; Cartmell, Kathleen B; Sterba, Katherine R; Tomsic, James; Alberg, Anthony J
2012-08-01
Physical activity benefits cancer survivors, but the comparative effectiveness of a team-based delivery approach remains unexplored. The hypothesis tested was that a team-based physical activity intervention delivery approach has added physical and psychological benefits compared to a group-based approach. A team-based sport accessible to survivors is dragon boating, which requires no previous experience and allows for diverse skill levels. In a non-randomized trial, cancer survivors chose between two similarly structured 8-week programs, a dragon boat paddling team (n = 68) or group-based walking program (n = 52). Three separate intervention rounds were carried out in 2007-2008. Pre-post testing measured physical and psychosocial outcomes. Compared to walkers, paddlers had significantly greater (all p < 0.01) team cohesion, program adherence/attendance, and increased upper-body strength. For quality-of-life outcomes, both interventions were associated with pre-post improvements, but with no clear-cut pattern of between-intervention differences. These hypothesis-generating findings suggest that a short-term, team-based physical activity program (dragon boat paddling) was associated with increased cohesion and adherence/attendance. Improvements in physical fitness and psychosocial benefits were comparable to a traditional, group-based walking program. Compared to a group-based intervention delivery format, the team-based intervention delivery format holds promise for promoting physical activity program adherence/attendance in cancer survivors.
GRILLIX: a 3D turbulence code based on the flux-coordinate independent approach
NASA Astrophysics Data System (ADS)
Stegmeir, Andreas; Coster, David; Ross, Alexander; Maj, Omar; Lackner, Karl; Poli, Emanuele
2018-03-01
The GRILLIX code is presented with which plasma turbulence/transport in various geometries can be simulated in 3D. The distinguishing feature of the code is that it is based on the flux-coordinate independent approach (FCI) (Hariri and Ottaviani 2013 Comput. Phys. Commun. 184 2419; Stegmeir et al 2016 Comput. Phys. Commun. 198 139). Cylindrical or Cartesian grids are used on which perpendicular operators are discretised via standard finite difference methods and parallel operators via a field line tracing and interpolation procedure (field line map). This offers a very high flexibility with respect to geometry, especially a separatrix with X-point(s) or a magnetic axis can be treated easily in contrast to approaches which are based on field aligned coordinates and suffer from coordinate singularities. Aiming finally for simulation of edge and scrape-off layer (SOL) turbulence, an isothermal electrostatic drift-reduced Braginskii model (Zeiler et al 1997 Phys. Plasmas 4 2134) has been implemented in GRILLIX. We present the numerical approach, which is based on a toroidally staggered formulation of the FCI, we show verification of the code with the method of manufactured solutions and show a benchmark based on a TORPEX blob experiment, previously performed by several edge/SOL codes (Riva et al 2016 Plasma Phys. Control. Fusion 58 044005). Examples for slab, circular, limiter and diverted geometry are presented. Finally, the results show that the FCI approach in general and GRILLIX in particular are viable approaches in order to tackle simulation of edge/SOL turbulence in diverted geometry.
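The kernel of the FCI approach, tracing a field line from a grid point to the neighbouring poloidal plane and interpolating there, can be sketched as follows (a strongly simplified illustration with a toy axisymmetric field; none of this is GRILLIX code):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

def trace_field_line(R0, Z0, phi0, dphi, B_R, B_Z, B_phi):
    """Integrate dR/dphi = R*B_R/B_phi and dZ/dphi = R*B_Z/B_phi
    from one poloidal plane to the next."""
    def rhs(phi, y):
        R, Z = y
        return [R * B_R(R, Z) / B_phi(R, Z), R * B_Z(R, Z) / B_phi(R, Z)]
    sol = solve_ivp(rhs, (phi0, phi0 + dphi), [R0, Z0], rtol=1e-8)
    return sol.y[0, -1], sol.y[1, -1]

def value_on_next_plane(field, R_grid, Z_grid, R_mapped, Z_mapped):
    """Interpolate a field stored on the neighbouring plane's grid at the
    mapped field-line position (the 'field line map')."""
    interp = RegularGridInterpolator((R_grid, Z_grid), field)
    return float(interp([[R_mapped, Z_mapped]]))

# Toy field: dominant toroidal component plus a weak poloidal perturbation.
B_R   = lambda R, Z: 0.05 * Z
B_Z   = lambda R, Z: -0.05 * (R - 1.0)
B_phi = lambda R, Z: 1.0 / R

Rm, Zm = trace_field_line(1.1, 0.0, 0.0, 2 * np.pi / 16, B_R, B_Z, B_phi)
```

A parallel derivative is then formed from the difference between the local value and the interpolated value at (Rm, Zm), divided by the connection length along the traced line; no field-aligned coordinates, and hence no coordinate singularities at the X-point or magnetic axis, are involved.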
Context-based virtual metrology
NASA Astrophysics Data System (ADS)
Ebersbach, Peter; Urbanowicz, Adam M.; Likhachev, Dmitriy; Hartig, Carsten; Shifrin, Michael
2018-03-01
Hybrid and data feed-forward methodologies are well established for advanced optical process control solutions in high-volume semiconductor manufacturing. Appropriate information from previous measurements, transferred into advanced optical model(s) at following step(s), provides enhanced accuracy and exactness of the measured topographic (thicknesses, critical dimensions, etc.) and material parameters. In some cases, hybrid or feed-forward data are missing or invalid for dies or for a whole wafer. We focus on virtual metrology approaches to re-create hybrid or feed-forward data inputs in high-volume manufacturing. We discuss reconstruction of missing data inputs, which is based on various interpolation and extrapolation schemes and uses information about the wafer's process history. Moreover, we demonstrate a data reconstruction approach based on machine learning techniques utilizing the optical model and measured spectra. Finally, we investigate metrics that allow one to assess the error margin of virtual data inputs.
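A minimal sketch of the simplest reconstruction scheme described, interpolation over the wafer's process history (the function and the linear scheme are illustrative, not the production algorithm):

```python
import numpy as np

def reconstruct_missing(wafer_seq, values):
    """Fill missing feed-forward inputs (NaN) by linear interpolation over the
    wafer sequence, holding the nearest known value flat at lot boundaries."""
    wafer_seq = np.asarray(wafer_seq, dtype=float)
    values = np.asarray(values, dtype=float)
    known = ~np.isnan(values)
    return np.interp(wafer_seq, wafer_seq[known], values[known])

# Film thickness (nm) from a prior step; wafers 3 and 7 were not measured.
seq = [1, 2, 3, 4, 5, 6, 7, 8]
thk = [101.2, 101.0, np.nan, 100.7, 100.5, 100.6, np.nan, 100.9]
print(reconstruct_missing(seq, thk))
```

The machine-learning variant replaces the interpolator with a regression model trained on measured spectra and the optical model's outputs.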
Standard metrics for a plug-and-play tracker
NASA Astrophysics Data System (ADS)
Antonisse, Jim; Young, Darrell
2012-06-01
The Motion Imagery Standards Board (MISB) has previously established a metadata "micro-architecture" for standards-based tracking. The intent of this work is to facilitate both the collaborative development of competent tracking systems and the potentially distributed and dispersed execution of tracker system components in real-world execution environments. The approach standardizes a set of five quasi-sequential modules in image-based tracking. However, in order to make the plug-and-play architecture truly useful, we need metrics associated with each module (so that, for instance, a researcher who "plugs in" a new component can ascertain whether he/she did better or worse with the component). This paper proposes a new, unifying set of metrics based on an information-theoretic approach to tracking, which the MISB is nominating as DoD/IC/NATO standards.
Innocenti, Paolo; Woodward, Hannah L; Solanki, Savade; Naud, Sébastien; Westwood, Isaac M; Cronin, Nora; Hayes, Angela; Roberts, Jennie; Henley, Alan T; Baker, Ross; Faisal, Amir; Mak, Grace Wing-Yan; Box, Gary; Valenti, Melanie; De Haven Brandon, Alexis; O'Fee, Lisa; Saville, Harry; Schmitt, Jessica; Matijssen, Berry; Burke, Rosemary; van Montfort, Rob L M; Raynaud, Florence I; Eccles, Suzanne A; Linardopoulos, Spiros; Blagg, Julian; Hoelder, Swen
2016-04-28
Monopolar spindle 1 (MPS1) plays a central role in the transition of cells from metaphase to anaphase and is one of the main components of the spindle assembly checkpoint. Chromosomally unstable cancer cells rely heavily on MPS1 to cope with the stress arising from abnormal numbers of chromosomes and centrosomes and are thus more sensitive to MPS1 inhibition than normal cells. We report the discovery and optimization of a series of new pyrido[3,4-d]pyrimidine based inhibitors via a structure-based hybridization approach from our previously reported inhibitor CCT251455 and a modestly potent screening hit. Compounds in this novel series display excellent potency and selectivity for MPS1, which translates into biomarker modulation in an in vivo human tumor xenograft model.
Real-time traffic sign recognition based on a general purpose GPU and deep-learning.
Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna Convention traffic rules (Germany and South Korea).
A service oriented approach for guidelines-based clinical decision support using BPMN.
Rodriguez-Loya, Salvador; Aziz, Ayesha; Chatwin, Chris
2014-01-01
Evidence-based medical practice requires that clinical guidelines be documented in such a way that they represent a clinical workflow in its most accessible form. In order to optimize clinical processes to improve clinical outcomes, we propose a Service Oriented Architecture (SOA) based approach for implementing clinical guidelines that can be accessed from an Electronic Health Record (EHR) application, with a Web Services-enabled communication mechanism through the Enterprise Service Bus. We have used Business Process Modelling Notation (BPMN) for modelling and presenting the clinical pathway in the form of a workflow. The aim of this study is to produce spontaneous alerts in the healthcare workflow in the diagnosis of Chronic Obstructive Pulmonary Disease (COPD). The use of BPMN as a tool to automate clinical guidelines has not previously been employed for providing Clinical Decision Support (CDS).
Enhancing data utilization through adoption of cloud-based data architectures (Invited Paper 211869)
NASA Astrophysics Data System (ADS)
Kearns, E. J.
2017-12-01
A traditional approach to data distribution and utilization of open government data involves continuously moving those data from a central government location to each potential user, who would then utilize them on their local computer systems. An alternate approach would be to bring those users to the open government data, where users would also have access to computing and analytics capabilities that would support data utilization. NOAA's Big Data Project is exploring such an alternate approach through an experimental collaboration with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium. As part of this ongoing experiment, NOAA is providing open data of interest which are freely hosted by the Big Data Project Collaborators, who provide a variety of cloud-based services and capabilities to enable utilization by data users. By the terms of the agreement, the Collaborators may charge for those value-added services and processing capacities to recover their costs to freely host the data and to generate profits if so desired. Initial results have shown sustained increases in data utilization from 2 to over 100 times previously-observed access patterns from traditional approaches. Significantly increased utilization speed as compared to the traditional approach has also been observed by NOAA data users who have volunteered their experiences on these cloud-based systems. The potential for implementing and sustaining the alternate cloud-based approach as part of a change in operational data utilization strategies will be discussed.
NASA Astrophysics Data System (ADS)
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
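Since the five categories are treated as independent, their contributions for each tissue group combine in quadrature, and one plausible reading of the site-level composite is a tissue-proportion-weighted combination:

```latex
\sigma_{\text{tissue}} = \sqrt{\textstyle\sum_{i=1}^{5} \sigma_i^2},
\qquad
\sigma_{\text{site}} = \sum_{t \in \{\text{lung},\,\text{soft},\,\text{bone}\}} w_t\,\sigma_t,
\qquad \sum_t w_t = 1,
```

where \sigma_i is the uncertainty of category i and w_t is the relative proportion of tissue group t for the given tumor site; the exact weighting used in the study may differ.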
Quenching the XXZ spin chain: quench action approach versus generalized Gibbs ensemble
NASA Astrophysics Data System (ADS)
Mestyán, M.; Pozsgay, B.; Takács, G.; Werner, M. A.
2015-04-01
Following our previous work (Pozsgay et al 2014 Phys. Rev. Lett. 113 117203) we present here a detailed comparison of the quench action approach and the predictions of the generalized Gibbs ensemble, with the result that while the quench action formalism correctly captures the steady state, the GGE does not give a correct description of local short-distance correlation functions. We extend our studies to include another initial state, the so-called q-dimer state. We present important details of our construction, including new results concerning exact overlaps for the dimer and q-dimer states, and we also give an exact solution of the quench-action-based overlap-TBA for the q-dimer. Furthermore, we extend our computations to include the xx spin correlations besides the zz correlations treated previously, and give a detailed discussion of the underlying reasons for the failure of the GGE, especially in the light of new developments.
Hurricane Harvey Rainfall, Did It Exceed PMP and What are the Implications?
NASA Astrophysics Data System (ADS)
Kappel, B.; Hultstrand, D.; Muhlestein, G.
2017-12-01
Rainfall resulting from Hurricane Harvey reached historic levels over the coastal regions of Texas and Louisiana during the last week of August 2017. Although extreme rainfall from a landfalling tropical system is not uncommon in the region, Harvey was unique in that it persisted over the same general location for several days, producing volumes of rainfall not previously observed in the United States. Devastating flooding and severe stress to infrastructure in the region were the result. Coincidentally, Applied Weather Associates had recently completed an updated statewide Probable Maximum Precipitation (PMP) study for Texas, and this storm proved to be a real-time test of the adequacy of those values. AWA calculates PMP following a storm-based approach; this same approach was used in the HMRs. Therefore, inclusion of all PMP-type storms is critically important to ensuring that appropriate PMP values are produced. This presentation will discuss the analysis of the Harvey rainfall using the Storm Precipitation Analysis System (SPAS) program used to analyze all storms in PMP development, compare the results of the Harvey rainfall analysis against previous similar storms, and provide comparisons of the Harvey rainfall against previous and current PMP depths. Discussion will be included regarding the implications of the storm for previous and future PMP estimates, dam safety design, and infrastructure vulnerable to extreme flooding.
Membrane-Based Characterization of a Gas Component — A Transient Sensor Theory
Lazik, Detlef
2014-01-01
Based on a multi-gas solution-diffusion problem for a dense symmetrical membrane this paper presents a transient theory of a planar, membrane-based sensor cell for measuring gas from both initial conditions: dynamic and thermodynamic equilibrium. Using this theory, the ranges for which previously developed, simpler approaches are valid will be discussed; these approaches are of vital interest for membrane-based gas sensor applications. Finally, a new theoretical approach is introduced to identify varying gas components by arranging sensor cell pairs resulting in a concentration independent gas-specific critical time. Literature data for the N2, O2, Ar, CH4, CO2, H2 and C4H10 diffusion coefficients and solubilities for a polydimethylsiloxane membrane were used to simulate gas specific sensor responses. The results demonstrate the influence of (i) the operational mode; (ii) sensor geometry and (iii) gas matrices (air, Ar) on that critical time. Based on the developed theory the case-specific suitable membrane materials can be determined and both operation and design options for these sensors can be optimized for individual applications. The results of mixing experiments for different gases (O2, CO2) in a gas matrix of air confirmed the theoretical predictions. PMID:24608004
NASA Astrophysics Data System (ADS)
Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya
2017-01-01
In this paper, a new chaos-based partial image encryption scheme is proposed, based on substitution boxes (S-boxes) constructed from a chaotic system and a Linear Fractional Transform (LFT). It encrypts only the requisite parts of the sensitive information in the Lifting-Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the properties of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys are used instead of the fixed keys used in other approaches, both to control the encryption process and to resist attacks. The new S-box was constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of the S-box and chaotic systems strengthened the whole encryption performance and enlarged the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem showed high performance and great potential for prominent prevalence in cryptographic applications.
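The block-permutation phase can be illustrated with a toy chaotic permutation: a logistic map, keyed by its initial condition, generates a sequence whose sort order shuffles the blocks. This is a didactic sketch only; the actual scheme operates on LWT coefficients with an LFT-based S-box and dynamic keys:

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x); the secret key is x0."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def permute_blocks(blocks, key):
    """Shuffle image blocks by the sort order of a chaotic sequence."""
    order = np.argsort(logistic_sequence(key, len(blocks)))
    return [blocks[i] for i in order], order

def unpermute_blocks(permuted, order):
    restored = [None] * len(permuted)
    for dst, src in enumerate(order):
        restored[src] = permuted[dst]
    return restored

blocks = ["B0", "B1", "B2", "B3", "B4"]
scrambled, order = permute_blocks(blocks, key=0.3141592653)
assert unpermute_blocks(scrambled, order) == blocks
```

Key sensitivity follows from the map's chaotic dynamics: a tiny change in x0 yields a completely different permutation after a few iterations.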
Towards a Viscous Wall Model for Immersed Boundary Methods
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is the inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. The inefficiency of resolving the thin boundary layer is associated with the use of constant-aspect-ratio Cartesian grid cells, whereas conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow field away from the walls. Wall-modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually utilize only local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall-modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.
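The streamwise history that distinguishes the integral approach is carried by an ordinary differential equation marched along the wall; the classical von Kármán momentum-integral relation (quoted here for incompressible flow as a representative example of what such methods solve) reads

```latex
\frac{d\theta}{dx} + (2 + H)\,\frac{\theta}{U_e}\,\frac{dU_e}{dx} = \frac{C_f}{2},
```

where \theta is the momentum thickness, H the shape factor, U_e the boundary-layer edge velocity, and C_f the skin-friction coefficient; integrating in x retains upstream information that purely local wall models discard.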
NASA Astrophysics Data System (ADS)
Liu, Shun; Xu, Jinglei; Yu, Kaikai
2017-06-01
This paper proposes an improved approach for the extraction of pressure fields from velocity data, such as those obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The approach is derived from the Navier-Stokes equations, assuming adiabatic conditions and neglecting viscosity at the flow-field boundaries measured by PIV. The computing method is based on MacCormack's technique from computational fluid dynamics, and the approach is therefore called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, spatial integration, and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantages become more remarkable with shock strengthening. Furthermore, the performance of the MacCormack method is also validated by using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is significant for studies in aerospace engineering, especially the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.
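MacCormack's technique is a two-step predictor-corrector finite-difference scheme well suited to flows with shocks. A minimal 1D sketch for a generic conservation law u_t + f(u)_x = 0, illustrating only the numerics, not the paper's full pressure-reconstruction system:

```python
import numpy as np

def maccormack_step(u, flux, dt, dx):
    """One MacCormack step on a periodic grid: forward-difference predictor,
    backward-difference corrector, averaged."""
    f = flux(u)
    u_pred = u - dt / dx * (np.roll(f, -1) - f)                         # predictor
    f_pred = flux(u_pred)
    return 0.5 * (u + u_pred - dt / dx * (f_pred - np.roll(f_pred, 1)))  # corrector

# Linear advection f(u) = a*u of a smooth bump, CFL = a*dt/dx = 0.5.
a, dx, dt = 1.0, 0.01, 0.005
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(100):
    u = maccormack_step(u, lambda v: a * v, dt, dx)
```

In the pressure-extraction setting, the same predictor-corrector structure would be applied to the governing equations with the PIV-measured velocities as known data.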
Understanding the kinetic mechanism of RNA single base pair formation
Xu, Xiaojun; Yu, Tao; Chen, Shi-Jie
2016-01-01
RNA functions are intrinsically tied to folding kinetics. The most elementary step in RNA folding is the closing and opening of a base pair, and understanding this elementary rate process is the basis for RNA folding kinetics studies. Previous studies mostly focused on the unfolding of base pairs. Here, based on a hybrid approach, we investigate the folding process at the level of single base pairing/stacking. The study, which integrates molecular dynamics simulation, kinetic Monte Carlo simulation, and master equation methods, uncovers two alternative dominant pathways: starting from the unfolded state, the nucleotide backbone first folds to the native conformation, followed by subsequent adjustment of the base conformation. During the base conformational rearrangement, the backbone either retains the native conformation or switches to nonnative conformations in order to lower the kinetic barrier for base rearrangement. The method enables quantification of kinetic partitioning among the different pathways. Moreover, the simulation reveals several intriguing ion binding/dissociation signatures for the conformational changes. Our approach may be useful for developing a base pair opening/closing rate model. PMID:26699466
Genome alignment with graph data structures: a comparison
2014-01-01
Background Recent advances in rapid, low-cost sequencing have opened up the opportunity to study complete genome sequences. The computational approach of multiple genome alignment allows investigation of evolutionarily related genomes in an integrated fashion, providing a basis for downstream analyses such as rearrangement studies and phylogenetic inference. Graphs have proven to be a powerful tool for coping with the complexity of genome-scale sequence alignments. The potential of graphs to intuitively represent all aspects of genome alignments led to the development of graph-based approaches for genome alignment. These approaches construct a graph from a set of local alignments, and derive a genome alignment through identification and removal of graph substructures that indicate errors in the alignment. Results We compare the structures of commonly used graphs in terms of their abilities to represent alignment information. We describe how the graphs can be transformed into each other, and identify and classify graph substructures common to one or more graphs. Based on previous approaches, we compile a list of modifications that remove these substructures. Conclusion We show that crucial pieces of alignment information, associated with inversions and duplications, are not visible in the structure of all graphs. If we neglect vertex or edge labels, the graphs differ in their information content. Still, many ideas are shared among all graph-based approaches. Based on these findings, we outline a conceptual framework for graph-based genome alignment that can assist in the development of future genome alignment tools. PMID:24712884
Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.
Li, Yan; Gu, Leon; Kanade, Takeo
2011-09-01
Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.
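The hypothesis-and-test loop is structurally similar to RANSAC over feature subsets, scored by shape-prediction error. A compact sketch (with a least-squares similarity-transform fit to a mean shape standing in for the paper's Bayesian shape-and-pose inference):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation R, translation t)
    mapping src onto dst; both are (k, 2) arrays (Umeyama-style fit)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)
    sign = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:        # exclude reflections
        sign[-1] = -1.0
    R = U @ np.diag(sign) @ Vt
    scale = (S * sign).sum() / (s ** 2).sum()
    return scale, R, mu_d - scale * (R @ mu_s)

def align_shape(mean_shape, detections, n_hyp=500, subset=4, seed=0):
    """Sample subsets of detected feature points, fit a shape-and-pose
    hypothesis from each, keep the one minimizing a robust prediction error."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_hyp):
        idx = rng.choice(len(mean_shape), size=subset, replace=False)
        scale, R, t = fit_similarity(mean_shape[idx], detections[idx])
        pred = scale * (mean_shape @ R.T) + t
        err = np.median(np.linalg.norm(pred - detections, axis=1))  # outlier-tolerant
        if err < best_err:
            best, best_err = pred, err
    return best, best_err
```

The robust (median) score is what lets gross detection outliers fail to corrupt the winning hypothesis, the property that Gaussian-noise formulations lack.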
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
NASA Astrophysics Data System (ADS)
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
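The revised Monod form couples the usual substrate-saturation term to a thermodynamic driving-force factor. In the style of Jin and Bethke's rate law (a representative form, not necessarily the exact expression used in the study):

```latex
r = k\,X\;\frac{[S]}{K_S + [S]}\;
\left[\,1 - \exp\!\left(\frac{\Delta G_{\mathrm{redox}} + m\,\Delta G_{\mathrm{ATP}}}{\chi R T}\right)\right],
```

where k is the rate constant, X the biomass concentration, \Delta G_{\mathrm{redox}} the Gibbs energy of the catabolic reaction, m\,\Delta G_{\mathrm{ATP}} the energy conserved by ATP synthesis, and \chi the average stoichiometric number. The bracketed factor vanishes as the reaction approaches thermodynamic equilibrium, which is precisely the constraint that purely enzyme-kinetic uptake bounds miss.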
Route Optimization for Offloading Congested Meter Fixes
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon
2016-01-01
The Optimized Route Capability (ORC) concept proposed by the FAA facilitates traffic managers to identify and resolve arrival flight delays caused by bottlenecks formed at arrival meter fixes when there exists imbalance between arrival fixes and runways. ORC makes use of the prediction capability of existing automation tools, monitors the traffic delays based on these predictions, and searches the best reroutes upstream of the meter fixes based on the predictions and estimated arrival schedules when delays are over a predefined threshold. Initial implementation and evaluation of the ORC concept considered only reroutes available at the time arrival congestion was first predicted. This work extends previous work by introducing an additional dimension in reroute options such that ORC can find the best time to reroute and overcome the 'firstcome- first-reroute' phenomenon. To deal with the enlarged reroute solution space, a genetic algorithm was developed to solve this problem. Experiments were conducted using the same traffic scenario used in previous work, when an arrival rush was created for one of the four arrival meter fixes at George Bush Intercontinental Houston Airport. Results showed the new approach further improved delay savings. The suggested route changes from the new approach were on average 30 minutes later than those using other approaches, and fewer numbers of reroutes were required. Fewer numbers of reroutes reduce operational complexity and later reroutes help decision makers deal with uncertain situations.
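A hedged sketch of the genetic algorithm's core loop over the enlarged solution space, where each individual assigns both a reroute and a reroute time to every delayed flight (the encoding and the delay evaluator are placeholders, not the actual implementation):

```python
import random

def evolve_reroutes(n_flights, routes, times, total_delay,
                    pop_size=50, generations=100, p_mut=0.1):
    """total_delay(individual) -> predicted total delay (hypothetical evaluator);
    an individual is a list of (route, reroute_time) genes, one per flight."""
    def random_ind():
        return [(random.choice(routes), random.choice(times))
                for _ in range(n_flights)]

    def mutate(ind):
        return [(random.choice(routes), random.choice(times))
                if random.random() < p_mut else gene for gene in ind]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_delay)        # lower predicted delay = fitter
        elite = pop[: pop_size // 5]     # keep the best 20%
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=total_delay)
```

Encoding the reroute time as a gene is what removes the first-come-first-reroute bias: the search is free to delay a reroute when a later intervention yields greater savings.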
Low-Rank Discriminant Embedding for Multiview Learning.
Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke
2017-11-01
This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood, or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both sides. By further considering the duality between data points and features of multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both sample level and feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in recent literatures.
Young, Brian; Walker, Michael J; Strunce, Joseph; Boyles, Robert
2004-11-01
Case series. To describe an impairment-based physical therapy treatment approach for 4 patients with plantar heel pain. There is limited evidence from clinical trials on which to base treatment decision making for plantar heel pain. Four patients completed a course of physical therapy based on an impairment-based model. All patients received manual physical therapy and stretching. Two patients were also treated with custom orthoses, and 1 patient received an additional strengthening program. Outcome measures included a numeric pain rating scale (NPRS) and self-reported functional status. Symptom duration ranged from 6 to 52 weeks (mean duration ± SD, 33 ± 19 weeks). Treatment duration ranged from 8 to 49 days (mean duration ± SD, 23 ± 18 days), with the number of treatment sessions ranging from 2 to 7 (mode, 3). All 4 patients reported a decrease in NPRS scores from an average (± SD) of 5.8 ± 2.2 to 0 (out of 10) during previously painful activities. Additionally, all patients returned to prior activity levels. In this case series, patients with plantar heel pain treated with an impairment-based physical therapy approach emphasizing manual therapy demonstrated complete pain relief and full return to activities. Further research is necessary to determine the effectiveness of impairment-based physical therapy interventions for patients with plantar heel pain/plantar fasciitis.
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image storage architecture offers better speed and device-consumption efficiency than the previous kernel storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-fold speed boost; and 3) 71.4% energy savings.
Immunotherapy Approaches for Malignant Glioma From 2007 to 2009
Sampson, John H.
2012-01-01
Malignant glioma is a deadly disease for which there have been few therapeutic advances over the past century. Although previous treatments were largely unsuccessful, glioma may be an ideal target for immune-based therapy. Recently, translational research led to several clinical trials based on tumor immunotherapy to treat patients with malignant glioma. Here we review 17 recent glioma immunotherapy clinical trials, published over the past 3 years. Various approaches were used, including passive transfer of naked and radiolabeled antibodies, tumor antigen-specific peptide immunization, and the use of patient tumor cells with or without dendritic cells as vaccines. We compare and discuss the current state of the art of clinical immunotherapy treatment, as well as its limited successes, pitfalls, and future potential. PMID:20424975
Body-Earth Mover's Distance: A Matching-Based Approach for Sleep Posture Recognition.
Xu, Xiaowei; Lin, Feng; Wang, Aosen; Hu, Yu; Huang, Ming-Chun; Xu, Wenyao
2016-10-01
Sleep posture is a key component in sleep quality assessment and pressure ulcer prevention. Currently, body pressure analysis has been a popular method for sleep posture recognition. In this paper, a matching-based approach, Body-Earth Mover's Distance (BEMD), for sleep posture recognition is proposed. BEMD treats pressure images as weighted 2D shapes, and combines EMD and Euclidean distance for similarity measure. Compared with existing work, sleep posture recognition is achieved with posture similarity rather than multiple features for specific postures. A pilot study is performed with 14 persons for six different postures. The experimental results show that the proposed BEMD can achieve 91.21% accuracy, which outperforms the previous method with an improvement of 8.01%.
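Treating two pressure images as weighted 2D shapes, the EMD term can be computed with an optimal-transport solver. A sketch using the POT package (the Euclidean term and the mixing weight alpha are our stand-ins for the paper's exact combination rule):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def bemd(img_a, img_b, alpha=0.5):
    """Distance between two pressure images viewed as weighted 2D shapes:
    EMD between the normalized pressure distributions, blended with the
    Euclidean distance between the raw images."""
    h, w = img_a.shape
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    a = img_a.ravel() / img_a.sum()                   # histograms must sum to one
    b = img_b.ravel() / img_b.sum()
    M = ot.dist(coords, coords, metric="euclidean")   # ground cost between pixels
    emd = ot.emd2(a, b, M)                            # optimal transport cost
    return alpha * emd + (1.0 - alpha) * np.linalg.norm(img_a - img_b)

rng = np.random.default_rng(0)
p1, p2 = rng.random((8, 8)) + 0.01, rng.random((8, 8)) + 0.01
print(bemd(p1, p2))
```

Recognition then reduces to nearest-neighbour matching of a new pressure image against labelled posture templates under this distance.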
Clinical management of poor adherence to CPAP: motivational enhancement.
Aloia, Mark S; Arnedt, J Todd; Riggs, Raine L; Hecht, Jacki; Borrelli, Belinda
2004-01-01
Adherence to continuous positive airway pressure (CPAP) in patients with sleep apnea hypopnea syndrome (SAHS) is poor. Previous studies have attempted to identify specific barriers to treatment, but none has identified the sole cause for the problem. We outline a behavioral approach to the problem of CPAP adherence that is based on the theories of the transtheoretical model and social cognitive theory. We used these theories to guide the development of an intervention based on the methods of motivational interviewing. We present our motivational enhancement therapy for CPAP (ME-CPAP) here, with some brief pilot data to show its efficacy. Finally, we outline some strengths and weaknesses of taking a behavior change approach to the problem of poor CPAP adherence.
Moon, Sanghoon; Kim, Young Jin; Hong, Chang Bum; Kim, Dong-Joon; Lee, Jong-Young; Kim, Bong-Jo
2011-11-01
To date, hundreds of thousands of copy-number variation (CNV) data points have been reported using various platforms. The proportion of Asians in these data is, however, relatively small compared with that of other ethnic groups, such as Caucasians and Yorubas. Because of limitations in platform resolution and the high noise level in signal intensity, in most CNV studies (particularly those using single nucleotide polymorphism arrays) the average number of CNVs in an individual is less than the number of known CNVs. In this study, we ascertained reliable, common CNV regions (CNVRs) and identified actual frequency rates in the Korean population to provide more CNV information. We performed two-stage analyses for detecting structural variations with two platforms. We discovered 576 common CNVRs (88 CNV segments on average in an individual), and 87% (501 of 576) of these CNVRs overlapped by ≥1 bp with previously validated CNV events. Interestingly, from the frequency analysis of CNV profiles, 52 of 576 CNVRs had a frequency rate of <1% in the 8842 individuals. Compared with other common CNV studies, this study found six common CNVRs that were not reported in previous CNV studies. In conclusion, we propose a data-driven detection approach to discover common CNVRs, including those unreported in the previous Korean CNV study, while minimizing false positives. Through our approach, we successfully discovered more common CNVRs than the previous Korean CNV study and conducted frequency analysis. These results will be a valuable resource for determining the effective level of CNVs in the Korean population.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The approach is an extension of the maximum-likelihood expectation-maximization (ML-EM) algorithm to intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
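For orientation, the point-valued ML-EM iteration that NIBEM extends can be written in a few lines; the interval-valued version would propagate lower and upper projection bounds instead of the single vector A @ x. A toy sketch:

```python
# Classical scalar ML-EM reconstruction on a random toy system matrix.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((40, 25))            # system (forward projection) matrix
x_true = rng.random(25)
y = rng.poisson(A @ x_true * 50)    # noisy sinogram counts

x = np.ones(25)                     # non-negative initial activity estimate
sens = A.T @ np.ones(40)            # sensitivity image (column sums of A)
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted projections
    x *= (A.T @ ratio) / sens             # multiplicative ML-EM update

print(np.round(np.corrcoef(x, x_true)[0, 1], 3))
```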
Park, Hyunseok; Magee, Christopher L
2017-01-01
The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations. They have a high potential to miss dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified using a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents. PMID:28135304
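A hedged sketch of the path-search step on a toy citation graph using networkx; persistence scores are assumed to be precomputed, and the greedy neighbor choice is an illustrative simplification of the published procedure:

```python
# From each high-persistence patent, follow the highest-persistence
# predecessor backward and successor forward through the citation DAG.
import networkx as nx

G = nx.DiGraph()  # edge u -> v: patent v cites patent u (knowledge flows u to v)
G.add_edges_from([("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "E")])
persistence = {"A": 0.9, "B": 0.7, "C": 0.2, "D": 0.6, "E": 0.8}

def trace(G, node, direction):
    path = [node]
    while True:
        nbrs = list(G.predecessors(node) if direction == "back"
                    else G.successors(node))
        if not nbrs:
            return path
        node = max(nbrs, key=persistence.get)  # greedy: highest persistence
        path.append(node)

for seed in [n for n, p in persistence.items() if p >= 0.8]:
    back = trace(G, seed, "back")[::-1]
    fwd = trace(G, seed, "fwd")[1:]
    print("main path through", seed, ":", back + fwd)
```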
Tensor scale: An analytic approach with efficient computation and applications
Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura
2015-01-01
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as "tensor scale" using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. In addition, a matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert's structure tensor based diffusive filtering algorithms, and the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid an over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
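A minimal sketch of the central idea on an assumed toy one-dimensional simulator: the GP's predictive standard deviation is added to the measurement noise in the likelihood, so regions where the surrogate is inaccurate do not produce over-confident posteriors. The adaptive design loop and the groundwater model are not reproduced:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward(theta):                 # stand-in for the expensive simulator
    return np.sin(3 * theta) + 0.5 * theta

theta_train = np.linspace(-2, 2, 8)[:, None]
gp = GaussianProcessRegressor(RBF(length_scale=1.0)).fit(
    theta_train, forward(theta_train.ravel()))

y_obs, sigma_meas = forward(0.7), 0.1
thetas = np.linspace(-2, 2, 401)[:, None]
mu, std = gp.predict(thetas, return_std=True)

# Total variance = measurement noise + surrogate approximation error.
post = norm.pdf(y_obs, loc=mu, scale=np.sqrt(sigma_meas**2 + std**2))
post /= np.trapz(post, thetas.ravel())     # flat prior on [-2, 2]
print("posterior mode:", thetas.ravel()[post.argmax()])
```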
Word-level language modeling for P300 spellers based on discriminative graphical models
NASA Astrophysics Data System (ADS)
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
2015-04-01
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
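A hedged sketch of the scoring idea only (the paper performs full graphical-model inference): per-letter classifier log-likelihoods are combined with a word prior over a limited vocabulary, so strong word-level evidence can override an erroneous letter:

```python
import numpy as np

letters = "abcdefghijklmnopqrstuvwxyz"
vocab = {"cat": 0.5, "car": 0.3, "cap": 0.2}       # word prior P(w), toy values

rng = np.random.default_rng(2)
# Toy per-position letter log-likelihoods log P(EEG_t | letter): random,
# then biased toward the letters of "car" to mimic classifier evidence.
loglik = np.log(rng.dirichlet(np.ones(26), size=3))
for t, ch in enumerate("car"):
    loglik[t, letters.index(ch)] += 2.0

def decode(loglik, vocab):
    score = {w: np.log(p) + sum(loglik[t, letters.index(c)]
                                for t, c in enumerate(w))
             for w, p in vocab.items()}
    return max(score, key=score.get)

print(decode(loglik, vocab))   # word-level prior can fix single-letter errors
```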
Goal-Based Domain Modeling as a Basis for Cross-Disciplinary Systems Engineering
NASA Astrophysics Data System (ADS)
Jarke, Matthias; Nissen, Hans W.; Rose, Thomas; Schmitz, Dominik
Small and medium-sized enterprises (SMEs) are important drivers for innovation. In particular, project-driven SMEs that closely cooperate with their customers have specific needs in regard to information engineering of their development process. They need fast requirements capture, since this is most often included in the (unpaid) offer development phase. At the same time, they need to maintain and reuse the knowledge and experiences gathered in previous projects extensively, as this is their core asset. The situation is complicated further if the application field crosses disciplinary boundaries. To bridge these gaps and perspectives, we focus on shared goals and dependencies captured in models at a conceptual level. Such a model-based approach also offers a smarter connection to subsequent development stages, including a high share of automated code generation. In the approach presented here, the agent- and goal-oriented formalism i* is therefore extended by domain models to facilitate information organization. This extension permits a domain model-based similarity search and a model-based transformation towards subsequent development stages. Our approach also addresses the evolution of domain models reflecting the experiences from completed projects. The approach is illustrated with a case study on software-intensive control systems in an SME of the automotive domain.
A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes
Seo, Minseok; Shin, Su-kyung; Kwon, Eun-Young; Kim, Sung-Eun; Bae, Yun-Jung; Lee, Seungyeoun; Sung, Mi-Kyung; Choi, Myung-Sook; Park, Taesung
2016-01-01
Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs) among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest, such as survival time, are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs). However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach we used was a naïve approach that detects DEGs and PAGs separately and then identifies the genes in the intersection of the list of PAGs and DEGs. The second approach we considered was a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs, or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method simultaneously identifies genes that are DEGs and PAGs. This method uses standard regression models but adopts a different null hypothesis from ordinary regression models, which allows us to perform joint identification in one step. The proposed model-based methods were evaluated using experimental data and simulation studies. The proposed methods were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. The model-based approaches provided a larger number of genes that are both DEGs and PAGs than the other methods, and simulation studies showed that they have more power. Through analysis of data from experimental microarrays and simulation studies, the proposed model-based approach was shown to provide more powerful results than the naïve approach and the hierarchical approach. Since our approach is model-based, it is very flexible and can easily handle different types of covariates. PMID:26964035
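A generic one-step analogue, for illustration: regress expression on group and phenotype and F-test the two coefficients jointly. Note that rejecting this standard null only establishes at least one effect; the paper's contribution is precisely a modified null under which rejection implies the gene is both a DEG and a PAG:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 60
group = rng.integers(0, 2, n).astype(float)   # two diet groups
pheno = rng.normal(size=n)                    # e.g. a leptin-like phenotype
expr = 0.8 * group + 0.5 * pheno + rng.normal(size=n)

X_full = np.column_stack([np.ones(n), group, pheno])
X_null = np.ones((n, 1))                      # intercept only: no DEG, no PAG

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rss_f, rss_0 = rss(X_full, expr), rss(X_null, expr)
df1, df2 = 2, n - 3
F = ((rss_0 - rss_f) / df1) / (rss_f / df2)
p = stats.f.sf(F, df1, df2)
print(f"F={F:.2f}, p={p:.2e}")
```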
Pilot dynamics for instrument approach tasks: Full panel multiloop and flight director operations
NASA Technical Reports Server (NTRS)
Weir, D. H.; Mcruer, D. T.
1972-01-01
Measurements and interpretations of single and multiloop pilot response properties during simulated instrument approach are presented. Pilot subjects flew Category 2-like ILS approaches in a fixed-base DC-8 simulator. A conventional instrument panel and controls were used, with simulated vertical gust and glide slope beam bend forcing functions. Reduced and interpreted pilot describing functions and remnant are given for pitch attitude, flight director, and multiloop (longitudinal) control tasks. The response data are correlated with simultaneously recorded eye scanning statistics, previously reported in NASA CR-1535. The resulting combined response and scanning data and their interpretations provide a basis for validating and extending the theory of manual control displays.
An experimental study of nonlinear dynamic system identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1990-01-01
A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included, in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions of the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibrated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
From IHE Audit Trails to XES Event Logs Facilitating Process Mining.
Paster, Ferdinand; Helm, Emmanuel
2015-01-01
Recently, business intelligence approaches such as process mining have been applied to the healthcare domain. The goal of process mining is to gain knowledge about processes, their compliance, and room for improvement by investigating recorded event data. Previous approaches focused on process discovery from event data of various specific systems. IHE, as a globally recognized basis for healthcare information systems, defines in its ATNA profile how real-world events must be recorded in centralized event logs. The following approach presents how audit trails collected by means of ATNA can be transformed to enable process mining. Using the standardized audit trails provides the ability to apply these methods to all IHE-based information systems.
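A minimal sketch of the transformation target, using only the standard library; the toy audit tuples are hypothetical stand-ins for parsed ATNA messages, and only the two XES attributes that process mining minimally needs (concept:name, time:timestamp) are written:

```python
import xml.etree.ElementTree as ET

audit = [  # (case id, activity, ISO timestamp) -- hypothetical parsed events
    ("patient-42", "Patient Record Accessed", "2015-03-01T09:00:00"),
    ("patient-42", "Document Submitted", "2015-03-01T09:05:30"),
]

log = ET.Element("log", {"xes.version": "1.0"})
traces = {}
for case, activity, ts in audit:
    if case not in traces:
        trace = ET.SubElement(log, "trace")
        ET.SubElement(trace, "string",
                      {"key": "concept:name", "value": case})
        traces[case] = trace
    event = ET.SubElement(traces[case], "event")
    ET.SubElement(event, "string", {"key": "concept:name", "value": activity})
    ET.SubElement(event, "date", {"key": "time:timestamp", "value": ts})

print(ET.tostring(log, encoding="unicode"))  # feed to ProM/pm4py for mining
```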
Roeder, Ingo; Herberg, Maria; Horn, Matthias
2009-04-01
Previously, we have modeled hematopoietic stem cell organization by a stochastic, single cell-based approach. Applications to different experimental systems demonstrated that this model consistently explains a broad variety of in vivo and in vitro data. A major advantage of the agent-based model (ABM) is the representation of heterogeneity within the hematopoietic stem cell population. However, this advantage comes at the price of time-consuming simulations when the systems become large. One example in this respect is the modeling of disease and treatment dynamics in patients with chronic myeloid leukemia (CML), where the realistic number of individual cells to be considered exceeds 10⁶. To overcome this deficiency, without losing the representation of the inherent heterogeneity of the stem cell population, we here propose to approximate the ABM by a system of partial differential equations (PDEs). The major benefit of such an approach is its independence from the size of the system. Although this mean-field approach includes a number of simplifying assumptions compared to the ABM, it retains the key structure of the model, including the "age"-structure of stem cells. We show that the PDE model qualitatively and quantitatively reproduces the results of the agent-based approach.
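A generic sketch of the numerics implied by such an age-structured mean-field model: an upwind discretization of du/dt + du/da = -k(a) u with an influx boundary condition. The paper's actual PDE system for CML has additional compartments and transitions; the rates here are illustrative:

```python
import numpy as np

na, da, dt = 200, 0.05, 0.02           # age grid and time step (dt < da: CFL)
ages = np.arange(na) * da
k = 0.1 + 0.05 * ages                  # age-dependent loss rate (toy choice)
u = np.exp(-ages)                      # initial age profile

for _ in range(500):
    influx = 1.0                       # boundary condition: new cells at age 0
    du_da = np.empty_like(u)
    du_da[0] = (u[0] - influx) / da    # upwind difference at the boundary
    du_da[1:] = (u[1:] - u[:-1]) / da
    u = u + dt * (-du_da - k * u)      # transport along age plus loss

print("total cells:", np.trapz(u, ages).round(3))
```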
Absolute order-of-magnitude reasoning applied to a social multi-criteria evaluation framework
NASA Astrophysics Data System (ADS)
Afsordegan, A.; Sánchez, M.; Agell, N.; Aguado, J. C.; Gamboa, G.
2016-03-01
A social multi-criteria evaluation framework for solving a real-case problem of selecting a wind farm location in the regions of Urgell and Conca de Barberá in Catalonia (northeast of Spain) is studied. This paper applies a qualitative multi-criteria decision analysis approach based on linguistic label assessment that is able to address uncertainty and deal with different levels of precision. The method is based on qualitative reasoning, an artificial intelligence technique for assessing and ranking multi-attribute alternatives with linguistic labels under uncertainty. It is suitable for problems in the social domain, such as energy planning, which require the construction of a dialogue process among many social actors and involve a high level of complexity and uncertainty. The method is compared with an existing approach that has been applied previously to the wind farm location problem: an outranking method based on Condorcet's original method. The results obtained by both approaches are analysed, and their performance in the selection of the wind farm location is compared with respect to their aggregation procedures. Although results show that both methods lead to similar rankings of the alternatives, the study highlights both their advantages and drawbacks.
Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.
2011-01-01
Previous studies of the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values, though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067
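A hedged sketch of the FRF-matching approach with an assumed Kelvin-Voigt shear modulus and a simplified wave model (both illustrative assumptions, not the authors' formulation); real and imaginary parts of the complex misfit are stacked so an ordinary least-squares solver can fit the coefficients:

```python
import numpy as np
from scipy.optimize import least_squares

rho, r = 1000.0, 0.02                      # density (kg/m^3), radius (m)
omega = 2 * np.pi * np.linspace(100, 1000, 40)

def frf(params, omega):
    mu, eta = params
    g = mu + 1j * omega * eta              # Kelvin-Voigt complex shear modulus
    k = omega * np.sqrt(rho / g)           # shear wavenumber (illustrative)
    return np.exp(-1j * k * r) / np.sqrt(r)

true = (2500.0, 1.5)
meas = frf(true, omega) * (1 + 0.02 * np.random.default_rng(4).normal(size=40))

def residuals(params):
    d = frf(params, omega) - meas
    return np.concatenate([d.real, d.imag])  # complex FRF -> real residuals

fit = least_squares(residuals, x0=[1000.0, 0.5], bounds=([1, 0], [1e5, 50]))
print("mu, eta =", fit.x.round(2))
```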
Baas, Matthijs; Nijstad, Bernard A; Boot, Nathalie C; De Dreu, Carsten K W
2016-06-01
Although many believe that creativity associates with a vulnerability to psychopathology, research findings are inconsistent. Here we address this possible linkage between risk of psychopathology and creativity in nonclinical samples. We propose that propensity for specific psychopathologies can be linked to basic motivational approach and avoidance systems, and that approach and avoidance motivation differentially influences creativity. Based on this reasoning, we predict that propensity for approach-based psychopathologies (e.g., positive schizotypy and risk of bipolar disorder) associates with increased creativity, whereas propensity for avoidance-based psychopathologies (e.g., anxiety, negative schizotypy, and depressive mood) associates with reduced creativity. Previous meta-analyses resonate with this proposition and showed small positive relations between positive schizotypy and creativity and small negative relations between negative schizotypy and creativity and between anxiety and creativity. To this we add new meta-analytic findings showing that risk of bipolar disorder (e.g., hypomania, mania) positively associates with creativity (k = 28, r = .224), whereas depressive mood negatively associates (albeit weakly) with creativity (k = 39, r = -.064). Our theoretical framework, along with the meta-analytic results, indicates when and why specific psychopathologies, and their inclinations, associate with increased or, instead, reduced creativity.
Research on Capturing of Customer Requirements Based on Innovation Theory
NASA Astrophysics Data System (ADS)
junwu, Ding; dongtao, Yang; zhenqiang, Bao
To capture customer requirements information exactly and effectively, a new customer requirements capturing modeling method is proposed. Based on the analysis of functional requirement models of previous products and the application of the technology system evolution laws of the Theory of Inventive Problem Solving (TRIZ), customer requirements can be evolved from existing product designs by modifying the functional requirement units and confirming the direction of evolutionary design. Finally, a case study is provided to illustrate the feasibility of the proposed approach.
Methods for Reachability-based Hybrid Controller Design
2012-05-10
approaches for airport runways (Teo and Tomlin, 2003). The results of the reachability calculations were validated in extensive simulations as well as ... UAV flight experiments (Jang and Tomlin, 2005; Teo, 2005). While the focus of these previous applications lies largely in safety verification, the work ... B([15, 0], a0) × [−π, π]) \ V, ∀ q_i ∈ Q, where a0 = 30 m is the protected radius (chosen based upon published data of the wingspan of a Boeing KC-135
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely discrete cosine transform (DCT)- and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
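A minimal sketch of the LP-SVD pipeline as described: fit LP (autoregressive) coefficients, build the LP filter's impulse response matrix, and project the epoch onto its left singular vectors. The model order and matrix sizes are arbitrary toy choices:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(5)
x = lfilter([1.0], [1.0, -0.9], rng.normal(size=256))  # toy AR(1) "EEG" epoch

p = 6                                            # LP model order
r = np.correlate(x, x, "full")[len(x) - 1:]      # autocorrelation r[0..]
a = np.linalg.solve(toeplitz(r[:p]), r[1:p + 1]) # Yule-Walker normal equations

h = lfilter([1.0], np.r_[1.0, -a], np.r_[1.0, np.zeros(63)])  # impulse response
H = toeplitz(h, np.r_[h[0], np.zeros(31)])       # 64 x 32 LP filter matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)

features = U.T @ x[:64]                          # project epoch on left SVs
print(features[:5].round(3))
```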
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
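A hedged sketch of the Yitzhaky-and-Peli-style selection loop using scikit-image's Canny detector as the feature detector; the parameter grids and the majority-vote ground-truth estimate are illustrative simplifications of the published procedure:

```python
import numpy as np
from skimage import data, feature, util

img = util.img_as_float(data.camera())
sigmas = [1.0, 2.0, 3.0]
lows = [0.05, 0.10, 0.20]

outputs, params = [], []
for s in sigmas:                       # sweep over parameter combinations
    for lo in lows:
        edges = feature.canny(img, sigma=s,
                              low_threshold=lo, high_threshold=2 * lo)
        outputs.append(edges)
        params.append((s, lo))

stack = np.array(outputs)
est_gt = stack.mean(axis=0) >= 0.5     # estimated ground truth: pixel vote

best, best_d = None, np.inf
for edges, p in zip(stack, params):    # one ROC point per combination
    tpr = (edges & est_gt).sum() / max(est_gt.sum(), 1)
    fpr = (edges & ~est_gt).sum() / max((~est_gt).sum(), 1)
    d = np.hypot(fpr, 1.0 - tpr)       # distance to the ideal corner (0, 1)
    if d < best_d:
        best, best_d = p, d
print("selected (sigma, low_threshold):", best)
```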
Intercomparison of 3D pore-scale flow and solute transport simulation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that 1) explicitly model the three-dimensional geometry of pore spaces and 2) conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), the lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model, or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs on solute transport has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).
NASA Astrophysics Data System (ADS)
Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David
2016-03-01
In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.
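A hedged sketch of the self-learning idea only, with ridge regression standing in for the paper's support vector regression with sparse coding, and without the DTCWT denoising initialization; all sizes and the subsampling factor are toy choices:

```python
import numpy as np
from skimage import data, transform, util
from sklearn.linear_model import Ridge

hr = util.img_as_float(data.camera())
lr = transform.resize(hr, (hr.shape[0] // 2, hr.shape[1] // 2))

def patches(img, size=5):
    v = util.view_as_windows(img, (size, size))
    return v.reshape(-1, size * size)

# Self-learning: train on pairs built from the input's own pyramid,
# (upsampled LR-of-LR patch) -> (center pixel of LR).
lr_lr = transform.resize(lr, (lr.shape[0] // 2, lr.shape[1] // 2))
lr_up = transform.resize(lr_lr, lr.shape)
X, y = patches(lr_up), patches(lr)[:, 12]        # 12 = center of 5x5 patch
model = Ridge(alpha=1.0).fit(X[::7], y[::7])     # subsample for speed

# Apply one scale up: interpolate, then regress the high-res detail.
up = transform.resize(lr, hr.shape)
pred = model.predict(patches(up))
out = up.copy()
out[2:-2, 2:-2] = pred.reshape(up.shape[0] - 4, up.shape[1] - 4)
print("MSE plain resize:", round(float(((up - hr) ** 2).mean()), 6),
      "| MSE self-learned:", round(float(((out - hr) ** 2).mean()), 6))
```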
Boari, Nicola; Gagliardi, Filippo; Roberti, Fabio; Barzaghi, Lina Raffaella; Caputy, Anthony J; Mortini, Pietro
2013-05-01
Several surgical approaches have previously been reported for the treatment of olfactory groove meningiomas (OGM). The trans-frontal-sinus subcranial approach (TFSSA) for the removal of large OGMs is described, comparing it with other reported approaches in terms of advantages and drawbacks. The TFSSA was performed on cadaveric specimens to illustrate the surgical technique. The surgical steps of the TFSSA and the related anatomical pictures are reported. The approach was adopted in a clinical setting; a case illustration is reported to demonstrate the feasibility of the described approach and to provide intraoperative pictures. The TFSSA represents a possible route to treat large OGMs. The subcranial approach provides early devascularization of the tumor, direct tumor access from the base without traction on the frontal lobes, a good overview for dissection of the optic nerves and anterior cerebral arteries, and dural reconstruction with a pedicled pericranial flap.
Quantification of Viral and Prokaryotic Production Rates in Benthic Ecosystems: A Methods Comparison
Rastelli, Eugenio; Dell’Anno, Antonio; Corinaldesi, Cinzia; Middelboe, Mathias; Noble, Rachel T.; Danovaro, Roberto
2016-01-01
Viruses profoundly influence benthic marine ecosystems by infecting and subsequently killing their prokaryotic hosts, thereby impacting the cycling of carbon and nutrients. Previously conducted studies, based on different methodologies, have provided widely differing estimates of the impact of viruses on benthic prokaryotes. There has been no attempt so far to compare these independent approaches, including contextual comparisons among different approaches for sample manipulation (i.e., dilution or not of the sediments during incubations), between methods based on epifluorescence microscopy (EFM) or radiotracers, and between the use of different radiotracers. Therefore, it has been difficult to identify the most suitable methodologies and protocols to be used as standard approaches for the quantification of viral infections of prokaryotes. Here, we compared for the first time different methods for determining viral and prokaryotic production rates in marine sediments collected at two benthic sites, differing in depth and environmental conditions. We used a highly replicated experimental design, testing the potential biases associated with the incubation of sediments as diluted or undiluted. In parallel, we also compared EFM counts with the ³H-thymidine incubations for the determination of viral production rates, and the use of ³H-thymidine versus ³H-leucine radiotracers for the determination of prokaryotic production. We show here that, independent of sediment dilution, EFM-based values of viral production ranged from 1.4 to 4.6 × 10⁷ viruses g⁻¹ h⁻¹, and were similar but overall less variable compared to those obtained by the ³H-thymidine method (0.3 to 9.0 × 10⁷ viruses g⁻¹ h⁻¹). In addition, the prokaryotic production rates were not affected by sediment dilution, and the use of different radiotracers provided very consistent estimates (10.3–35.1 and 9.3–34.6 ng C g⁻¹ h⁻¹ using the ³H-thymidine or ³H-leucine method, respectively). These results indicated that viral lysis was responsible for the abatement of 55–81% of the prokaryotic heterotrophic production, corroborating previous findings of the major role of viruses in benthic deep-sea ecosystems. Moreover, our methodological comparison for the analysis of viral production in marine sediments suggests that microscopy-based approaches are simpler and more cost-effective than those based on radiotracers. These approaches also reduce time to results and overcome issues related to the generation of radioactive waste. PMID:27713739
Coping efficiently with now-relative medical data.
Stantic, Bela; Terenziani, Paolo; Sattar, Abdul
2008-11-06
In medical informatics there is an increasing awareness that temporal information plays a crucial role, so suitable database approaches are needed to store and support it. Specifically, most clinical data are intrinsically temporal, and a relevant part of them are now-relative (i.e., they are valid at the current time). Even though previous studies indicate that the treatment of now-relative data has a crucial impact on efficiency, current approaches have several limitations. In this paper we propose a novel approach, based on a new representation of now and on query transformations. We also experimentally demonstrate that our approach outperforms its best competitors in the literature by a factor of more than ten, in both disk accesses and CPU usage.
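A baseline sketch of the problem being optimized: a NULL sentinel for now plus a COALESCE-transformed query. The paper's contribution is a different representation of now and query transformations that avoid the inefficiencies of exactly this kind of baseline; the schema below is a hypothetical illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE therapy (patient TEXT, drug TEXT, "
           "vt_start TEXT, vt_end TEXT)")        # vt_end NULL => valid "now"
db.executemany("INSERT INTO therapy VALUES (?, ?, ?, ?)", [
    ("p1", "aspirin",  "2008-01-01", "2008-03-01"),
    ("p1", "warfarin", "2008-02-15", None),      # now-relative row
])

# Which therapies were valid at query time t? The NULL end is rewritten
# to the current date at query time.
t = "2008-05-01"
rows = db.execute(
    "SELECT patient, drug FROM therapy "
    "WHERE vt_start <= ? AND ? <= COALESCE(vt_end, DATE('now'))",
    (t, t)).fetchall()
print(rows)   # [('p1', 'warfarin')]
```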
NASA Astrophysics Data System (ADS)
Guo, Yang; Becker, Ute; Neese, Frank
2018-03-01
Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations and (2) fragment-based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster-in-molecule (CIM) approach as the fragment-based method. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single, double and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for the subsystem calculations. Our cluster-in-molecule approach is closely related to, but slightly deviates from, approaches in the literature, since we have avoided real-space cutoffs. Moreover, the distant pair correlations neglected in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both the MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) CIM offers better parallelization opportunities; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and hence allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently encountered cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.
Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng
2013-01-01
Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that cannot be detected by current single-point association analysis. Recently, several model-free methods (e.g., the commonly used information-based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to address this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, could maintain a consistently correct false positive rate. In scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
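For orientation, a generic information-theoretic interaction screen computed from genotype-phenotype contingency tables; the paper's conditional-entropy metrics are defined differently, specifically to keep the false positive rate controlled under strong main effects, so treat this only as the baseline they improve on:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_info(joint):
    """I(X;Y) from a joint probability table, X on rows and Y on columns."""
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

def joint(x, y, nx, ny):
    t = np.zeros((nx, ny))
    np.add.at(t, (x, y), 1)
    return t / len(x)

rng = np.random.default_rng(6)
a, b = rng.integers(0, 3, 2000), rng.integers(0, 3, 2000)  # genotypes 0/1/2
y = ((a == 2) & (b == 2)).astype(int)                      # pure epistasis

ig = (mutual_info(joint(a * 3 + b, y, 9, 2))   # I(A,B;Y), loci jointly coded
      - mutual_info(joint(a, y, 3, 2))         # minus main effect I(A;Y)
      - mutual_info(joint(b, y, 3, 2)))        # minus main effect I(B;Y)
print(f"interaction information: {ig:.4f} bits")
```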
ERIC Educational Resources Information Center
Quennerstedt, Mikael; Annerstedt, Claes; Barker, Dean; Karlefors, Inger; Larsson, Håkan; Redelius, Karin; Öhman, Marie
2014-01-01
This paper outlines a method for exploring learning in educational practice. The suggested method combines an explicit learning theory with robust methodological steps in order to explore aspects of learning in school physical education. The design of the study is based on sociocultural learning theory, and the approach adds to previous research…
USDA-ARS?s Scientific Manuscript database
Nitrous oxide (N2O) emissions are increasing at an unprecedented rate due to increased nitrogen (N) fertilizers use. Thus, new innovative management tools are needed to reduce emissions. One potential approach is the use of microbial inoculants in agricultural production. In a previous incubation st...
ERIC Educational Resources Information Center
Schmitz, Birgit; Klemke, Roland; Specht, Marcus
2013-01-01
Mobile and in particular pervasive games are a strong component of future scenarios for teaching and learning. Based on results from a previous review of practical papers, this work explores the educational potential of pervasive games for learning by analysing underlying game mechanisms. In order to determine and classify cognitive and affective…
Cell culture experiments planned for the space bioreactor
NASA Technical Reports Server (NTRS)
Morrison, Dennis R.; Cross, John H.
1987-01-01
Culturing of cells in a pilot-scale bioreactor remains to be done in microgravity. An approach is presented based on several studies of cell culture systems. Previous and current cell culture research in microgravity which is specifically directed towards development of a space bioprocess is described. Cell culture experiments planned for a microgravity sciences mission are described in abstract form.
ERIC Educational Resources Information Center
Guymon, Ronald E.
An innovative classroom-based approach to reading instruction in the context of Spanish instruction was proposed. The effects of this instruction on the pronunciation ability of students were analyzed. The subjects were 30 adult missionary trainees who had no previous exposure to Spanish. The dependent variable was measured using two instruments.…
ERIC Educational Resources Information Center
Gibbard, Deborah; Smith, Clare
2016-01-01
Primary language delay remains one of the most prevalent developmental delays in early childhood, particularly in disadvantaged areas. Previous research has established language difficulties and social disadvantage being particular risk factors for adverse outcomes later in life. To help prevent low educational achievement and poorer outcomes,…
A Qualitative Approach to Understanding the Role of Lecture Capture in Student Learning Experiences
ERIC Educational Resources Information Center
Hall, Gareth; Ivaldi, Antonia
2017-01-01
Lectures continue to be the dominant form of university teaching, and lecture capture technologies are tentatively taken up to support this form of delivery, rather than being used as a viable alternative. Much of the previous research, however, has been self-reports or survey-based, with far less attention given to qualitative explorations. This…
Cellular and Nuclear Alignment Analysis for Determining Epithelial Cell Chirality
Raymond, Michael J.; Ray, Poulomi; Kaur, Gurleen; Singh, Ajay V.; Wan, Leo Q.
2015-01-01
Left-right (LR) asymmetry is a biologically conserved property of living organisms that can be observed in the asymmetrical arrangement of organs and tissues and in tissue morphogenesis, such as the directional looping of the gastrointestinal tract and heart. The expression of LR asymmetry in embryonic tissues can be appreciated in biased cell alignment. Previously, an in vitro chirality assay was reported in which multiple cells were patterned on microscale defined geometries and the cell phenotype-dependent LR asymmetry, or cell chirality, was quantified. However, the morphology and chirality of individual cells on micropatterned surfaces have not been well characterized. Here, a Python-based algorithm was developed to identify and quantify immunofluorescence-stained individual epithelial cells on multicellular patterns. This approach not only produces results similar to the image intensity gradient-based method reported previously, but can also capture properties of single cells such as area and aspect ratio. We also found that cell nuclei exhibited biased alignment. Around 35% of cells were misaligned; these were typically smaller and less elongated. This new image analysis approach is an effective tool for measuring single-cell chirality inside multicellular structures and can potentially help unveil biophysical mechanisms underlying cellular chiral bias both in vitro and in vivo. PMID:26294010
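A hedged sketch of the single-cell analysis step with scikit-image: label cells in a binary stain mask, extract per-cell area, aspect ratio and orientation, and flag misaligned cells. The toy mask and the 30-degree cutoff are illustrative, not the paper's calibrated pipeline:

```python
import numpy as np
from skimage import measure, draw

mask = np.zeros((200, 200), bool)
rng = np.random.default_rng(7)
for _ in range(25):                       # toy elongated "cells"
    cy, cx = rng.integers(15, 185, 2)
    rot = 0.3 + rng.normal(scale=0.35)    # mostly aligned population
    rr, cc = draw.ellipse(cy, cx, 4, 9, rotation=rot, shape=mask.shape)
    mask[rr, cc] = True

props = measure.regionprops(measure.label(mask))
orients = np.array([p.orientation for p in props])
aspect = [p.major_axis_length / p.minor_axis_length for p in props]

dominant = np.median(orients)             # dominant alignment direction
dev = np.abs((orients - dominant + np.pi / 2) % np.pi - np.pi / 2)
misaligned = dev > np.radians(30)
print(f"{len(props)} cells, {misaligned.mean():.0%} misaligned; "
      f"median aspect ratio {np.median(aspect):.2f}")
```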
An Ensemble Approach for Drug Side Effect Prediction
Jahid, Md Jamiul; Ruan, Jianhua
2014-01-01
In silico prediction of drug side-effects in the early stages of drug development is becoming more popular nowadays, as it not only reduces the time for drug design but also reduces drug development costs. In this article we propose an ensemble approach to predict the side-effects of drug molecules based on their chemical structure. Our idea originates from the observation that similar drugs have similar side-effects. Based on this observation, we design an ensemble approach that combines the results from different classification models, where each model is generated by a different set of similar drugs. We applied our approach to 1385 side-effects in the SIDER database for 888 drugs. Results show that our approach outperformed previously published approaches and standard classifiers. Furthermore, we applied our method to a number of uncharacterized drug molecules in the DrugBank database and predicted their side-effect profiles for future usage. Results from various sources confirm that our method is able to predict the side-effects of uncharacterized drugs and, more importantly, to predict rare side-effects which are often ignored by other approaches. The method described in this article can be useful for predicting side-effects at an early stage of drug design in order to reduce experimental cost and time. PMID:25327524
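A hedged sketch of the ensemble idea for one side-effect: each base model is trained on a different subset of drugs similar to the query (Tanimoto similarity on binary fingerprints), and the predictions are averaged. Fingerprints and labels here are random toys, not SIDER data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
fps = rng.integers(0, 2, (120, 64))             # toy structural fingerprints
labels = (fps[:, :8].sum(1) > 4).astype(int)    # toy side-effect labels

def tanimoto(a, B):
    return (a & B).sum(1) / ((a | B).sum(1) + 1e-9)

def ensemble_predict(query, k=30, n_models=5):
    top = np.argsort(-tanimoto(query, fps))[:2 * k]  # pool of similar drugs
    probs = []
    for _ in range(n_models):
        idx = rng.permutation(top)[:k]          # each model: random subset
        ys = labels[idx]
        if ys.min() == ys.max():                # degenerate subset: no fit
            probs.append(float(ys[0]))
            continue
        clf = LogisticRegression(max_iter=1000).fit(fps[idx], ys)
        probs.append(clf.predict_proba(query[None])[0, 1])
    return float(np.mean(probs))

query = rng.integers(0, 2, 64)
print(f"P(side-effect) = {ensemble_predict(query):.2f}")
```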
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest-descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
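For contrast with the neural-network algorithms discussed, a minimal linear filtered-x LMS loop; the filtered-x gradient idea (filtering the reference through the secondary-path model before the weight update) is the same one the paper's recursive-least-squares variants build on. Paths and step size are toy values:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(9)
N, L = 4000, 8
x = rng.normal(size=N)                    # reference (noise source) signal
d = lfilter([0.0, 0.8, 0.3], [1.0], x)    # disturbance via primary path P
S = np.array([0.6, 0.2])                  # secondary path model (assumed known)
xf = lfilter(S, [1.0], x)                 # "filtered-x": reference through S

w = np.zeros(L)                           # adaptive FIR controller
xbuf, xfbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(S))
mu, e = 0.01, np.zeros(N)

for n in range(N):
    xbuf = np.r_[x[n], xbuf[:-1]]
    ybuf = np.r_[w @ xbuf, ybuf[:-1]]     # controller output history
    e[n] = d[n] - S @ ybuf                # residual at the error microphone
    xfbuf = np.r_[xf[n], xfbuf[:-1]]
    w += mu * e[n] * xfbuf                # filtered-x LMS weight update

print("error power, first vs last quarter:",
      (e[:N // 4] ** 2).mean().round(4), (e[-N // 4:] ** 2).mean().round(4))
```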
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance-constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance-constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
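A minimal sketch of the standard Gaussian chance-constraint tightening used in this line of work: P(a'x <= b) >= 1 - eps becomes a deterministic constraint on the mean with a quantile-scaled margin. Numbers are toy values, not the paper's aircraft scenario; optimizing the feedback gain shrinks the closed-loop covariance Sigma and therefore this margin:

```python
import numpy as np
from scipy.stats import norm

a, b, eps = np.array([1.0, 0.0]), 5.0, 0.05   # constraint: stay left of x1 = 5
Sigma = np.diag([0.3, 0.3])                   # closed-loop position covariance

def satisfies_chance_constraint(mean):
    margin = norm.ppf(1 - eps) * np.sqrt(a @ Sigma @ a)
    return a @ mean + margin <= b             # tightened deterministic test

for mean in (np.array([3.0, 1.0]), np.array([4.8, 1.0])):
    print(mean, satisfies_chance_constraint(mean))
```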
Cobalt: A GPU-based correlator and beamformer for LOFAR
NASA Astrophysics Data System (ADS)
Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.
2018-04-01
For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.
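A numpy sketch of the FX-style correlator core that software systems of this kind typically implement: channelize each station with an FFT (F), then cross-multiply and accumulate every station pair per channel (X). Real systems add polyphase filter banks, delay compensation, and the beamformer path, none of which is shown here:

```python
import numpy as np

n_station, n_chan, n_spectra = 4, 64, 100
rng = np.random.default_rng(10)
common = rng.normal(size=n_chan * n_spectra)           # correlated sky signal
voltages = common + 0.5 * rng.normal(size=(n_station, n_chan * n_spectra))

# F step: split each station stream into blocks and FFT into channels.
spectra = np.fft.fft(voltages.reshape(n_station, n_spectra, n_chan), axis=2)

# X step: visibility = time-averaged cross-power for every station pair.
vis = np.einsum("ask,bsk->abk", spectra, np.conj(spectra)) / n_spectra
print("pair (0,1), channel 0 cross-power:", np.round(vis[0, 1, 0], 2))
```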
A unified EM approach to bladder wall segmentation with coupled level-set constraints
Han, Hao; Li, Lihong; Duan, Chaijie; Zhang, Hao; Zhao, Yang; Liang, Zhengrong
2013-01-01
Magnetic resonance (MR) imaging-based virtual cystoscopy (VCys), as a non-invasive, safe and cost-effective technique, has shown its promising virtue for early diagnosis and recurrence management of bladder carcinoma. One primary goal of VCys is to identify bladder lesions with abnormal bladder wall thickness, and consequently a precise segmentation of the inner and outer borders of the wall is required. In this paper, we propose a unified expectation-maximization (EM) approach to the maximum-a-posteriori (MAP) solution of bladder wall segmentation, by integrating a novel adaptive Markov random field (AMRF) model and the coupled level-set (CLS) information into the prior term. The proposed approach is applied to the segmentation of T1-weighted MR images, where the wall is enhanced while the urine and surrounding soft tissues are suppressed. By introducing scale-adaptive neighborhoods as well as adaptive weights into the conventional MRF model, the AMRF model takes into account the local information more accurately. In order to mitigate the influence of image artifacts adjacent to the bladder wall and to preserve the continuity of the wall surface, we apply geometrical constraints on the wall using our previously developed CLS method. This paper not only evaluates the robustness of the presented approach against the known ground truth of simulated digital phantoms, but further compares its performance with our previous CLS approach via both volunteer and patient studies. Statistical analysis on experts’ scores of the segmented borders from both approaches demonstrates that our new scheme is more effective in extracting the bladder wall. Based on the wall thickness calibrated from the segmented single-layer borders, a three-dimensional virtual bladder model can be constructed and the wall thickness can be mapped on to the model, where the bladder lesions will be eventually detected via experts’ visualization and/or computer-aided detection. PMID:24001932
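The EM core in stripped-down form, for orientation: maximum-likelihood EM for a two-class Gaussian intensity mixture. The paper's method wraps this E/M iteration with an adaptive MRF prior and coupled level-set constraints in the MAP term, which are omitted here; the intensity values are toy stand-ins for T1-weighted data:

```python
import numpy as np

rng = np.random.default_rng(11)
wall = rng.normal(200, 15, 500)            # bright wall voxels
bg = rng.normal(80, 20, 1500)              # suppressed urine / soft tissue
x = np.concatenate([wall, bg])

mu, sd = np.array([60.0, 150.0]), np.array([30.0, 30.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E step: posterior responsibility of each class for each voxel.
    lik = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd * pi
    resp = lik / lik.sum(1, keepdims=True)
    # M step: update class parameters from the responsibilities.
    nk = resp.sum(0)
    mu = (resp * x[:, None]).sum(0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(0) / nk)
    pi = nk / len(x)

print("class means:", mu.round(1), "mixing:", pi.round(2))
```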
Varga, Peter; Inzana, Jason A; Schwiedrzik, Jakob; Zysset, Philippe K; Gueorguiev, Boyko; Blauth, Michael; Windolf, Markus
2017-05-01
High incidence and increased mortality related to secondary, contralateral proximal femoral fractures may justify invasive prophylactic augmentation that reinforces the osteoporotic proximal femur to reduce fracture risk. Bone cement-based approaches (femoroplasty) may deliver the required strengthening effect; however, the significant variation in the results of previous studies calls for a systematic analysis and optimization of this method. Our hypothesis was that efficient generalized augmentation strategies can be identified via computational optimization. This study investigated, by means of finite element analysis, the effect of cement location and volume on the biomechanical properties of fifteen proximal femora in sideways fall. Novel cement cloud locations were developed using the principles of bone remodeling and compared to the "single central" location that was previously reported to be optimal. The new augmentation strategies provided significantly greater biomechanical benefits compared to the "single central" cement location. Augmenting with approximately 12 ml of cement in the newly identified location achieved increases of 11% in stiffness, 64% in yield force, 156% in yield energy and 59% in maximum force, on average, compared to the non-augmented state. The weaker bones experienced a greater biomechanical benefit from augmentation than stronger bones. The effect of cement volume on the biomechanical properties was approximately linear. Results of the "single central" model showed good agreement with previous experimental studies. These findings indicate enhanced potential of cement-based prophylactic augmentation using the newly developed cementing strategy. Future studies should determine the required level of strengthening and confirm these numerical results experimentally.
Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda
2018-02-01
Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Compared with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare centers for the elderly. It has the potential to help prevent future falls in the elderly. PMID:29389857
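A hedged sketch of the two estimators being compared: the weighted-mean COP and an individual-specific model fit by least squares on polynomial features of the sensor readings. The sensor layout, the toy FSR nonlinearity, and the quadratic feature choice are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(12)
pos = rng.uniform(0.0, 1.0, (8, 2))               # FSR positions on the insole
F = rng.uniform(0.1, 1.0, (400, 8))               # raw FSR readings per frame

# Toy sensor nonlinearity: true forces differ from FSR readings, so the
# reference COP (as a force plate would see it) deviates from a naive
# weighted mean of the raw readings.
F_true = F ** 1.5
cop_ref = (F_true @ pos) / F_true.sum(1, keepdims=True)

cop_wm = (F @ pos) / F.sum(1, keepdims=True)      # weighted-mean estimate

def feats(F):                                     # quadratic feature map
    return np.column_stack([np.ones(len(F)), F, F ** 2])

train, test = slice(0, 300), slice(300, 400)
B, *_ = np.linalg.lstsq(feats(F[train]), cop_ref[train], rcond=None)
cop_model = feats(F[test]) @ B                    # individual-specific model

rmse = lambda a, b: np.sqrt(((a - b) ** 2).mean(0))
print("weighted mean RMSE:", rmse(cop_wm[test], cop_ref[test]).round(4))
print("fitted model RMSE: ", rmse(cop_model, cop_ref[test]).round(4))
```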
Komisarchik, G; Gelbstein, Y; Fuks, D
2016-11-30
Lead telluride based compounds are of great interest due to their enhanced thermoelectric transport properties. Nevertheless, the donor-type impurities available in this class of materials are currently limited, and alternative donor impurities are still required for optimizing the thermoelectric performance. In the current research, titanium is examined as a donor impurity in PbTe. Although titanium is known to form resonant levels above the conduction band in PbTe, it does not enhance the thermopower beyond the classical predictions. Recent experiments showed that alloying with a small amount of Ti (∼0.1 at%) gives a significant increase in the figure of merit. In the current research, ab initio calculations were applied in order to correlate the reported experimental results with a thermoelectric optimization model. It was found that a Ti concentration of ∼1.4 at% in the Pb sublattice is expected to maximize the thermoelectric power factor. Using a statistical thermodynamic approach, and in agreement with the previously reported appearance of a secondary intermetallic phase, the actual Ti solubility limit in PbTe is found to be ∼0.3 at%. Based on the proposed model, the formation of the previously observed secondary phase is attributed to phase-separation reactions, characterized by a positive enthalpy of formation in the system. By extrapolating the obtained ab initio results, it is demonstrated that Ti-doping concentrations lower than those previously reported experimentally are expected to provide power factor values close to the maximum, making Ti doping a promising route toward highly efficient n-type PbTe-based thermoelectric materials.
Partially restored resting-state functional connectivity in women recovered from anorexia nervosa.
Boehm, Ilka; Geisler, Daniel; Tam, Friederike; King, Joseph A; Ritschel, Franziska; Seidel, Maria; Bernardoni, Fabio; Murr, Julia; Goschke, Thomas; Calhoun, Vince D; Roessner, Veit; Ehrlich, Stefan
2016-10-01
We have previously shown increased resting-state functional connectivity (rsFC) in the frontoparietal network (FPN) and the default mode network (DMN) in patients with acute anorexia nervosa. Based on these findings, we investigated within-network rsFC in patients recovered from anorexia nervosa to examine whether these abnormalities are a state or trait marker of the disease. To extend the understanding of functional connectivity in patients with anorexia nervosa, we also estimated rsFC between large-scale networks. Girls and women recovered from anorexia nervosa and pairwise age- and sex-matched healthy controls underwent a resting-state fMRI scan. Using independent component analysis (ICA), we isolated the FPN, DMN and salience network. We used standard comparisons as well as a hypothesis-based approach to test the findings of our previous rsFC study in this recovered cohort. Temporal correlations between network time-course pairs were computed to investigate functional network connectivity (FNC). Thirty-one patients recovered from anorexia nervosa and 31 controls participated in our study. Standard group comparisons revealed reduced rsFC between the dorsolateral prefrontal cortex (dlPFC) and the FPN in the recovered group. Using the hypothesis-based approach, we extended the previous finding of increased rsFC between the angular gyrus and the FPN to patients recovered from anorexia nervosa. No group differences in FNC were revealed. The study design did not allow us to conclude that the difference found in rsFC constitutes a scar effect of the disease. This study suggests that some abnormal rsFC patterns found in patients recovered from anorexia nervosa normalize after long-term weight restoration, while distorted rsFC in the FPN, a network that has been associated with cognitive control, may constitute a trait marker of the disorder.
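The FNC computation mentioned above (temporal correlations between network time-course pairs) is straightforward to illustrate. The sketch below uses random placeholder time courses for the three networks named in the abstract; real analyses would use subject-specific ICA time courses, typically after band-pass filtering and other preprocessing not shown here.

```python
import numpy as np

# Hedged FNC sketch: the time courses are random placeholders, not fMRI data.
rng = np.random.default_rng(1)
timecourses = {'FPN': rng.normal(size=200),
               'DMN': rng.normal(size=200),
               'salience': rng.normal(size=200)}

names = list(timecourses)
tc = np.vstack([timecourses[n] for n in names])
fnc = np.corrcoef(tc)  # 3x3 matrix of network-pair Pearson correlations

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f'{names[i]}-{names[j]}: r = {fnc[i, j]:.2f}')
```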
Waghorn, Geoffrey; Dias, Shannon; Gladman, Beverley; Harris, Meredith; Saha, Sukanta
2014-12-01
The Individual Placement and Support (IPS) approach is an evidence-based form of supported employment for people with severe and persistent mental illness. This approach is not yet widely available in Australia, even though there is mounting evidence of its generalisability outside the USA. One previous Australian randomised controlled trial found that IPS is effective for young people with first-episode psychosis. The aim of the current trial was to assess the effectiveness of evidence-based supported employment when implemented for Australian adult consumers of public mental health services by utilising existing service systems. A four-site randomised controlled trial (n = 208) was conducted in Brisbane (two sites), Townsville and Cairns. The intervention consisted of an IPS supported employment service hosted by a community mental health team. The control condition was delivered at each site by mental health teams referring consumers to other disability employment services in the local area. At 12 months, those in the IPS condition had 2.4 times greater odds of commencing employment than those in the control condition (42.5% vs. 23.5%). The conditions did not differ on secondary employment outcomes, including job duration, hours worked, or job diversity. Attrition was higher than expected in both conditions, with 28.4% completing the baseline interview but taking no further part in the study. The results support previous international findings that IPS supported employment is more effective than non-integrated supported employment. IPS can be successfully implemented this way in Australia, but with a loss of effect strength compared to previous USA trials. © 2014 Occupational Therapy Australia.
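The headline odds ratio can be checked directly from the two employment rates, since the odds of an event with probability p are p/(1 - p):

```python
# Quick arithmetic check of the reported odds ratio from the two
# employment rates (42.5% IPS vs. 23.5% control).
def odds(p):
    return p / (1 - p)

odds_ratio = odds(0.425) / odds(0.235)
print(round(odds_ratio, 1))  # -> 2.4, matching the reported figure
```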
Mateus, Octávio; Benson, Roger B.J.
2015-01-01
Diplodocidae are among the best known sauropod dinosaurs. Several species were described in the late 1800s or early 1900s from the Morrison Formation of North America. Since then, numerous additional specimens were recovered in the USA, Tanzania, Portugal, and Argentina, as well as possibly Spain, England, Georgia, Zimbabwe, and Asia. To date, the clade includes about 12 to 15 nominal species, some of them with questionable taxonomic status (e.g., ‘Diplodocus’ hayi or Dyslocosaurus polyonychius), and ranging in age from Late Jurassic to Early Cretaceous. However, intrageneric relationships of the iconic, multi-species genera Apatosaurus and Diplodocus are still poorly known. The way to resolve this issue is a specimen-based phylogenetic analysis, which has been previously implemented for Apatosaurus, but is here performed for the first time for the entire clade of Diplodocidae. The analysis includes 81 operational taxonomic units, 49 of which belong to Diplodocidae. The set of OTUs includes all name-bearing type specimens previously proposed to belong to Diplodocidae, alongside a set of relatively complete referred specimens, which increase the amount of anatomically overlapping material. Non-diplodocid outgroups were selected to test the affinities of potential diplodocid specimens that have subsequently been suggested to belong outside the clade. The specimens were scored for 477 morphological characters, representing one of the most extensive phylogenetic analyses of sauropod dinosaurs. Character states were figured and tables given in the case of numerical characters. The resulting cladogram recovers the classical arrangement of diplodocid relationships. Two numerical approaches were used to increase reproducibility in our taxonomic delimitation of species and genera. This resulted in the proposal that some species previously included in well-known genera like Apatosaurus and Diplodocus are generically distinct. Of particular note is that the famous genus Brontosaurus is considered valid by our quantitative approach. Furthermore, “Diplodocus” hayi represents a unique genus, which will herein be called Galeamopus gen. nov. On the other hand, these numerical approaches imply synonymization of “Dinheirosaurus” from the Late Jurassic of Portugal with the Morrison Formation genus Supersaurus. Our use of a specimen-, rather than species-based approach increases knowledge of intraspecific and intrageneric variation in diplodocids, and the study demonstrates how specimen-based phylogenetic analysis is a valuable tool in sauropod taxonomy, and potentially in paleontology and taxonomy as a whole. PMID:25870766
2011-01-01
Background Several computational candidate gene selection and prioritization methods have recently been developed. These in silico selection and prioritization techniques are usually based on two central approaches - the examination of similarities to known disease genes and/or the evaluation of functional annotation of genes. Each of these approaches has its own caveats. Here we employ a previously described method of candidate gene prioritization based mainly on gene annotation, alongside a technique based on the evaluation of pertinent sequence motifs or signatures, in an attempt to refine the gene prioritization approach. We apply this approach to X-linked mental retardation (XLMR), a group of heterogeneous disorders for which some of the underlying genetics is known. Results The gene annotation-based binary filtering method yielded a ranked list of putative XLMR candidate genes with good plausibility of being associated with the development of mental retardation. In parallel, a motif-finding approach based on linear discriminant analysis (LDA) was employed to identify short sequence patterns that may discriminate XLMR from non-XLMR genes. High rates (>80%) of correct classification were achieved, suggesting that the identified motifs effectively capture genomic signals that distinguish XLMR from non-XLMR genes. The computational tools developed for the motif-based LDA are integrated into the freely available genomic analysis portal Galaxy (http://main.g2.bx.psu.edu/). Nine genes (APLN, ZC4H2, MAGED4, MAGED4B, RAP2C, FAM156A, FAM156B, TBL1X, and UXT) were highlighted as highly ranked XLMR candidate genes. Conclusions The combination of gene annotation information and sequence motif-oriented computational candidate gene prediction methods offers an added benefit in generating a list of plausible candidate genes, as demonstrated for XLMR. Reviewers: This article was reviewed by Dr Barbara Bardoni (nominated by Prof Juergen Brosius); Prof Neil Smalheiser and Dr Dustin Holloway (nominated by Prof Charles DeLisi). PMID:21668950
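The motif-based LDA step lends itself to a small illustration: genes are represented by counts of short sequence patterns, and a linear discriminant is trained to separate the two classes. The sketch below uses synthetic sequences, placeholder labels, and a simple k-mer representation; the study's actual motif features and evaluation protocol are more involved.

```python
from itertools import product

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hedged sketch: synthetic sequences and labels, a plain k-mer feature
# space standing in for the study's discovered motifs.
rng = np.random.default_rng(2)

def kmer_counts(seq, k=3):
    index = {''.join(km): i for i, km in enumerate(product('ACGT', repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1.0
    return v

seqs = [''.join(rng.choice(list('ACGT'), 300)) for _ in range(100)]
X = np.array([kmer_counts(s) for s in seqs])
y = rng.integers(0, 2, 100)  # 1 = XLMR, 0 = non-XLMR (placeholder labels)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print('mean CV accuracy:', acc.mean())  # the study reports >80% on real data
```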
[Private health insurance in Brazil: approaches to public/private patterns in healthcare].
Sestelo, José Antonio de Freitas; Souza, Luis Eugenio Portela Fernandes de; Bahia, Lígia
2013-05-01
This article draws on a previous review of 270 articles on private health plans published from 2000 to 2010 and selects 17 that specifically address the relationship between the public and private healthcare sectors. Content analysis considered the studies' concepts and terms, related theoretical elements, and predominant lines of argument. A reading of the argumentative strategies revealed a critical view of the modus operandi of the public/private relationship, grounded in Social Medicine and the theoretical tenets of the Brazilian Health Reform Movement. The study also identified contributions based on neoliberal business approaches that focus strictly on economic issues in discussing private health insurance. Understanding the public/private link in healthcare requires a solid empirical base analyzed with adequate theoretical assumptions, given the inherent complexity of the public/private healthcare interface.
Real-time traffic sign recognition based on a general purpose GPU and deep-learning
Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance in lighting conditions. To overcome these drawbacks and improve processing speed, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which follows the Vienna Convention traffic rules (Germany and South Korea). PMID:28264011
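Illumination robustness is usually achieved through a normalization step before detection. The snippet below shows one common, generic choice, contrast-limited adaptive histogram equalization (CLAHE) applied to the value channel, purely as an illustration; the paper's actual GPU pipeline is not reproduced here, and the OpenCV calls and parameters are assumptions.

```python
import cv2
import numpy as np

# Hedged sketch of a generic illumination-normalization step (CLAHE on the
# HSV value channel); the paper's GPGPU pipeline is more elaborate.
def normalize_illumination(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v_eq = clahe.apply(v)  # equalize contrast locally on brightness only
    return cv2.cvtColor(cv2.merge([h, s, v_eq]), cv2.COLOR_HSV2BGR)

frame = np.full((64, 64, 3), 40, dtype=np.uint8)  # placeholder dark frame
stabilized = normalize_illumination(frame)
```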
Drost, Derek R; Novaes, Evandro; Boaventura-Novaes, Carolina; Benedict, Catherine I; Brown, Ryan S; Yin, Tongming; Tuskan, Gerald A; Kirst, Matias
2009-06-01
Microarrays have demonstrated significant power for genome-wide analyses of gene expression, and recently have also revolutionized the genetic analysis of segregating populations by genotyping thousands of loci in a single assay. Although microarray-based genotyping approaches have been successfully applied in yeast and several inbred plant species, their power has not been proven in an outcrossing species with extensive genetic diversity. Here we have developed methods for high-throughput microarray-based genotyping in such species using a pseudo-backcross progeny of 154 individuals of Populus trichocarpa and P. deltoides analyzed with long-oligonucleotide in situ-synthesized microarray probes. Our analysis resulted in high-confidence genotypes for 719 single-feature polymorphism (SFP) and 1014 gene expression marker (GEM) candidates. Using these genotypes and an established microsatellite (SSR) framework map, we produced a high-density genetic map comprising over 600 SFPs, GEMs and SSRs. The abundance of gene-based markers allowed us to localize over 35 million base pairs of previously unplaced whole-genome shotgun (WGS) scaffold sequence to putative locations in the genome of P. trichocarpa. A high proportion of sampled scaffolds could be verified for their placement with independently mapped SSRs, demonstrating the previously untapped power that high-density genotyping can provide in the context of map-based WGS sequence reassembly. Our results provide a substantial contribution to the continued improvement of the Populus genome assembly, while demonstrating the feasibility of microarray-based genotyping in a highly heterozygous population. The strategies presented are applicable to genetic mapping efforts in all plant species with similarly high levels of genetic diversity.
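One way to picture SFP genotyping is as a per-probe clustering problem: in a pseudo-backcross, a segregating probe's hybridization intensities across the progeny should split into two groups, one per parental allele. The sketch below simulates that situation and calls genotypes with two-cluster k-means; the intensities and calling rule are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged sketch: simulated hybridization intensities for one probe across
# a pseudo-backcross progeny, with genotypes called by two-cluster k-means.
rng = np.random.default_rng(3)
n_progeny = 154
true = rng.integers(0, 2, n_progeny)                    # 1:1 segregation
intensity = np.where(true == 1,
                     rng.normal(10.0, 0.5, n_progeny),  # allele present
                     rng.normal(7.0, 0.5, n_progeny))   # allele absent

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(intensity.reshape(-1, 1))
present = int(np.argmax(km.cluster_centers_.ravel()))   # higher-intensity cluster
calls = (km.labels_ == present).astype(int)
print('agreement with simulated truth:', (calls == true).mean())
```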
Neural network based short-term load forecasting using weather compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, T.W.S.; Leung, C.T.
This paper presents a novel technique for electric load forecasting based on neural weather compensation. The proposed method is a nonlinear generalization of the Box–Jenkins approach for nonstationary time-series prediction. A weather-compensation neural network is implemented for one-day-ahead electric load forecasting. The weather-compensation neural network can accurately predict the change in actual electric load consumption from the previous day. The results, based on Hong Kong Island historical load demand, indicate that this methodology is capable of providing a more accurate load forecast, with a 0.9% reduction in forecast error.
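A minimal sketch of the weather-compensation idea, under the assumption that the network predicts the day-over-day change in load from day-over-day changes in weather and adds it to the previous day's consumption. The data, network size, and variables below are placeholders, not the paper's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch: learn d(load) from d(weather), then forecast tomorrow's
# load as yesterday's load plus the predicted change. Data are synthetic.
rng = np.random.default_rng(4)
n_days = 400
d_temp = rng.normal(0, 2, n_days)    # day-over-day temperature change
d_humid = rng.normal(0, 5, n_days)   # day-over-day humidity change
d_load = 30 * d_temp + 2 * d_humid + rng.normal(0, 10, n_days)  # MW change

X = np.column_stack([d_temp, d_humid])
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X[:-30], d_load[:-30])

yesterday_load = 2500.0                              # MW, placeholder
forecast = yesterday_load + net.predict(X[-1:])[0]   # one-day-ahead forecast
print(f'one-day-ahead forecast: {forecast:.0f} MW')
```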
Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach
NASA Astrophysics Data System (ADS)
Liu, Hui; Sun, Dihua; Liu, Weining
2016-11-01
The lattice hydrodynamic model is a typical continuum traffic flow model that properly describes the jamming transition of traffic flow. Previous studies of the lattice hydrodynamic model have shown that control methods have the potential to improve traffic conditions. In this paper, a new control method is applied to the lattice hydrodynamic model from a transportation cyber-physical-system approach; only one lattice site needs to be controlled in this scheme. Simulation verifies the feasibility and validity of the method, which can ensure efficient and smooth operation of the traffic flow.
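A toy version of such a scheme can be sketched with a Nagatani-type lattice hydrodynamic model on a ring road, with simple proportional feedback injected at a single lattice site. The optimal-velocity function, parameters, and feedback law below are common textbook choices standing in for the paper's controller, not a reproduction of it:

```python
import numpy as np

# Toy ring-road lattice hydrodynamic simulation with feedback at ONE site.
N, tau, rho0, rho_c, steps = 100, 0.1, 0.25, 0.25, 3000
k, m = 0.3, 50                      # feedback gain, controlled lattice site

def V(rho):  # a commonly used optimal-velocity function for this model
    return np.tanh(2 / rho0 - rho / rho0**2 - 1 / rho_c) + np.tanh(1 / rho_c)

rho = np.full(N, rho0) + 0.01 * (np.arange(N) == 0)  # small perturbation
q = rho0 * V(np.roll(rho, -1))                       # initial flux

for _ in range(steps):
    # continuity: density update from the current fluxes (periodic ring)
    rho = rho - tau * rho0 * (q - np.roll(q, 1))
    # flux follows the optimal-velocity flux of the downstream site
    q = rho0 * V(np.roll(rho, -1))
    # single-site proportional feedback toward the uniform steady flux
    q[m] -= k * (q[m] - rho0 * V(rho0))

print('density spread after control:', rho.max() - rho.min())
```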