Instances selection algorithm by ensemble margin
NASA Astrophysics Data System (ADS)
Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine
2018-05-01
The main limitation of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One way to produce fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without degrading performance. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning Repository. Pixel-based image segmentation is a field where the storage requirements and computational cost of the applied model become high. To address these limitations, we conduct a study applying EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (WBC; nucleus and cytoplasm) in cytological images.
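The abstract does not reproduce the EMIS pseudocode; purely as an illustration of margin-based instance filtering, a minimal sketch (assuming a bagged tree ensemble and an illustrative `margin_threshold`, not the authors' exact selection rule) could look like this:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def ensemble_margin_selection(X, y, margin_threshold=0.2, n_estimators=25, seed=0):
    """Keep training instances whose ensemble margin exceeds a threshold.

    The margin of an instance is (votes for its own class minus the largest
    vote count for any other class) divided by the ensemble size, so it lies
    in [-1, 1]; low-margin instances are treated as noisy or borderline.
    """
    X, y = np.asarray(X), np.asarray(y)
    ensemble = BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=n_estimators,
                                 random_state=seed).fit(X, y)

    # One vote (a predicted label) per base classifier for every instance.
    votes = np.stack([est.predict(X) for est in ensemble.estimators_], axis=1)

    classes = np.unique(y)
    margins = np.empty(len(y))
    for i, label in enumerate(y):
        counts = np.array([(votes[i] == c).sum() for c in classes])
        own = counts[classes == label][0]
        other = np.delete(counts, np.flatnonzero(classes == label)).max()
        margins[i] = (own - other) / n_estimators

    keep = margins > margin_threshold
    return X[keep], y[keep], margins
```

The retained subset would then be used to train the final classifier or segmentation model in place of the full training set.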
Automatic Generation of Heuristics for Scheduling
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.
1997-01-01
This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real-world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
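As a rough picture of hill-climbing in a space of multi-attribute heuristics, the sketch below searches over weight vectors that combine task attributes into a greedy dispatch score; `evaluate_schedule`, the greedy dispatch rule and the task representation are illustrative placeholders, not GenH's actual components:

```python
import random

def hill_climb_heuristic(tasks, attributes, evaluate_schedule,
                         steps=200, step_size=0.1, seed=0):
    """Hill-climb over attribute weights of a greedy dispatch heuristic.

    Each candidate heuristic scores a task as a weighted sum of its attribute
    values; its fitness is the quality of the schedule built under its
    guidance, as judged by the caller-supplied evaluate_schedule
    (higher is better).
    """
    rng = random.Random(seed)
    weights = {a: rng.uniform(-1.0, 1.0) for a in attributes}

    def build_schedule(w):
        # Greedy dispatch: order tasks by descending heuristic score.
        return sorted(tasks, key=lambda t: -sum(w[a] * t[a] for a in attributes))

    best_score = evaluate_schedule(build_schedule(weights))
    for _ in range(steps):
        candidate = dict(weights)
        attr = rng.choice(attributes)
        candidate[attr] += rng.uniform(-step_size, step_size)
        score = evaluate_schedule(build_schedule(candidate))
        if score > best_score:                     # accept improving moves only
            weights, best_score = candidate, score
    return weights, best_score
```

Here `tasks` would be dictionaries of attribute values and `evaluate_schedule` a domain-specific scorer, for instance the number of observations successfully placed on the timeline.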
Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco
2012-10-01
Cooperative coevolution is a successful trend of evolutionary computation which allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefits of the employment of coevolution to apply the techniques considered simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Bayes classifiers for imbalanced traffic accidents datasets.
Mujalli, Randa Oqab; López, Griselda; Garach, Laura
2016-03-01
Traffic accident data sets are usually imbalanced: the number of instances classified under the killed or severe injuries class (minority) is much lower than the number classified under the slight injuries class (majority). This poses a challenging problem for classification algorithms and may yield a model that covers the slight injuries instances well while frequently misclassifying the killed or severe injuries instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three different data balancing techniques were used: undersampling, which removes some instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers (Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks) were compared on the imbalanced and balanced data sets in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created with oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. The following variables were found to contribute to the occurrence of a killed casualty or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first to analyze historical traffic accident records from Jordan and the first to apply balancing techniques to analyze injury severity of traffic accidents.
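For illustration, plain random under- and oversampling on a binary severity label can be sketched as follows (the study's exact balancing procedures and the mixed technique are not reproduced here; `X` and `y` are assumed to be NumPy arrays):

```python
import numpy as np

def random_undersample(X, y, majority_label, seed=0):
    """Drop random majority-class rows until both classes have the same size."""
    rng = np.random.default_rng(seed)
    maj = np.flatnonzero(y == majority_label)
    mino = np.flatnonzero(y != majority_label)
    keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
    return X[keep], y[keep]

def random_oversample(X, y, minority_label, seed=0):
    """Duplicate random minority-class rows until both classes have the same size."""
    rng = np.random.default_rng(seed)
    mino = np.flatnonzero(y == minority_label)
    maj = np.flatnonzero(y != minority_label)
    extra = rng.choice(mino, size=len(maj) - len(mino), replace=True)
    idx = np.concatenate([maj, mino, extra])
    return X[idx], y[idx]
```

The rebalanced arrays would then be passed to whichever Bayes classifier is being evaluated.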
Tools and Methods for Visualization of Mesoscale Ocean Eddies
NASA Astrophysics Data System (ADS)
Bemis, K. G.; Liu, L.; Silver, D.; Kang, D.; Curchitser, E.
2017-12-01
Mesoscale ocean eddies form in the Gulf Stream and transport heat and nutrients across the ocean basin. The internal structure of these three-dimensional eddies and the kinematics with which they move are critical to a full understanding of their transport capacity. A series of visualization tools has been developed to extract, characterize, and track ocean eddies from 3D modeling results, to visually show the ocean eddy story by applying various illustrative visualization techniques, and to interactively view results stored on a server from a conventional browser. In this work, we apply a feature-based method to track instances of ocean eddies through the time steps of a high-resolution multidecadal regional ocean model and generate a series of eddy paths that reflect the life cycle of individual eddy instances. The basic method uses the Okubo-Weiss parameter to define eddy cores but could be adapted to alternative specifications of an eddy. Stored results include pixel lists for each eddy instance, tracking metadata for eddy paths, and physical and geometric properties. In the simplest view, isosurfaces are used to display eddies along an eddy path. Individual eddies can then be selected and viewed independently, or an eddy path can be viewed in the context of all eddy paths (longer than a specified duration) and the ocean basin. To tell the story of mesoscale ocean eddies, we combined illustrative visualization techniques, including visual effectiveness enhancement, focus+context, and smart visibility, with the extracted volume features to explore eddy characteristics at multiple scales from ocean basin to individual eddy. An evaluation by domain experts indicates that combining our feature-based techniques with illustrative visualization techniques provides insight into the role eddies play in ocean circulation. A web-based GUI is under development to facilitate easy viewing of stored results. The GUI gives the user control to choose among available datasets, to specify the variables (such as temperature or salinity) to display on the isosurfaces, and to choose the scale and orientation of the view. These techniques allow an oceanographer to browse the data based on eddy paths and individual eddies rather than slices or volumes of data.
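The eddy-core criterion mentioned here is compact enough to sketch. A simplified version on a regular grid is shown below; the uniform spacing, the velocity-field layout and the common -0.2 sigma_W threshold are assumptions for illustration, not necessarily the authors' settings:

```python
import numpy as np

def okubo_weiss_mask(u, v, dx, dy, sigma_factor=0.2):
    """Flag eddy-core cells where the Okubo-Weiss parameter is strongly negative.

    W = s_n**2 + s_s**2 - omega**2, with s_n the normal strain, s_s the shear
    strain and omega the relative vorticity; vorticity-dominated cells
    (W << 0) are candidate eddy cores. u and v are 2-D arrays indexed [y, x].
    """
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)

    s_n = du_dx - dv_dy          # normal strain
    s_s = dv_dx + du_dy          # shear strain
    omega = dv_dx - du_dy        # relative vorticity
    W = s_n**2 + s_s**2 - omega**2

    threshold = -sigma_factor * W.std()   # the common -0.2 * sigma_W convention
    return W, W < threshold
```

Connected regions of the returned mask would then be labelled, stored as pixel lists and linked across time steps to form eddy paths.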
Integer Linear Programming in Computational Biology
NASA Astrophysics Data System (ADS)
Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut
Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
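As a generic illustration of the ILP modelling style the review surveys, a toy weighted set-cover instance in the PuLP modelling language is shown below; it is not one of the bioinformatics formulations discussed, and the data are invented:

```python
import pulp

# Toy weighted set cover: choose sets covering all elements at minimum cost.
elements = {1, 2, 3, 4, 5}
sets = {"A": ({1, 2, 3}, 3.0), "B": ({2, 4}, 1.0),
        "C": ({3, 4, 5}, 2.5), "D": ({5}, 0.5)}

prob = pulp.LpProblem("set_cover", pulp.LpMinimize)
x = {name: pulp.LpVariable(f"x_{name}", cat="Binary") for name in sets}

# Objective: total cost of the selected sets.
prob += pulp.lpSum(cost * x[name] for name, (_, cost) in sets.items())
# Coverage constraints: every element must belong to at least one chosen set.
for e in elements:
    prob += pulp.lpSum(x[name] for name, (members, _) in sets.items()
                       if e in members) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [name for name in sets if x[name].value() > 0.5]
print("selected sets:", chosen, "cost:", pulp.value(prob.objective))
```

Real bioinformatics ILPs follow the same pattern (binary decisions, linear objective, linear constraints) but with thousands of variables generated from sequence or structure data.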
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
Chew, Peter A; Bader, Brett W
2012-10-16
A technique for information retrieval includes parsing a corpus to identify a number of wordform instances within each document of the corpus. A weighted morpheme-by-document matrix is generated based at least in part on the number of wordform instances within each document of the corpus and based at least in part on a weighting function. The weighted morpheme-by-document matrix separately enumerates instances of stems and affixes. Additionally or alternatively, a term-by-term alignment matrix may be generated based at least in part on the number of wordform instances within each document of the corpus. At least one lower rank approximation matrix is generated by factorizing the weighted morpheme-by-document matrix and/or the term-by-term alignment matrix.
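One way to picture the weighting-plus-factorization step is a log-entropy-style weighting followed by a rank-k truncated SVD on a toy term-by-document count matrix; the patent's exact weighting function and morpheme segmentation are not reproduced here:

```python
import numpy as np

def log_entropy_weight(counts):
    """Apply a log-entropy style weighting to a term-by-document count matrix."""
    p = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    global_weight = 1.0 + plogp.sum(axis=1) / np.log(counts.shape[1])
    return global_weight[:, None] * np.log1p(counts)

def lower_rank_approximation(weighted, k):
    """Return the rank-k approximation of the weighted matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(weighted, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy matrix: rows = morphemes (stems and affixes), columns = documents.
counts = np.array([[3, 0, 1], [0, 2, 0], [1, 1, 4]], dtype=float)
approx = lower_rank_approximation(log_entropy_weight(counts), k=2)
```

The lower-rank matrix plays the usual latent-semantic role: retrieval is performed in the reduced space rather than on raw counts.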
Guinness, Robert E
2015-04-28
This paper presents the results of research on the use of smartphone sensors (namely, GPS and accelerometers), geospatial information (points of interest, such as bus stops and train stations) and machine learning (ML) to sense mobility contexts. Our goal is to develop techniques to continuously and automatically detect a smartphone user's mobility activities, including walking, running, driving and using a bus or train, in real-time or near-real-time (<5 s). We investigated a wide range of supervised learning techniques for classification, including decision trees (DT), support vector machines (SVM), naive Bayes classifiers (NB), Bayesian networks (BN), logistic regression (LR), artificial neural networks (ANN) and several instance-based classifiers (KStar, LWL and IBk). Applying ten-fold cross-validation, the best performers in terms of correct classification rate (i.e., recall) were DT (96.5%), BN (90.9%), LWL (95.5%) and KStar (95.6%). In particular, the DT algorithm RandomForest exhibited the best overall performance. After a feature selection process for a subset of algorithms, the performance improved slightly. Furthermore, after tuning the parameters of RandomForest, performance improved to above 97.5%. Lastly, we measured the computational complexity of the classifiers, in terms of central processing unit (CPU) time needed for classification, to provide a rough comparison between the algorithms in terms of battery usage requirements. As a result, the classifiers can be ranked from lowest to highest complexity (i.e., computational cost) as follows: SVM, ANN, LR, BN, DT, NB, IBk, LWL and KStar. The instance-based classifiers take considerably more computational time than the non-instance-based classifiers, whereas the slowest non-instance-based classifier (NB) required about five times as much CPU time as the fastest classifier (SVM). The above results suggest that DT algorithms are excellent candidates for detecting mobility contexts in smartphones, both in terms of performance and computational complexity.
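The core of the classifier comparison can be reproduced in outline with scikit-learn; the random placeholder features and the RandomForest settings below are illustrative, not the sensor-derived features or tuned parameters from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = time windows of sensor/GIS features,
# y = mobility activity labels (e.g. walk / run / drive / bus / train).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 5, size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Ten-fold cross-validation; the default score is accuracy, i.e. the
# "correct classification rate" reported in the abstract.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Swapping `clf` for other estimators (SVM, naive Bayes, k-NN and so on) and timing `predict` calls would mirror the accuracy and CPU-time comparison described above.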
Generalized query-based active learning to identify differentially methylated regions in DNA.
Haque, Md Muksitul; Holder, Lawrence B; Skinner, Michael K; Cook, Diane J
2013-01-01
Active learning is a supervised learning technique that reduces the number of examples required for building a successful classifier, because it can choose the data it learns from. This technique holds promise for many biological domains in which classified examples are expensive and time-consuming to obtain. Most traditional active learning methods ask very specific queries of the Oracle (e.g., a human expert) to label an unlabeled example. The example may consist of numerous features, many of which are irrelevant. Removing such features creates a shorter query with only relevant features, which is easier for the Oracle to answer. We propose a generalized query-based active learning (GQAL) approach that constructs generalized queries based on multiple instances. By constructing appropriately generalized queries, we can achieve higher accuracy compared to traditional active learning methods. We apply our active learning method to find differentially methylated DNA regions (DMRs). DMRs are DNA locations in the genome that are known to be involved in tissue differentiation, epigenetic regulation, and disease. We also apply our method to 13 other data sets and show that it outperforms another popular active learning technique.
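GQAL's generalized, multi-instance queries are more elaborate than standard query strategies; for orientation, a basic pool-based active learning loop with uncertainty sampling (a simpler strategy than the one proposed here, with `y_oracle` standing in for the human expert) looks like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_loop(X_pool, y_oracle, n_initial=10, n_queries=20, seed=0):
    """Pool-based active learning: repeatedly query the least-confident instance.

    Assumes the initial random sample contains every class; in practice only
    the queried labels would be revealed by the Oracle.
    """
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_initial, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        model.fit(X_pool[labeled], y_oracle[labeled])
        proba = model.predict_proba(X_pool[unlabeled])
        confidence = proba.max(axis=1)
        query = unlabeled[int(np.argmin(confidence))]   # most uncertain instance
        labeled.append(query)                            # the Oracle labels it
        unlabeled.remove(query)
    return model, labeled
```

GQAL differs mainly in what is shown to the Oracle: instead of one fully specified example, it presents a generalized query built from several instances with irrelevant features removed.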
ERIC Educational Resources Information Center
Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio
2016-01-01
Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs, these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…
A preclustering-based ensemble learning technique for acute appendicitis diagnoses.
Lee, Yen-Hsien; Hu, Paul Jen-Hwa; Cheng, Tsang-Hsiang; Huang, Te-Chia; Chuang, Wei-Yao
2013-06-01
Acute appendicitis is a common medical condition whose effective, timely diagnosis can be difficult. A missed diagnosis not only puts the patient in danger but also requires additional resources for corrective treatments. An acute appendicitis diagnosis constitutes a classification problem, for which a further fundamental challenge pertains to the skewed outcome class distribution of instances in the training sample. A preclustering-based ensemble learning (PEL) technique aims to address the associated imbalanced sample learning problems and thereby support the timely, accurate diagnosis of acute appendicitis. The proposed PEL technique employs undersampling to reduce the number of majority-class instances in a training sample, uses preclustering to group similar majority-class instances into multiple groups, and selects from each group representative instances to create more balanced samples. The PEL technique thereby reduces potential information loss from random undersampling. It also takes advantage of ensemble learning to improve performance. We empirically evaluate the proposed technique with 574 clinical cases obtained from a comprehensive tertiary hospital in southern Taiwan, using several prevalent techniques and a salient scoring system as benchmarks. The comparative results show that PEL is more effective and less biased than any of the benchmarks. The proposed PEL technique seems more sensitive in identifying positive acute appendicitis than the commonly used Alvarado scoring system and exhibits higher specificity in identifying negative acute appendicitis. In addition, the sensitivity and specificity values of PEL appear higher than those of the investigated benchmarks that follow the resampling approach. Our analysis suggests PEL benefits from the more representative majority-class instances in the training sample. According to our overall evaluation results, PEL records the best overall performance, and its area under the curve measure reaches 0.619. The PEL technique is capable of addressing the imbalanced sample learning associated with acute appendicitis diagnosis. Our evaluation results suggest PEL is less biased toward a positive or negative class than the investigated benchmark techniques. In addition, our results indicate the overall effectiveness of the proposed technique compared with prevalent scoring systems and salient classification techniques that follow the resampling approach.
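A stripped-down sketch of the preclustering idea follows: k-means on the majority class, one representative drawn per cluster group for each balanced sample, and a decision-tree ensemble with majority voting. The cluster count, base learner and sampling details are illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def precluster_ensemble(X, y, majority_label, n_clusters=5, n_members=10, seed=0):
    """Build an ensemble on balanced samples drawn via preclustering.

    Majority-class instances are grouped with k-means, and each balanced
    training sample combines all minority instances with representatives
    drawn cluster by cluster, reducing information loss relative to purely
    random undersampling.
    """
    rng = np.random.default_rng(seed)
    maj_idx = np.flatnonzero(y == majority_label)
    min_idx = np.flatnonzero(y != majority_label)
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=seed).fit_predict(X[maj_idx])

    members = []
    for m in range(n_members):
        picked = [rng.choice(maj_idx[clusters == c]) for c in range(n_clusters)]
        while len(picked) < len(min_idx):            # roughly match the minority size
            c = rng.integers(n_clusters)
            picked.append(rng.choice(maj_idx[clusters == c]))
        idx = np.concatenate([np.array(picked), min_idx])
        members.append(DecisionTreeClassifier(random_state=m).fit(X[idx], y[idx]))
    return members

def ensemble_predict(members, X_new):
    """Majority vote over the ensemble members (assumes integer class labels)."""
    votes = np.stack([m.predict(X_new) for m in members])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```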
Normal Modes Expose Active Sites in Enzymes.
Glantz-Gashai, Yitav; Meirson, Tomer; Samson, Abraham O
2016-12-01
Accurate prediction of active sites is an important tool in bioinformatics. Here we present an improved structure-based technique to expose active sites, based on large changes in solvent accessibility accompanying normal mode dynamics. The technique, which detects EXPOsure of active SITes through normal modEs, is named EXPOSITE. The technique is trained using a small 133-enzyme dataset and tested using a large 845-enzyme dataset, both with known active site residues. EXPOSITE is also tested on a benchmark protein ligand dataset (PLD) comprising 48 proteins with and without bound ligands. EXPOSITE is shown to successfully locate the active site in most instances and is found to be more accurate than other structure-based techniques. Interestingly, in several instances the active site does not correspond to the largest pocket. EXPOSITE is advantageous due to its high precision and paves the way for structure-based prediction of active sites in enzymes.
Attribute-based Decision Graphs: A framework for multiclass data classification.
Bertini, João Roberto; Nicoletti, Maria do Carmo; Zhao, Liang
2017-01-01
Graph-based algorithms have been successfully applied in machine learning and data mining tasks. A simple but widely used approach to building graphs from vector-based data is to consider each data instance as a vertex and connect pairs of instances using a similarity measure. Although this abstraction presents some advantages, such as arbitrary shape representation of the original data, it still has some drawbacks; for example, it depends on the choice of a pre-defined distance metric and is biased by the local information among data instances. Aiming to explore alternative ways to build graphs from data, this paper proposes an algorithm for constructing a new type of graph, called the Attribute-based Decision Graph (AbDG). Given a vector-based data set, an AbDG is built by partitioning each data attribute range into disjoint intervals and representing each interval as a vertex. The edges are then established between vertices from different attributes according to a pre-defined pattern. Classification is performed through a matching process between the attribute values of the new instance and the AbDG. Moreover, AbDG provides an inner mechanism to handle missing attribute values, which contributes to expanding its applicability. Results of classification tasks have shown that AbDG is a competitive approach when compared to well-known multiclass algorithms. The main contribution of the proposed framework is the combination of the advantages of attribute-based and graph-based techniques to perform robust pattern-matching data classification, while permitting analysis of the input data using only a subset of its attributes.
Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Gunay, Nur Sibel; Wang, Jing; Sun, Elaine Y; Pradines, Joël R; Farutin, Victor; Shriver, Zachary; Kaundinya, Ganesh V; Capila, Ishan
2017-02-01
Heparan sulfate (HS), a glycosaminoglycan present on the surface of cells, has been postulated to have important roles in driving both normal and pathological physiologies. The chemical structure and sulfation pattern (domain structure) of HS is believed to determine its biological function, to vary across tissue types, and to be modified in the context of disease. Characterization of HS requires isolation and purification of cell surface HS as a complex mixture. This process may introduce additional chemical modification of the native residues. In this study, we describe an approach towards thorough characterization of bovine kidney heparan sulfate (BKHS) that utilizes a variety of orthogonal analytical techniques (e.g. NMR, IP-RPHPLC, LC-MS). These techniques are applied to characterize this mixture at various levels including composition, fragment level, and overall chain properties. The combination of these techniques in many instances provides orthogonal views into the fine structure of HS, and in other instances provides overlapping / confirmatory information from different perspectives. Specifically, this approach enables quantitative determination of natural and modified saccharide residues in the HS chains, and identifies unusual structures. Analysis of partially digested HS chains allows for a better understanding of the domain structures within this mixture, and yields specific insights into the non-reducing end and reducing end structures of the chains. This approach outlines a useful framework that can be applied to elucidate HS structure and thereby provides means to advance understanding of its biological role and potential involvement in disease progression. In addition, the techniques described here can be applied to characterization of heparin from different sources.
Relativistic quantum metrology: exploiting relativity to improve quantum measurement technologies.
Ahmadi, Mehdi; Bruschi, David Edward; Sabín, Carlos; Adesso, Gerardo; Fuentes, Ivette
2014-05-22
We present a framework for relativistic quantum metrology that is useful for both Earth-based and space-based technologies. Quantum metrology has so far been successfully applied to design precision instruments such as clocks and sensors which outperform classical devices by exploiting quantum properties. There are advanced plans to implement these and other quantum technologies in space; for instance, the Space-QUEST and Space Optical Clock projects intend to implement quantum communications and quantum clocks at regimes where relativity starts to kick in. However, typical setups do not take into account the effects of relativity on quantum properties. To include and exploit these effects, we introduce techniques for the application of metrology to quantum field theory. Quantum field theory properly incorporates quantum theory and relativity, in particular at regimes where space-based experiments take place. This framework allows for high-precision estimation of parameters that appear in quantum field theory, including proper times and accelerations. Indeed, the techniques can be applied to develop a novel generation of relativistic quantum technologies for gravimeters, clocks and sensors. As an example, we present a high-precision device which in principle improves the state of the art in quantum accelerometers by exploiting relativistic effects.
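The precision gains referred to here are conventionally quantified through the quantum Cramér-Rao bound; stated in its standard textbook form (background material, not a formula quoted from the paper):

```latex
% Quantum Cramér-Rao bound: the attainable uncertainty on a parameter \theta
% after M independent probes is limited by the quantum Fisher information
% H(\theta) of the probe state.
\Delta\theta \;\geq\; \frac{1}{\sqrt{M\,H(\theta)}}
```

Relativistic effects enter through their influence on the probe state and hence on H(theta), which is what the framework described above exploits.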
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
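A compressed sketch of the k-vector idea for range searching is given below: a precomputed index built around a straight-line mapping, an O(1) slice lookup at query time, and a final trim to handle edge effects. The published algorithm's exact treatment of the line parameters and edge indices differs in detail:

```python
import numpy as np

def build_k_vector(values, pad=1e-9):
    """Precompute a k-vector style index over a 1-D array.

    A straight line z(j) is drawn from just below the minimum to just above
    the maximum of the sorted data, and k[j] counts the elements lying
    strictly below z(j); range queries then reduce to index arithmetic.
    """
    y = np.sort(np.asarray(values, dtype=float))
    n = len(y)
    m = (y[-1] - y[0] + 2 * pad) / (n - 1)        # slope of the mapping line
    q = y[0] - pad                                 # intercept at j = 0
    z = m * np.arange(n) + q
    k = np.searchsorted(y, z, side="left")         # cumulative counts below the line
    return y, k, m, q

def k_vector_range_search(y, k, m, q, lo, hi):
    """Return all elements of y lying in [lo, hi] using the precomputed index."""
    n = len(y)
    j_lo = int(np.clip(np.floor((lo - q) / m), 0, n - 1))
    j_hi = int(np.clip(np.ceil((hi - q) / m), 0, n - 1))
    start, stop = k[j_lo], k[min(j_hi + 1, n - 1)]
    candidates = y[start:stop]                      # narrow slice found without searching y
    return candidates[(candidates >= lo) & (candidates <= hi)]  # trim edge effects

y, k, m, q = build_k_vector(np.random.default_rng(1).uniform(0.0, 100.0, size=10_000))
hits = k_vector_range_search(y, k, m, q, 20.0, 20.5)
```

Because the index arithmetic does not depend on the database size, repeated queries against a static data set amortize the one-off sorting cost, which is the regime the paper targets.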
On solving three-dimensional open-dimension rectangular packing problems
NASA Astrophysics Data System (ADS)
Junqueira, Leonardo; Morabito, Reinaldo
2017-05-01
In this article, a recently proposed three-dimensional open-dimension rectangular packing problem is considered, in which the objective is to find a minimal-volume rectangular container that packs a set of rectangular boxes. The literature has tackled small-sized instances of this problem by means of optimization solvers, position-free mixed-integer programming (MIP) formulations and piecewise linearization approaches. In this study, the problem is alternatively addressed by means of grid-based position MIP formulations, while still considering optimization solvers and the same piecewise linearization techniques. A comparison of the computational performance of both models is then presented, when tested with benchmark problem instances and with new instances, and it is shown that the grid-based position MIP formulation can be competitive, depending on the characteristics of the instances. The grid-based position MIP formulation is also extended with real-world practical constraints, such as cargo stability, and additional results are presented.
Can genetic algorithms help virus writers reshape their creations and avoid detection?
NASA Astrophysics Data System (ADS)
Abu Doush, Iyad; Al-Saleh, Mohammed I.
2017-11-01
Different attack and defence techniques have evolved over time through the actions and reactions of the black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. Antivirus (AV) software is a vital security control that is used against a variety of threats. The AV mainly scans data against its database of virus signatures and flags a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to a virus so that it appears normal when scanned by the AV. A brute-force search through all possible changes is computationally expensive, so this paper instead applies a genetic algorithm to the problem. The proposed algorithm is tested on seven different malware instances. The results show that in all tested instances only a small change was enough to bypass the AV.
Network Anomaly Detection Based on Wavelet Analysis
NASA Astrophysics Data System (ADS)
Lu, Wei; Ghorbani, Ali A.
2008-12-01
Signal processing techniques have recently been applied to analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining wavelet approximation and system identification theory. In order to characterize network traffic behaviors, we present fifteen features and use them as the input signals in our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network, where five attack types are successfully detected from over 30 million flows.
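As a generic illustration of wavelet-based anomaly flagging on a single traffic feature (this is not the paper's combined wavelet-approximation and system-identification model; the wavelet, decomposition level and threshold are assumptions):

```python
import numpy as np
import pywt

def wavelet_anomaly_scores(signal, wavelet="db4", level=3, sigma_factor=3.0):
    """Flag anomalies in a 1-D traffic feature using wavelet detail coefficients.

    The signal is decomposed with a discrete wavelet transform; time points
    whose finest-scale detail coefficients deviate strongly from their median
    are flagged as candidate anomalies.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    detail = coeffs[-1]                              # finest-scale details
    dev = np.abs(detail - np.median(detail))
    mad = np.median(dev) + 1e-12                     # robust scale estimate
    flagged = np.flatnonzero(dev > sigma_factor * 1.4826 * mad)
    # Map detail-coefficient positions back to approximate sample indices.
    return flagged * (len(signal) // len(detail))

# Example: a smooth baseline with an injected burst around t = 700.
t = np.arange(2048)
feature = np.sin(2 * np.pi * t / 512) + 0.05 * np.random.default_rng(0).normal(size=t.size)
feature[700:708] += 3.0
print(wavelet_anomaly_scores(feature))
```

In the paper's setting, each of the fifteen traffic features would be treated as such a signal and the per-feature evidence combined into an overall anomaly decision.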
Protein binding hot spots prediction from sequence only by a new ensemble learning method.
Hu, Shan-Shan; Chen, Peng; Wang, Bing; Li, Jinyan
2017-10-01
Hot spots are interfacial core areas of binding proteins, which have been used as targets in drug design. Locating hot spot areas with experimental methods is costly in both time and expense. Recently, in-silico computational methods have been widely used for hot spot prediction through sequence or structure characterization. Because the structural information of proteins is not always available, hot spot identification from amino acid sequences alone is more useful for real-life applications. This work proposes a new sequence-based model that combines physicochemical features with the relative accessible surface area of amino acid sequences for hot spot prediction. The model consists of 83 classifiers built on the IBk (instance-based k-nearest neighbour) algorithm, where instances are encoded by important properties extracted from a total of 544 properties in the AAindex1 (Amino Acid Index) database. Top-performing classifiers are then selected to form an ensemble by a majority voting technique. The ensemble classifier outperforms state-of-the-art computational methods, yielding an F1 score of 0.80 on the benchmark binding interface database (BID) test set. The method is available at http://www2.ahu.edu.cn/pchen/web/HotspotEC.htm.
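IBk is essentially a k-nearest-neighbour learner, so the ensemble can be pictured as many k-NN models trained on different property encodings combined by a majority vote. A stripped-down stand-in (feature subsets, k and the binary 0/1 labels are illustrative assumptions) follows:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_knn_ensemble(X, y, feature_subsets, k=3):
    """Train one k-NN classifier per feature subset (e.g. per AAindex property group)."""
    return [(subset, KNeighborsClassifier(n_neighbors=k).fit(X[:, subset], y))
            for subset in feature_subsets]

def majority_vote(ensemble, X_new):
    """Predict hot-spot labels by majority voting over the ensemble members."""
    votes = np.stack([clf.predict(X_new[:, subset]) for subset, clf in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)   # binary labels assumed (1 = hot spot)
```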
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem in which an operation may be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search in promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by applying the intensification technique of tabu search, allowing the search to restart from a set of elite solutions and attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase and can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive in accepting new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively, and are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method that considers the maximal and minimal deterioration of problem instances. MPSABBE was tested on several PFP instances, showing that the use of both distributions is better than using only the Boltzmann distribution, as in classical SA.
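For orientation, the Boltzmann acceptance core that BAP builds on can be sketched as plain simulated annealing; the four-phase schedule, the Bose-Einstein acceptance rule and the analytical parameter tuning of MPSABBE are not reproduced here:

```python
import math
import random

def simulated_annealing(initial, neighbour, energy,
                        t_start=10.0, t_end=0.01, alpha=0.95, seed=0):
    """Plain simulated annealing with Boltzmann acceptance.

    A worse neighbour is accepted with probability exp(-delta / T); the
    multiphase algorithm described above additionally switches to a
    Bose-Einstein-based acceptance rule in its later phases.
    """
    rng = random.Random(seed)
    current, current_e = initial, energy(initial)
    best, best_e = current, current_e
    T = t_start
    while T > t_end:
        candidate = neighbour(current, rng)
        delta = energy(candidate) - current_e
        if delta <= 0 or rng.random() < math.exp(-delta / T):   # Boltzmann criterion
            current, current_e = candidate, current_e + delta
            if current_e < best_e:
                best, best_e = current, current_e
        T *= alpha                                               # geometric cooling
    return best, best_e
```

For PFP, `initial` would be a conformation (e.g. a vector of dihedral angles), `neighbour` a small random perturbation of it, and `energy` the force-field evaluation.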
Clustering-based ensemble learning for activity recognition in smart homes.
Jurek, Anna; Nugent, Chris; Bi, Yaxin; Wu, Shengli
2014-07-10
Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2% and 97.5% for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks.
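A minimal sketch of the cluster-collection idea: k-means clusters per activity on several feature subsets, nearest-cluster assignment for a new instance, and a majority vote across the collections. Cluster counts and feature subsets are illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_cluster_ensemble(X, y, feature_subsets, clusters_per_activity=2, seed=0):
    """Model each activity as a collection of clusters on several feature subsets."""
    ensemble = []
    for subset in feature_subsets:
        collection = []                     # (centroid, activity label) pairs
        for label in np.unique(y):
            km = KMeans(n_clusters=clusters_per_activity, n_init=10,
                        random_state=seed).fit(X[y == label][:, subset])
            collection.extend((c, label) for c in km.cluster_centers_)
        ensemble.append((subset, collection))
    return ensemble

def classify(ensemble, x_new):
    """Assign the new instance to its closest cluster in each collection, then vote."""
    votes = []
    for subset, collection in ensemble:
        dists = [np.linalg.norm(x_new[subset] - centroid) for centroid, _ in collection]
        votes.append(collection[int(np.argmin(dists))][1])
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```

Numeric and binary sensor representations, as compared in the paper, would simply change how `X` is encoded before clustering.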
Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila
2011-01-01
Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P = 1.6×10−6. Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful.
dos-Santos, M; Fujino, A
2012-01-01
Radiology teaching usually employs a systematic and comprehensive set of medical images and related information. Databases with representative radiological images and documents are highly desirable and widely used in Radiology teaching programs. Currently, computer-based teaching file systems are widely used in Medicine and Radiology teaching as an educational resource. This work addresses a user-centered radiology electronic teaching file system as an instance of a MIRC-compliant medical image database. As in a digital library, the clinical cases are accessible through a web browser. The system has offered Radiology residents great opportunities to interact with experts. This has been achieved by applying user-centered techniques and creating usage-context-based tools in order to deliver an interactive system.
Specification and Error Pattern Based Program Monitoring
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Johnson, Scott; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2001-01-01
We briefly present Java PathExplorer (JPaX), a tool developed at NASA Ames for monitoring the execution of Java programs. JPaX can be used not only during program testing to reveal subtle errors, but also during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program in order to properly observe its execution. The instrumentation can be either at the bytecode level or at the source level when the source code is available. JPaX is an instance of a more general project, called PathExplorer (PAX), which is a basis for experiments rather than a fixed system, capable of monitoring various programming languages and experimenting with other logics and analysis techniques.
Solving traveling salesman problems with DNA molecules encoding numerical values.
Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak
2004-12-01
We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.
Yousefi, Mina; Krzyżak, Adam; Suen, Ching Y
2018-05-01
Digital breast tomosynthesis (DBT) was developed in the field of breast cancer screening as a new tomographic technique to minimize the limitations of conventional digital mammography screening methods. A computer-aided detection (CAD) framework for mass detection in DBT has been developed and is described in this paper. The proposed framework operates on a set of two-dimensional (2D) slices. With plane-to-plane analysis of corresponding 2D slices from each DBT, it automatically learns complex patterns of 2D slices through a deep convolutional neural network (DCNN). It then applies multiple instance learning (MIL) with a randomized trees approach to classify DBT images based on information extracted from the 2D slices. This CAD framework was developed and evaluated using 5040 2D image slices derived from 87 DBT volumes. The empirical results demonstrate that the proposed CAD framework achieves much better performance than CAD systems that use hand-crafted features and deep cardinality-restricted Boltzmann machines to detect masses in DBTs.
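Schematically, the MIL step treats each DBT volume as a bag of 2D-slice instances. In the sketch below, per-slice feature vectors are assumed to come from some pretrained network, and simple pooling plus a random forest stand in for the paper's DCNN and randomized-trees pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bag_features(slice_features):
    """Pool per-slice feature vectors of one DBT volume into a single bag descriptor."""
    f = np.asarray(slice_features)
    return np.concatenate([f.max(axis=0), f.mean(axis=0)])   # simple MIL pooling

def train_mil_classifier(volumes, labels):
    """volumes: list of (n_slices_i, n_features) arrays; labels: 1 = mass present."""
    X = np.stack([bag_features(v) for v in volumes])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

def predict_volume(model, slice_features):
    """Return the predicted probability that the volume contains a mass."""
    return model.predict_proba(bag_features(slice_features)[None, :])[0, 1]
```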
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k-nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecasts and observations. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and the Brue. Retrospective forecasts were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
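The post-processing step can be sketched directly: find the k most similar historical hydrometeorological situations and take quantiles of their forecast errors. The value of k and the quantile levels below are illustrative choices, not the study's settings:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_uncertainty_interval(history_X, history_errors, current_X,
                             k=50, quantiles=(0.05, 0.95)):
    """Estimate a residual-uncertainty interval for the current forecast.

    history_X holds past hydrometeorological predictor vectors, history_errors
    the corresponding forecast errors (observation minus forecast); the
    interval is taken from the error quantiles of the k most similar
    historical situations.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(history_X)
    _, idx = nn.kneighbors(np.atleast_2d(current_X))
    neighbour_errors = history_errors[idx[0]]
    lo, hi = np.quantile(neighbour_errors, quantiles)
    return lo, hi   # add these to the deterministic forecast to obtain bounds
```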
Male contraceptive technology for nonhuman male mammals.
Bowen, R A
2008-04-01
Contraceptive techniques applied to males have the potential to mitigate diverse instances of overpopulation in human and animal populations. Different situations involving different species dictate that there is no ideal male contraceptive, and this emphasizes the value of varied approaches to reducing male fertility. A majority of work in this field has focused on non-surgically destroying the testes or obstructing the epididymis, and on suppressing gonadotropin secretion or inducing immune responses to reproductive hormones or sperm-specific antigens. Injection of tissue-destructive agents into the testes or epididymides can be very effective, but often is associated with unacceptable inflammatory responses and sometimes pain. Hormonal vaccines are often not efficacious and provide only short-term contraception, although GnRH vaccines may be an exception to this generality. Finally, there are no clearly effective contraceptive vaccines based on sperm antigens. Although several techniques have been developed to the point of commercialization, none has yet been widely deployed other than surgical castration.
Nagarajan, P; Tetzlaff, M T; Curry, J L; Prieto, V G
Melanoma remains one of the most aggressive forms of cutaneous malignancy. While its diagnosis based on histologic parameters is usually straightforward, distinguishing a melanoma from a melanocytic nevus can be challenging in some instances, especially when there are overlapping clinical and histopathologic features. Occasionally, melanomas can histologically mimic other tumors, and even demonstration of melanocytic origin can be challenging. Thus, several ancillary tests may be employed to arrive at the correct diagnosis. The objective of this review is to summarize these tests, including well-established and commonly used ones such as immunohistochemistry, with specific emphasis on emerging techniques such as comparative genomic hybridization, fluorescence in situ hybridization and imaging mass spectrometry.
NASA Astrophysics Data System (ADS)
Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min
2014-09-01
In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to convert a job permutation sequence into a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter settings on the HIA is investigated based on the Taguchi method of design of experiments. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 of the 420 small-sized instances and 585 of the 720 large-sized instances.
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated.
Delrue, Steven; Tabatabaeipour, Morteza; Hettler, Jan; Van Den Abeele, Koen
2016-05-01
Friction stir welding (FSW) is a promising technology for joining aluminum alloys and other metallic admixtures that are hard to weld by conventional fusion welding. Although FSW generally provides better fatigue properties than traditional fusion welding methods, fatigue properties are still significantly lower than for the base material. Apart from voids, kissing bonds, for instance in the form of closed cracks propagating along the interface of the stirred and heat-affected zones, are inherent features of the weld and can be considered one of the main causes of the reduced fatigue life of FSW joints in comparison to the base material. The main problem with kissing bond defects in FSW is that they are currently very difficult to detect using existing NDT methods. Besides, in most cases the defects are not directly accessible from the exposed surface. Therefore, new techniques capable of detecting small kissing bond flaws need to be introduced. In the present paper, a novel and practical approach is introduced based on a nonlinear, single-sided, ultrasonic technique. The proposed inspection technique uses two single-element transducers, with the first transducer transmitting an ultrasonic signal that focuses the ultrasonic waves at the bottom side of the sample, where cracks are most likely to occur. The large amount of energy at the focus activates the kissing bond, resulting in the generation of nonlinear features in the wave propagation. These nonlinear features are then captured by the second transducer operating in pitch-catch mode and are analyzed, using pulse inversion, to reveal the presence of a defect. The performance of the proposed nonlinear pitch-catch technique is first illustrated using a numerical study of an aluminum sample containing simple, vertically oriented, incipient cracks. The proposed technique is then also applied experimentally to a real-life friction stir welded butt joint containing a kissing bond flaw.
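The pulse-inversion analysis itself is simple to state: the responses to a pulse and its polarity-inverted copy are summed, so that linear propagation cancels while contact-type nonlinearity survives. A toy numerical illustration follows; the quadratic "medium" model and the 200 kHz tone burst are assumptions for demonstration only, not the paper's simulation setup:

```python
import numpy as np

def pulse_inversion_residual(response_pos, response_neg):
    """Pulse inversion: sum the responses to a pulse and its inverted copy.

    For a linear medium the two responses cancel; a kissing bond that opens
    and closes under the focused wave generates even harmonics that survive
    the summation, so the residual energy serves as a damage indicator.
    """
    residual = response_pos + response_neg
    return residual, float(np.sum(residual**2))

def medium(s):
    # Assumed weak contact-type (quadratic) nonlinearity, for demonstration only.
    return s + 0.02 * s**2

t = np.linspace(0.0, 1e-4, 4096)
excitation = np.sin(2 * np.pi * 200e3 * t)          # illustrative 200 kHz tone burst
residual, energy = pulse_inversion_residual(medium(excitation), medium(-excitation))
print(f"pulse-inversion residual energy: {energy:.4f}")
```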
NASA Astrophysics Data System (ADS)
Ogorodnikov, Yuri; Khachay, Michael; Pljonkin, Anton
2018-04-01
We describe the possibility of employing a special case of the 3-SAT problem, stemming from the well-known integer factorization problem, in quantum cryptography. It is known that for every instance of our 3-SAT setting the given 3-CNF is satisfiable by a unique truth assignment, and the goal is to find this assignment. Since the complexity status of the factorization problem is still open, the development of approximation algorithms and heuristics attracts the interest of numerous researchers. One promising approach to constructing approximation techniques is based on a real-valued relaxation of the given 3-CNF, followed by minimization of an appropriate differentiable loss function and subsequent rounding of the fractional minimizer obtained. Algorithms developed this way differ mainly in the rounding scheme applied at their final stage. We propose a new rounding scheme based on Bayesian learning. The article shows that the proposed method can be used to assess security in quantum key distribution systems. In quantum key distribution, Shannon's rules are applied and the factorization problem is paramount when decrypting secret keys.
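A hedged sketch of the generic relax-and-round idea described above (the paper's Bayesian rounding scheme is not reproduced; a naive threshold is used instead, and SciPy is assumed). Each clause contributes a differentiable penalty that vanishes exactly when at least one of its literals is fully satisfied.

```python
import numpy as np
from scipy.optimize import minimize

# Relaxation: Boolean variables become values in [0, 1]; clause (l1 v l2 v l3)
# contributes prod(1 - value(l)), which is zero iff some literal is satisfied.
def literal_value(x, lit):
    v = x[abs(lit) - 1]
    return v if lit > 0 else 1.0 - v

def loss(x, clauses):
    return sum(np.prod([1.0 - literal_value(x, l) for l in clause]) for clause in clauses)

def relax_and_round(clauses, n_vars, seed=0):
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(0.2, 0.8, size=n_vars)
    res = minimize(loss, x0, args=(clauses,), bounds=[(0.0, 1.0)] * n_vars)
    return [bool(v >= 0.5) for v in res.x]      # naive threshold rounding

# Example: (x1 v x2 v ~x3) & (~x1 v x3 v x2)
print(relax_and_round([(1, 2, -3), (-1, 3, 2)], n_vars=3))
```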
A set-covering based heuristic algorithm for the periodic vehicle routing problem.
Cacchiani, V; Hemmelmayr, V C; Tricoire, F
2014-01-30
We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011) [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
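A minimal sketch of the LP relaxation of a set-covering master problem over a fixed pool of candidate routes, assuming SciPy; the column generation, the additional constraints, and the fixing/releasing tabu local search described above are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# a[i, r] = 1 if candidate route r visits customer i; route_costs[r] is its cost.
def solve_master(route_costs, a):
    n_customers, n_routes = a.shape
    # minimize c^T x  subject to  a x >= 1,  0 <= x <= 1
    res = linprog(c=route_costs,
                  A_ub=-a, b_ub=-np.ones(n_customers),
                  bounds=[(0, 1)] * n_routes,
                  method="highs")
    return res.x, res.fun

a = np.array([[1, 0, 1],
              [0, 1, 1]])                      # 2 customers, 3 candidate routes
x, cost = solve_master(np.array([4.0, 3.0, 6.0]), a)
print(x, cost)                                 # picks the single route covering both
```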
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of the HS algorithm to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of the classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
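A hedged sketch of a plain tabu search over pairwise swaps for the QAP, shown only to illustrate the tabu component; the BBO hybrid proposed in the paper is not reproduced, and the parameter values are arbitrary.

```python
import itertools, random

def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def tabu_search(flow, dist, iters=500, tenure=7, seed=0):
    random.seed(seed)
    n = len(flow)
    perm = list(range(n)); random.shuffle(perm)
    best, best_cost = perm[:], qap_cost(perm, flow, dist)
    tabu = {}                                   # (i, j) -> iteration until which the swap is tabu
    for it in range(iters):
        best_move, best_move_cost = None, float("inf")
        for i, j in itertools.combinations(range(n), 2):
            perm[i], perm[j] = perm[j], perm[i]
            c = qap_cost(perm, flow, dist)      # full re-evaluation; fine for a sketch
            perm[i], perm[j] = perm[j], perm[i]
            is_tabu = tabu.get((i, j), -1) >= it
            if c < best_move_cost and (not is_tabu or c < best_cost):   # aspiration criterion
                best_move, best_move_cost = (i, j), c
        if best_move is None:
            break
        i, j = best_move
        perm[i], perm[j] = perm[j], perm[i]
        tabu[(i, j)] = it + tenure
        if best_move_cost < best_cost:
            best, best_cost = perm[:], best_move_cost
    return best, best_cost
```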
Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai
2016-01-01
Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during experiment is crucial (for instance time-resolved non-reversible experiments). PMID:27302087
Multiobjective Resource-Constrained Project Scheduling with a Time-Varying Number of Tasks
Abello, Manuel Blanco
2014-01-01
In resource-constrained project scheduling (RCPS) problems, ongoing tasks are restricted to utilizing a fixed number of resources. This paper investigates a dynamic version of the RCPS problem where the number of tasks varies in time. Our previous work investigated a technique called mapping of task IDs for centroid-based approach with random immigrants (McBAR) that was used to solve the dynamic problem. However, the solution-searching ability of McBAR was investigated over only a few instances of the dynamic problem. As a consequence, only a small number of characteristics of McBAR, under the dynamics of the RCPS problem, were found. Further, only a few techniques were compared to McBAR with respect to its solution-searching ability for solving the dynamic problem. In this paper, (a) the significance of the subalgorithms of McBAR is investigated by comparing McBAR to several other techniques; and (b) the scope of investigation in the previous work is extended. In particular, McBAR is compared to a technique called the Estimation of Distribution Algorithm (EDA). As with McBAR, EDA is applied to solve the dynamic problem, an application that is unique in the literature. PMID:24883398
Handling Imbalanced Data Sets in Multistage Classification
NASA Astrophysics Data System (ADS)
López, M.
Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated to a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until they (the subgroups) are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most of the classification algorithms can be biased in the sense of selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier's performance if the training-set sample frequencies do not reflect the real prevalence in the population. Several techniques such as applying prior probabilities, assigning weights to the classes, or replicating instances have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.
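A hedged sketch of two of the remedies mentioned above, with scikit-learn assumed as the library: per-class weights inversely proportional to class frequency, and replication (oversampling) of minority-class instances. The classifier choice and function names are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# (1) Class weights inversely proportional to class frequency.
def weighted_classifier(X, y):
    return LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# (2) Replicate minority-class instances up to the majority-class size.
def replicate_minority(X, y, seed=0):
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c, n in zip(classes, counts):
        Xc, yc = X[y == c], y[y == c]
        if n < n_max:
            Xc, yc = resample(Xc, yc, replace=True, n_samples=n_max, random_state=seed)
        Xs.append(Xc); ys.append(yc)
    return np.vstack(Xs), np.concatenate(ys)
```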
NASA Technical Reports Server (NTRS)
Baird, J.
1967-01-01
This supplement to Task 1B-Large Solid Rocket Motor Case Fabrication Methods supplies additional supporting cost data and discusses in detail the methodology that was applied to the task. For the case elements studied, the cost was found to be directly proportional to the Process Complexity Factor (PCF). The PCF was obtained for each element by identifying unit processes that are common to the elements and their alternative manufacturing routes, by assigning a weight to each unit process, and by summing the weighted counts. In three instances of actual manufacture, the actual cost per pound equaled the cost estimate based on PCF per pound, but this supplement recognizes that the methodology is of limited, rather than general, application.
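A minimal sketch of the weighted-count idea behind the Process Complexity Factor, with a hypothetical calibration constant converting PCF per pound into cost per pound; names and structure are illustrative, not the report's procedure.

```python
# PCF = sum over unit processes of (assigned weight x occurrence count).
def process_complexity_factor(unit_process_counts, weights):
    """unit_process_counts: {process_name: count}; weights: {process_name: weight}."""
    return sum(weights[p] * n for p, n in unit_process_counts.items())

def estimated_cost_per_pound(unit_process_counts, weights, dollars_per_pcf):
    # dollars_per_pcf is an assumed calibration constant from historical data.
    return dollars_per_pcf * process_complexity_factor(unit_process_counts, weights)
```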
NASA Astrophysics Data System (ADS)
Nucci, M. C.; Leach, P. G. L.
2007-09-01
We apply the techniques of Lie's symmetry analysis to a caricature of the simplified multistrain model of Castillo-Chavez and Feng [C. Castillo-Chavez, Z. Feng, To treat or not to treat: The case of tuberculosis, J. Math. Biol. 35 (1997) 629-656] for the transmission of tuberculosis and the coupled two-stream vector-based model of Feng and Velasco-Hernandez [Z. Feng, J.X. Velasco-Hernandez, Competitive exclusion in a vector-host model for the dengue fever, J. Math. Biol. 35 (1997) 523-544] to identify the combinations of parameters which lead to the existence of nontrivial symmetries. In particular we identify those combinations which lead to the possibility of the linearization of the system and provide the corresponding solutions. Many instances of additional symmetry are analyzed.
NASA Astrophysics Data System (ADS)
Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki
1995-06-01
A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.
Boosting instance prototypes to detect local dermoscopic features.
Situ, Ning; Yuan, Xiaojing; Zouridakis, George
2010-01-01
Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.
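A hedged sketch of the generic MIL-to-SIL pattern with boosted SVMs, assuming scikit-learn: each bag of instance feature vectors is collapsed to a single vector (simple max-pooling stands in for the diverse-density/evidence-confidence conversion used in the paper), then AdaBoost is trained over SVM base learners. Note the keyword is `base_estimator=` in older scikit-learn versions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

# Collapse each bag (n_instances x d array) to one d-dimensional vector.
def bags_to_vectors(bags):
    return np.array([np.max(instances, axis=0) for instances in bags])

def train_boosted_svm(bags, labels, n_estimators=20):
    X = bags_to_vectors(bags)
    base = SVC(kernel="rbf", probability=True)
    clf = AdaBoostClassifier(estimator=base, n_estimators=n_estimators)
    return clf.fit(X, labels)
```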
Strain gage selection in loads equations using a genetic algorithm
NASA Technical Reports Server (NTRS)
1994-01-01
Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented are a comparison of the genetic algorithm performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
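A hedged sketch of the idea: a binary chromosome selects a subset of strain gages, the fitness is the least-squares residual of the load equation fitted from the calibration data, and a minimal GA evolves the selection. This is a generic GA with arbitrary operators and parameters, not the paper's algorithm.

```python
import numpy as np

def fitness(mask, responses, applied_loads):
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    coeffs, *_ = np.linalg.lstsq(responses[:, cols], applied_loads, rcond=None)
    residual = applied_loads - responses[:, cols] @ coeffs
    return float(np.sum(residual ** 2))         # calibration-fit error of the load equation

def select_gages(responses, applied_loads, n_select, pop=40, gens=100, seed=0):
    rng = np.random.default_rng(seed)
    n_gages = responses.shape[1]
    def random_mask():
        m = np.zeros(n_gages, dtype=bool)
        m[rng.choice(n_gages, size=n_select, replace=False)] = True
        return m
    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        scores = [fitness(m, responses, applied_loads) for m in population]
        parents = [population[i] for i in np.argsort(scores)[: pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            child = np.where(rng.random(n_gages) < 0.5, parents[a], parents[b])  # uniform crossover
            flip = rng.integers(n_gages)                                         # single-bit mutation
            child[flip] = ~child[flip]
            children.append(child)
        population = parents + children
    best = min(population, key=lambda m: fitness(m, responses, applied_loads))
    return np.flatnonzero(best)
```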
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen
2017-07-27
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
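A hedged numerical sketch of the two ideas being compared, assuming NumPy; the functions are simplified stand-ins, not the authors' processing chain.

```python
import numpy as np

# SERDS: subtract two spectra taken at slightly shifted excitation wavelengths; the
# (nearly unchanged) fluorescence background cancels while the shifted Raman bands remain.
def serds_difference(spectrum_ex1, spectrum_ex2):
    return spectrum_ex1 - spectrum_ex2

# EMSC-style correction: model the measured spectrum as a scaled reference plus a
# low-order polynomial baseline, fit by least squares, and keep only the reference part.
def emsc_correct(measured, reference, wavenumbers, poly_order=2):
    basis = [reference] + [wavenumbers ** k for k in range(poly_order + 1)]
    A = np.column_stack(basis)
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    baseline = A[:, 1:] @ coeffs[1:]
    return (measured - baseline) / coeffs[0]    # corrected spectrum, scaled to the reference
```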
VOC/HAP control systems for the shipbuilding and aerospace industries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukey, M.E.; Toothman, D.A.
1999-07-01
Compliant coating systems, i.e., those which meet limits on pounds of volatile organic compound (VOC)/hazardous air pollutant (HAP) per gallon, on a solids applied basis, are routinely used to meet emission regulations in the shipbuilding and aerospace industries. However, there are occasions when solvent based systems must be used. Total capture and high destruction of the solvents in those systems is necessary in order to meet the required emission limit, e.g., a reasonably available control technology (RACT) limit of 3.5 lbs of VOC/gallon. Water based marine coatings and certain aerospace finish coats do not provide sufficient longevity or meet other customer specifications in these instances. Furthermore, because of best available control technology (BACT) determinations or facility limits for operation in serious, severe, and extreme nonattainment areas, it is necessary to reduce annual emissions to levels which are below the levels required by the coating standards. The paper discusses those operations for controlling emissions from large-scale solvent based painting and coating systems in those instances when a high degree of overall control is required. Permanent total enclosures (stationary and portable), concentrators, regenerative thermal oxidizers, and other air pollution control systems are evaluated, both for technical applicability and economic feasibility. Several case studies are presented which illustrate techniques for capturing painting emissions, options for air handling in the workplace, and methods for destroying exhaust stream VOC concentrations of less than 40 ppm.
A Runtime Performance Predictor for Selecting Tabu Tenures
NASA Technical Reports Server (NTRS)
Allen, John A.; Minton, Steven N.
1997-01-01
One of the drawbacks of parameter based systems, such as tabu search, is the difficulty of finding the correct parameter for a particular problem. Often, rule-of-thumb advice is given which may have little or no applicability to the domain or problem instance at hand. This paper describes the application of a general technique, Runtime Performance Predictors (RPP) which can be used to determine, in an efficient manner, the correct tabu tenure for a particular problem instance. The details of the approach and a demonstration using a variant of GSAT are presented.
Demonstration of innovative techniques for work zone safety data analysis
DOT National Transportation Integrated Search
2009-07-15
Based upon the results of the simulator data analysis, additional future research can be identified to validate the driving simulator in terms of similarities with Ohio work zones. For instance, the speeds observed in the simulator were greater f...
New insights into diversification of hyper-heuristics.
Ren, Zhilei; Jiang, He; Xuan, Jifeng; Hu, Yan; Luo, Zhongxuan
2014-10-01
There has been a growing research trend of applying hyper-heuristics for problem solving, due to their ability to balance intensification and diversification with low level heuristics. Traditionally, the diversification mechanism is mostly realized by perturbing the incumbent solutions to escape from local optima. In this paper, we report our attempt toward providing a new diversification mechanism, which is based on the concept of instance perturbation. In contrast to existing approaches, the proposed mechanism achieves the diversification by perturbing the instance under solving, rather than the solutions. To tackle the challenge of incorporating instance perturbation into hyper-heuristics, we also design a new hyper-heuristic framework HIP-HOP (recursive acronym of HIP-HOP is an instance perturbation-based hyper-heuristic optimization procedure), which employs a grammar guided high level strategy to manipulate the low level heuristics. With the expressive power of the grammar, the constraints, such as the feasibility of the output solution, could be easily satisfied. Numerical results and statistical tests over both the Ising spin glass problem and the p-median problem instances show that HIP-HOP is able to achieve promising performances. Furthermore, runtime distribution analysis reveals that, although being relatively slow at the beginning, HIP-HOP is able to achieve competitive solutions once given sufficient time.
Peptide radioimmunoassays in clinical medicine.
Geokas, M C; Yalow, R S; Straus, E W; Gold, E M
1982-09-01
The radioimmunoassay technique, first developed for the determination of hormones, has been applied to many substances of biologic interest by clinical and research laboratories around the world. It has had an enormous effect in medicine and biology as a diagnostic tool, a guide to therapy, and a probe for the fine structure of biologic systems. For instance, the assays of insulin, gastrin, secretin, prolactin, and certain tissue-specific enzymes have been invaluable in patient care. Further refinements of current methods, as well as the emergence of new immunoassay techniques, are expected to enhance precision, specificity, reliability, and convenience of the radioimmunoassay in both clinical and research laboratories.
Quantitative ultrasonic evaluation of mechanical properties of engineering materials
NASA Technical Reports Server (NTRS)
Vary, A.
1978-01-01
Current progress in the application of ultrasonic techniques to nondestructive measurement of mechanical strength properties of engineering materials is reviewed. Even where conventional NDE techniques have shown that a part is free of overt defects, advanced NDE techniques should be available to confirm the material properties assumed in the part's design. There are many instances where metallic, composite, or ceramic parts may be free of critical defects while still being susceptible to failure under design loads due to inadequate or degraded mechanical strength. This must be considered in any failure prevention scheme that relies on fracture analysis. This review will discuss the availability of ultrasonic methods that can be applied to actual parts to assess their potential susceptibility to failure under design conditions.
The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.
Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel
2014-12-01
Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
Le, Jenna; Peng, Qi; Sperling, Karen
2016-11-01
Osteoarthritis (OA) is a disease whose hallmark is the degeneration of articular cartilage. There is a worsening epidemic of OA in the United States today, with considerable economic costs. In order to develop more effective treatments for OA, noninvasive biomarkers that permit early diagnosis and treatment monitoring are necessary. T1rho and T2 mapping are two magnetic resonance imaging techniques that have shown great promise as noninvasive biomarkers of cartilage degeneration. Each of the two techniques is endowed with advantages and disadvantages: T1rho can discern earlier biochemical changes of OA than T2 mapping, while T2 mapping is more widely available and can be incorporated into existing imaging protocols in a more time-efficient manner than T1rho. Both techniques have been applied in numerous instances to study how cartilage is affected by OA risk factors, such as age and exercise. Additionally, both techniques have been repeatedly applied to the study of posttraumatic OA in patients with torn anterior cruciate ligaments. © 2016 New York Academy of Sciences.
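A hedged sketch of the standard mono-exponential model underlying T2 mapping, S(TE) = S0 · exp(−TE/T2), fitted per voxel over the echo times; T1rho mapping is analogous with spin-lock times in place of echo times. SciPy is assumed and the initial-guess values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    return s0 * np.exp(-te / t2)

def fit_t2(echo_times_ms, signals):
    p0 = (signals[0], 40.0)                      # rough initial guess
    (s0, t2), _ = curve_fit(t2_decay, echo_times_ms, signals, p0=p0)
    return t2

te = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
signal = 1000 * np.exp(-te / 35.0)               # synthetic decay with T2 = 35 ms
print(fit_t2(te, signal))                        # ~35 ms
```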
Loop-Extended Symbolic Execution on Binary Programs
2009-03-02
1434. Based on its specification [35], one valid message format contains 2 fields: a header byte of value 4, followed by a string giving a database ... potentially become expensive. For instance, the polyhedron technique [16] requires costly conversion operations on a multi-dimensional abstract representation
Distance majorization and its applications.
Chi, Eric C; Zhou, Hua; Lange, Kenneth
2014-08-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
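A hedged sketch of the penalty/majorization-minimization idea for the special case of projecting a point onto an intersection of convex sets, each with an easy projection; the quasi-Newton acceleration is omitted and the update is not the paper's exact algorithm.

```python
import numpy as np

def project_onto_intersection(y, projections, rho0=1.0, growth=1.5, iters=200):
    x = y.copy()
    rho = rho0
    for _ in range(iters):
        # Majorization: dist(x, C_i)^2 <= ||x - P_i(x_k)||^2, so the surrogate
        # (1/2)||x - y||^2 + (rho/2) * sum_i ||x - P_i(x_k)||^2 is minimized in
        # closed form by a weighted average of y and the current projections.
        anchors = [P(x) for P in projections]
        x = (y + rho * sum(anchors)) / (1.0 + rho * len(anchors))
        rho *= growth                            # classical penalty continuation
    return x

# Example: intersection of the unit box [0,1]^2 and the halfspace x1 + x2 <= 1.
box = lambda x: np.clip(x, 0.0, 1.0)
halfspace = lambda x: x - max(0.0, x.sum() - 1.0) / 2.0 * np.ones(2)
print(project_onto_intersection(np.array([2.0, 2.0]), [box, halfspace]))   # ~[0.5, 0.5]
```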
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
NASA Astrophysics Data System (ADS)
Arregui, Francisco J.; Matías, Ignacio R.; Claus, Richard O.
2007-07-01
The Layer-by-Layer Electrostatic Self-Assembly (ESA) method has been successfully used for the design and fabrication of nanostructured materials. More specifically, this technique has been applied for the deposition of thin films on optical fibers with the purpose of fabricating different types of optical fiber sensors. In fact, optical fiber sensors for measuring humidity, temperature, pH, hydrogen peroxide, glucose, volatile organic compounds or even gluten have been already experimentally demonstrated. The versatility of this technique allows the deposition of these sensing coatings on flat substrates and complex geometries as well. For instance, nanoFabry-Perots and microgratings have been formed on cleaved ends of optical fibers (flat surfaces) and also sensing coatings have been built onto long period gratings (cylindrical shape), tapered fiber ends (conical shape), biconically tapered fibers or even the internal side of hollow core fibers. Among the different materials used for the construction of these sensing nanostructured coatings, diverse types such as polymers, inorganic semiconductors, colorimetric indicators, fluorescent dyes, quantum dots or even biological elements as enzymes can be found. This technique opens the door to the fabrication of new types of optical fiber sensors.
Plasma assisted surface treatments of biomaterials.
Minati, L; Migliaresi, C; Lunelli, L; Viero, G; Dalla Serra, M; Speranza, G
2017-10-01
The biocompatibility of an implant depends upon the material it is composed of, in addition to the prosthetic device's morphology, mechanical and surface properties. Properties such as porosity and pore size should allow, when required, cell penetration and proliferation. Stiffness and strength, which depend on the bulk characteristics of the material, should match the mechanical requirements of the prosthetic applications. Surface properties should allow integration into the surrounding tissues by activating proper communication pathways with the surrounding cells. Bulk and surface properties are not interconnected; for instance, a bone prosthesis could possess the necessary stiffness and strength for the application while lacking the surface properties essential for osteointegration. In this case, surface treatment is mandatory and can be accomplished using various techniques such as applying coatings to the prosthesis, ion beams, chemical grafting or modification, low temperature plasma, or a combination of the aforementioned. Low temperature plasma-based techniques have gained increasing consensus for the surface modification of biomaterials, being effective and competitive compared to other ways of introducing surface functionalities. In this paper we review plasma processing techniques and describe the potential and applications of plasma to tailor the interface of biomaterials. Copyright © 2017 Elsevier B.V. All rights reserved.
Multi-instance learning based on instance consistency for image retrieval
NASA Astrophysics Data System (ADS)
Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie
2017-07-01
Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches cannot select positive instances correctly from positive bags which may result in a low accuracy. In this paper, we propose a new image retrieval approach called multiple instance learning based on instance-consistency (MILIC) to mitigate such issue. First, we select potential positive instances effectively in each positive bag by ranking instance-consistency (IC) values of instances. Then, we design a feature representation scheme, which can represent the relationship among bags and instances, based on potential positive instances to convert a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.
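A hedged sketch of the bag-to-single-instance representation idea: each bag is encoded by its maximum RBF similarity to a set of selected instance prototypes, after which a standard single-instance classifier is trained. The IC-based prototype selection of the paper is not reproduced; scikit-learn is assumed.

```python
import numpy as np
from sklearn.svm import SVC

def embed_bag(bag, prototypes, gamma=1.0):
    # bag: (n_instances, d); prototypes: (n_prototypes, d)
    d2 = ((bag[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2).max(axis=0)       # one similarity value per prototype

def train_bag_classifier(bags, labels, prototypes, gamma=1.0):
    X = np.array([embed_bag(b, prototypes, gamma) for b in bags])
    return SVC(kernel="linear").fit(X, labels)
```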
NASA Astrophysics Data System (ADS)
Ofuchi, C. Y.; Morales, R. E. M.; Arruda, L. V. R.; Neves, F., Jr.; Dorini, L.; do Amaral, C. E. F.; da Silva, M. J.
2012-03-01
Gas-liquid flows occur in a broad range of industrial applications, for instance in the chemical, petrochemical and nuclear industries. A correct understanding of flow behavior is crucial for the safe and optimized operation of equipment and processes. Thus, measurement of gas-liquid flow plays an important role. Many techniques have been proposed and applied to analyze two-phase flows so far. In this experimental research, data from a wire-mesh sensor, an ultrasound technique and a high-speed camera are used to study two-phase slug flows in horizontal pipes. The experiments were performed in an experimental two-phase flow loop which comprises a horizontal acrylic pipe of 26 mm internal diameter and 9 m length. Water and air were used to produce the two-phase flow, and their flow rates were separately controlled to produce different flow conditions. As a parameter of choice, the translational velocity of air bubbles was determined by each of the techniques and comparatively evaluated along with a mechanistic flow model. The results obtained show good agreement among all techniques. The visualization of flow obtained by the different techniques is also presented.
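A hedged sketch of one common way to estimate the translational velocity of gas structures from two axially separated sensor signals: the lag maximizing their cross-correlation gives the transit time, and velocity = sensor spacing / transit time. This is a generic estimator, not necessarily the exact processing used with each technique above.

```python
import numpy as np

def translational_velocity(upstream, downstream, sample_rate_hz, spacing_m):
    up = upstream - np.mean(upstream)
    down = downstream - np.mean(downstream)
    corr = np.correlate(down, up, mode="full")
    lag = np.argmax(corr) - (len(up) - 1)        # samples by which 'downstream' trails 'upstream'
    transit_time = lag / sample_rate_hz
    return spacing_m / transit_time

fs, spacing, delay = 1000.0, 0.05, 0.02          # 50 mm spacing, 20 ms transit time
t = np.arange(0, 1, 1 / fs)
up = np.exp(-((t - 0.3) / 0.01) ** 2)            # synthetic bubble signature
down = np.exp(-((t - 0.3 - delay) / 0.01) ** 2)
print(translational_velocity(up, down, fs, spacing))   # ~2.5 m/s
```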
Assessment of knowledge transfer in the context of biomechanics
NASA Astrophysics Data System (ADS)
Hutchison, Randolph E.
The dynamic act of knowledge transfer, or the connection of a student's prior knowledge to features of a new problem, could be considered one of the primary goals of education. Yet studies highlight more instances of failure than success. This dissertation focuses on how knowledge transfer takes place during individual problem solving, in classroom settings and during group work. Through the lens of dynamic transfer, or how students connect prior knowledge to problem features, this qualitative study focuses on a methodology to assess transfer in the context of biomechanics. The first phase of this work investigates how a pedagogical technique based on situated cognition theory affects students' ability to transfer knowledge gained in a biomechanics class to later experiences both in and out of the classroom. A post-class focus group examined events the students remembered from the class, what they learned from them, and how they connected them to later relevant experiences inside and outside the classroom. These results were triangulated with conceptual gains evaluated through concept inventories and pre- and post- content tests. Based on these results, the next two phases of the project take a more in-depth look at dynamic knowledge transfer during independent problem-solving and group project interactions, respectively. By categorizing prior knowledge (Source Tools), problem features (Target Tools) and the connections between them, results from the second phase of this study showed that within individual problem solving, source tools were almost exclusively derived from "propagated sources," i.e. those based on an authoritative source. This differs from findings in the third phase of the project, in which a mixture of "propagated" sources and "fabricated" sources, i.e. those based on student experiences, were identified within the group project work. This methodology is effective at assessing knowledge transfer in the context of biomechanics through evidence of the ability to identify differing patterns of how different students apply prior knowledge and make new connections between prior knowledge and current problem features in different learning situations. Implications for the use of this methodology include providing insight into not only students' prior knowledge, but also how they connect this prior knowledge to problem features (i.e. dynamic knowledge transfer). It also allows the identification of instances in which external input from other students or the instructor prompted knowledge transfer to take place. The use of this dynamic knowledge transfer lens allows the addressing of gaps in student understanding, and permits further investigations of techniques that increase instances of successful knowledge transfer.
Arruti, Andoni; Cearreta, Idoia; Álvarez, Aitor; Lazkano, Elena; Sierra, Basilio
2014-01-01
Study of emotions in human–computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish Languages using different methods for feature selection. RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek for the most relevant feature subset. The three phases approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker dependent features is proposed for both languages and new perspectives are suggested. PMID:25279686
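A hedged sketch of a greedy forward feature-subset search (in the spirit of FSS-Forward) wrapped around an instance-based learner (k-NN) and scored by cross-validation, with scikit-learn assumed; this is not the evolutionary search used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_selection(X, y, max_features=10, cv=5):
    remaining = list(range(X.shape[1]))
    selected, best_score = [], -np.inf
    while remaining and len(selected) < max_features:
        scores = {
            f: cross_val_score(KNeighborsClassifier(), X[:, selected + [f]], y, cv=cv).mean()
            for f in remaining
        }
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:
            break                                # stop when no candidate improves the score
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = scores[f_best]
    return selected, best_score
```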
A staggered conservative scheme for every Froude number in rapidly varied shallow water flows
NASA Astrophysics Data System (ADS)
Stelling, G. S.; Duinmeijer, S. P. A.
2003-12-01
This paper proposes a numerical technique that in essence is based upon the classical staggered grids and implicit numerical integration schemes, but that can be applied to problems that include rapidly varied flows as well. Rapidly varied flows occur, for instance, in hydraulic jumps and bores. Inundation of dry land implies sudden flow transitions due to obstacles such as road banks. Near such transitions the grid resolution is often low compared to the gradients of the bathymetry. In combination with the local invalidity of the hydrostatic pressure assumption, conservation properties become crucial. The scheme described here, combines the efficiency of staggered grids with conservation properties so as to ensure accurate results for rapidly varied flows, as well as in expansions as in contractions. In flow expansions, a numerical approximation is applied that is consistent with the momentum principle. In flow contractions, a numerical approximation is applied that is consistent with the Bernoulli equation. Both approximations are consistent with the shallow water equations, so under sufficiently smooth conditions they converge to the same solution. The resulting method is very efficient for the simulation of large-scale inundations.
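For reference, the one-dimensional shallow water equations in conservative form, i.e. the continuous system that such staggered schemes discretize; the paper's specific discretization is not reproduced here. Here h is the water depth, u the velocity, g gravity, and z_b the bed level.

```latex
\begin{align}
  \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} &= 0, \\
  \frac{\partial (hu)}{\partial t}
    + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2} g h^{2}\right)
    &= -\,g\,h\,\frac{\partial z_b}{\partial x}.
\end{align}
```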
Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.
Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio
2018-02-21
Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed until reaching the nowadays best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two breakpoint elimination leading to a hybrid approach. Several experiments were performed with one-hundred randomly generated permutations, single benchmark permutations, and biological permutations. Results of the experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA showed to improve the results of OBMA for permutations greater than or equal to 60. The applicability of our proposed algorithms was checked processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
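A hedged sketch of two ingredients named above: counting the breakpoints of an unsigned permutation (positions where consecutive elements are not consecutive integers, with the permutation framed by 0 and n+1), and the "opposite" permutation commonly used in opposition-based learning (element i mapped to n + 1 − pi[i]). The full memetic algorithm is not reproduced.

```python
def breakpoints(perm):
    framed = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(framed, framed[1:]) if abs(a - b) != 1)

def opposite(perm):
    n = len(perm)
    return [n + 1 - v for v in perm]

p = [3, 1, 4, 2]
print(breakpoints(p), opposite(p))    # 5 breakpoints, opposite permutation [2, 4, 1, 3]
```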
Neural networks for structural design - An integrated system implementation
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han
1992-01-01
The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keepin, G.R.
Over the years the Los Alamos safeguards program has developed, tested, and implemented a broad range of passive and active nondestructive analysis (NDA) instruments (based on gamma and x-ray detection and neutron counting) that are now widely employed in safeguarding nuclear materials of all forms. Here very briefly, the major categories of gamma ray and neutron based NDA techniques, give some representative examples of NDA instruments currently in use, and cite a few notable instances of state-of-the-art NDA technique development. Historical aspects and a broad overview of the safeguards program are also presented.
Path Planning For A Class Of Cutting Operations
NASA Astrophysics Data System (ADS)
Tavora, Jose
1989-03-01
Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.
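A hedged baseline sketch for the no-load path problem: a greedy nearest-neighbour rule that always moves the tool to the contour whose start point is closest to the current position. This is a simple baseline, not the heuristic-search method of the paper, and the data layout is an assumption.

```python
import math

def order_contours(contours, home=(0.0, 0.0)):
    """contours: list of (start_point, end_point) tuples, points given as (x, y)."""
    remaining = list(range(len(contours)))
    position, order, no_load = home, [], 0.0
    while remaining:
        nxt = min(remaining, key=lambda i: math.dist(position, contours[i][0]))
        no_load += math.dist(position, contours[nxt][0])   # accumulate pen-up travel
        position = contours[nxt][1]                        # cutting ends at the contour's end point
        order.append(nxt)
        remaining.remove(nxt)
    return order, no_load
```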
Shape-Tailorable Graphene-Based Ultra-High-Rate Supercapacitor for Wearable Electronics.
Xie, Binghe; Yang, Cheng; Zhang, Zhexu; Zou, Peichao; Lin, Ziyin; Shi, Gaoquan; Yang, Quanhong; Kang, Feiyu; Wong, Ching-Ping
2015-06-23
With the bloom of wearable electronics, it is becoming necessary to develop energy storage units, e.g., supercapacitors that can be arbitrarily tailored at the device level. Although gel electrolytes have been applied in supercapacitors for decades, no report has studied the shape-tailorable capability of a supercapacitor, for instance, where the device still works after being cut. Here we report a tailorable gel-based supercapacitor with symmetric electrodes prepared by combining electrochemically reduced graphene oxide deposited on a nickel nanocone array current collector with a unique packaging method. This supercapacitor with good flexibility and consistency showed excellent rate performance, cycling stability, and mechanical properties. As a demonstration, these tailorable supercapacitors connected in series can be used to drive small gadgets, e.g., a light-emitting diode (LED) and a minimotor propeller. As simple as it is (electrochemical deposition, stencil printing, etc.), this technique can be used in wearable electronics and miniaturized device applications that require arbitrarily shaped energy storage units.
NASA Astrophysics Data System (ADS)
Pérez Ramos, A.; Robleda Prieto, G.
2016-06-01
Indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering throw large windows in combination with the apse shape makes difficult to find proper conditions to photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posted by virtualizing indoor Gothic churches, making that task more affordable using exclusively techniques base on images. It reviews a previous proposed methodology using a DSRL camera with 18-135 lens commonly used for close range photogrammetry and add another one using a HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office-work are simplified. The proposed methodology provides photographs in such a good conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time consuming in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been decided to apply it to the apse since it is considered one of the most complex elements of Gothic churches and it could be extended to the whole building.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
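A hedged sketch of the 1-tree lower bound at the heart of the Held-Karp relaxation: a minimum spanning tree over all nodes except node 0 (computed with Prim's algorithm) plus the two cheapest edges incident to node 0. The Lagrangian node-penalty updates and the constraint-programming filtering discussed above are omitted.

```python
import numpy as np

def one_tree_bound(dist):
    n = len(dist)
    in_tree = [False] * n
    in_tree[0] = True                           # node 0 is excluded from the spanning tree
    in_tree[1] = True                           # start Prim's algorithm from node 1
    key = [dist[1][v] for v in range(n)]
    mst_cost = 0.0
    for _ in range(n - 2):
        u = min((v for v in range(2, n) if not in_tree[v]), key=lambda v: key[v])
        mst_cost += key[u]
        in_tree[u] = True
        for v in range(2, n):
            if not in_tree[v]:
                key[v] = min(key[v], dist[u][v])
    two_cheapest = sorted(dist[0][v] for v in range(1, n))[:2]
    return mst_cost + sum(two_cheapest)

d = np.array([[0, 2, 9, 10],
              [2, 0, 6, 4],
              [9, 6, 0, 3],
              [10, 4, 3, 0]], dtype=float)
print(one_tree_bound(d))                        # 18.0, a lower bound on the optimal tour length
```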
NASA Technical Reports Server (NTRS)
Albornoz, Caleb Ronald
2012-01-01
Thousands of millions of documents are stored and updated daily in the World Wide Web. Most of the information is not efficiently organized to build knowledge from the stored data. Nowadays, search engines are mainly used by users who rely on their skills to look for the information needed. This paper presents different techniques search engine users can apply in Google Search to improve the relevancy of search results. According to the Pew Research Center, the average person spends eight hours a month searching for the right information. For instance, a company that employs 1000 employees wastes $2.5 million dollars on looking for nonexistent and/or not found information. The cost is very high because decisions are made based on the information that is readily available to use. Whenever the information necessary to formulate an argument is not available or found, poor decisions may be made and mistakes will be more likely to occur. Also, the survey indicates that only 56% of Google users feel confident with their current search skills. Moreover, just 76% of the information that is available on the Internet is accurate.
Volcano remote sensing with ground-based spectroscopy.
McGonigle, Andrew J S
2005-12-15
The chemical compositions and emission rates of volcanic gases carry important information about underground magmatic and hydrothermal conditions, with application in eruption forecasting. Volcanic plumes are also studied because of their impacts upon the atmosphere, climate and human health. Remote sensing techniques are being increasingly used in this field because they provide real-time data and can be applied at safe distances from the target, even throughout violent eruptive episodes. However, notwithstanding the many scientific insights into volcanic behaviour already achieved with these approaches, technological limitations have placed firm restrictions upon the utility of the acquired data. For instance, volcanic SO(2) emission rate measurements are typically inaccurate (errors can be greater than 100%) and have poor time resolution (ca once per week). Volcanic gas geochemistry is currently being revolutionized by the recent implementation of a new generation of remote sensing tools, which are overcoming the above limitations and are providing degassing data of unprecedented quality. In this article, I review this field at this exciting point of transition, covering the techniques used and the insights thereby obtained, and I speculate upon the breakthroughs that are now tantalizingly close.
Characterizing Spatial Organization of Cell Surface Receptors in Human Breast Cancer with STORM
NASA Astrophysics Data System (ADS)
Lyall, Evan; Chapman, Matthew R.; Sohn, Lydia L.
2012-02-01
Regulation and control of complex biological functions are dependent upon spatial organization of biological structures at many different length scales. For instance Eph receptors and their ephrin ligands bind when opposing cells come into contact during development, resulting in spatial organizational changes on the nanometer scale that lead to changes on the macro scale, in a process known as organ morphogenesis. One technique able to probe this important spatial organization at both the nanometer and micrometer length scales, including at cell-cell junctions, is stochastic optical reconstruction microscopy (STORM). STORM is a technique that localizes individual fluorophores based on the centroids of their point spread functions and then reconstructs a composite image to produce super resolved structure. We have applied STORM to study spatial organization of the cell surface of human breast cancer cells, specifically the organization of tyrosine kinase receptors and chemokine receptors. A better characterization of spatial organization of breast cancer cell surface proteins is necessary to fully understand the tumorigenisis pathways in the most common malignancy in United States women.
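A hedged sketch of the localization step underlying STORM: a symmetric 2D Gaussian is fitted to the camera image of a single blinking fluorophore, and the fitted centre gives the sub-pixel molecule position. SciPy is assumed; the full reconstruction pipeline (drift correction, frame accumulation, rendering) is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def localize(patch):
    ny, nx = patch.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = (patch.max() - patch.min(), nx / 2, ny / 2, 1.5, patch.min())
    popt, _ = curve_fit(gaussian_2d, (x.ravel(), y.ravel()), patch.ravel(), p0=p0)
    return popt[1], popt[2]                     # sub-pixel (x0, y0) estimate

x, y = np.meshgrid(np.arange(11), np.arange(11))
patch = 100 * np.exp(-((x - 5.3) ** 2 + (y - 4.7) ** 2) / (2 * 1.5 ** 2)) + 10
print(localize(patch))                          # ~(5.3, 4.7)
```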
Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework
Kroes, Thomas; Post, Frits H.; Botha, Charl P.
2012-01-01
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to DVR. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292
Demographic management in a federated healthcare environment.
Román, I; Roa, L M; Reina-Tosina, J; Madinabeitia, G
2006-09-01
The purpose of this paper is to provide a further step toward the decentralization of identification and demographic information about persons by solving issues related to the integration of demographic agents in a federated healthcare environment. The aim is to identify a particular person in every system of a federation and to obtain a unified view of his/her demographic information stored in different locations. This work is based on semantic models and techniques, and pursues the reconciliation of several current standardization works including ITU-T's Open Distributed Processing, CEN's prEN 12967, OpenEHR's dual and reference models, CEN's General Purpose Information Components and CORBAmed's PID service. We propose a new paradigm for the management of person identification and demographic data, based on the development of an open architecture of specialized distributed components together with the incorporation of techniques for the efficient management of domain ontologies, in order to have a federated demographic service. This new service enhances previous correlation solutions, sharing ideas with different standards and domains such as semantic techniques and database systems. The federation philosophy requires us to devise solutions to the semantic, functional and instance incompatibilities in our approach. Although this work is based on several models and standards, we have improved them by combining their contributions and developing a federated architecture that does not require the centralization of demographic information. The solution is thus a good approach to addressing integration problems, and the applied methodology can be easily extended to other tasks involved in the healthcare organization.
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-07-14
A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
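As a rough illustration of the regression step described above, the sketch below maps a set of synthetic "error indicator" features to a surrogate-model error with a random forest and then applies the prediction as an additive correction to a surrogate QoI. The data, feature counts, and the use of a single global model are placeholder assumptions; the paper's locality step (classification or clustering of feature space) and the actual flow-simulation features are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training set: rows = (parameter instance, time instance),
# columns = cheap "error indicators" produced by the surrogate model.
n_samples, n_features = 2000, 12
X = rng.normal(size=(n_samples, n_features))
true_error = 0.5 * X[:, 0] ** 2 - 0.3 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=n_samples)

X_train, X_test, e_train, e_test = train_test_split(X, true_error, random_state=0)

# Regress the QoI error on the indicators (a single global model; the paper
# additionally localizes regression models via classification or clustering).
error_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, e_train)

# Use 1: correct the surrogate QoI prediction at each time instance.
surrogate_qoi = rng.normal(size=len(e_test))          # placeholder surrogate outputs
corrected_qoi = surrogate_qoi + error_model.predict(X_test)

print("R^2 of the error model on held-out data:", error_model.score(X_test, e_test))
```

Swapping in LASSO or another high-dimensional regressor only changes the `error_model` line.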
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
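The following sketch is not one of the mixed-integer formulations discussed above; it is only an exhaustive baseline, feasible for tiny instances, that makes the budget-constrained problem statement concrete: maximize total node profit subject to a node-cost budget, requiring the chosen nodes to induce a connected subgraph containing the distinguished terminals. The graph, costs, and profits are illustrative.

```python
import itertools
import networkx as nx

def best_connected_subgraph(G, terminals, budget):
    """Exhaustive baseline for tiny instances: maximize total node profit subject to
    a budget on total node cost, requiring connectivity and inclusion of `terminals`."""
    others = [v for v in G.nodes if v not in terminals]
    best = (float("-inf"), None)
    for k in range(len(others) + 1):
        for extra in itertools.combinations(others, k):
            nodes = set(terminals) | set(extra)
            cost = sum(G.nodes[v]["cost"] for v in nodes)
            if cost > budget:
                continue
            if not nx.is_connected(G.subgraph(nodes)):
                continue
            profit = sum(G.nodes[v]["profit"] for v in nodes)
            if profit > best[0]:
                best = (profit, nodes)
    return best

if __name__ == "__main__":
    G = nx.grid_2d_graph(3, 3)                      # toy landscape grid
    for i, v in enumerate(G.nodes):
        G.nodes[v]["cost"] = 1 + (i % 3)
        G.nodes[v]["profit"] = (i * 7) % 5
    print(best_connected_subgraph(G, terminals=[(0, 0), (2, 2)], budget=10))
```

The subtour-elimination and flow encodings studied in the paper scale far beyond this; the brute force only pins down the objective and constraints.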
NASA Astrophysics Data System (ADS)
Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.
2011-04-01
The article studies the dynamic performance of a family of maximum power point tracking (MPPT) circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique, which can be considered a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point to its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on the sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight into the damping degree and settling time of maximum-seeking waveforms. This article shows the transient waveforms for three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
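A toy discrete-time simulation of the sinusoidal ESC loop described above is sketched below: a small sinusoidal dither perturbs the operating point, the measured power is washed out, synchronously demodulated, low-pass filtered, and integrated toward the maximum. The quadratic power curve, gains, and filter time constants are illustrative assumptions rather than values from the article, and the root-locus analysis is not reproduced.

```python
import numpy as np

# Toy static photovoltaic power curve with its maximum at v = 17 V (illustrative only).
def power(v):
    return 100.0 - 0.5 * (v - 17.0) ** 2

dt, T = 1e-3, 5.0                  # time step and horizon [s]
a, w = 0.2, 2 * np.pi * 50.0       # dither amplitude [V] and frequency [rad/s]
k = 40.0                           # integrator gain
tau_hp, tau_lp = 0.05, 0.02        # washout and demodulation filter time constants [s]

v_hat, p_avg, grad_lp = 10.0, power(10.0), 0.0
for n in range(int(T / dt)):
    dither = np.sin(w * n * dt)
    p = power(v_hat + a * dither)                               # perturbed measurement
    p_avg += dt / tau_hp * (p - p_avg)                          # washout: remove the slow part of p
    grad_lp += dt / tau_lp * ((p - p_avg) * dither - grad_lp)   # demodulate + low-pass filter
    v_hat += dt * k * grad_lp                                   # integrate toward the maximum

print(f"estimated MPP voltage: {v_hat:.2f} V (true optimum: 17 V)")
```

The low-pass stage here is a single first-order filter; the article's point is precisely that the choice of this filter instance shapes the damping and settling of the seeking waveform.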
Hardware based redundant multi-threading inside a GPU for improved reliability
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-05-05
A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or more machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
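The sketch below illustrates the workflow on a toy single-machine example with release dates (1|r_j|L_max): the "nearby" instance with all release dates set to zero is polynomially solvable by the earliest-due-date (EDD) rule, its job order is applied back to the original instance, and the relaxed optimum serves as a lower bound on the original optimum. The choice of metric and of the approximating instance is an illustrative assumption, not the paper's exact construction or error bound.

```python
# Illustrative sketch (not the paper's exact construction): for 1|r_j|L_max, take the
# "nearby" instance with all release dates set to 0, which EDD solves optimally, and
# apply the resulting job order back to the original instance.
def lmax_of_order(order, release, proc, due):
    t, lmax = 0, float("-inf")
    for j in order:
        t = max(t, release[j]) + proc[j]       # completion time respecting release dates
        lmax = max(lmax, t - due[j])
    return lmax

def approximate_lmax(release, proc, due):
    jobs = range(len(proc))
    edd_order = sorted(jobs, key=lambda j: due[j])      # optimal for the r_j = 0 instance
    ub = lmax_of_order(edd_order, release, proc, due)   # value on the original instance
    # The relaxed instance (r_j = 0) can only finish jobs earlier, so its optimal
    # L_max is a lower bound on the optimum of the original instance.
    lb = lmax_of_order(edd_order, [0] * len(proc), proc, due)
    return edd_order, ub, lb

if __name__ == "__main__":
    release = [0, 3, 1, 6]
    proc    = [4, 2, 3, 2]
    due     = [8, 6, 9, 11]
    order, ub, lb = approximate_lmax(release, proc, due)
    print("EDD order:", order, " L_max on original:", ub, " lower bound on optimum:", lb)
```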
Mobile device geo-localization and object visualization in sensor networks
NASA Astrophysics Data System (ADS)
Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael
2014-10-01
In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multifunctional application design. The application applies different localization and visualization methods, including use of the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.
Distance majorization and its applications
Chi, Eric C.; Zhou, Hua; Lange, Kenneth
2014-01-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
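A minimal sketch of the ingredients named above, under simplifying assumptions: a least-squares objective, two sets that are easy to project onto (a box and a halfspace), a quadratic penalty built from those projections, and the closed-form MM update that this particular surrogate admits. The quasi-Newton acceleration used in the paper is omitted.

```python
import numpy as np

def proj_box(x, lo, hi):
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, c):
    """Project onto the halfspace {x : a.x <= c}."""
    viol = a @ x - c
    return x if viol <= 0 else x - viol / (a @ a) * a

def distance_majorization(b, projections, mu=1.0, growth=1.2, iters=200):
    """Minimize 0.5*||x - b||^2 over the intersection of the sets via an MM step on the
    penalized objective 0.5*||x - b||^2 + (mu/2) * sum_i dist(x, C_i)^2."""
    x = b.copy()
    for _ in range(iters):
        projected = [P(x) for P in projections]
        # Exact minimizer of the quadratic MM surrogate for this objective:
        x = (b + mu * sum(projected)) / (1.0 + mu * len(projected))
        mu *= growth                       # classical penalty continuation
    return x

if __name__ == "__main__":
    b = np.array([3.0, -2.0])
    sets = [lambda x: proj_box(x, -1.0, 1.0),
            lambda x: proj_halfspace(x, np.array([1.0, 1.0]), 0.5)]
    x = distance_majorization(b, sets)
    print("solution:", x)   # closest point to b in the box intersected with the halfspace
```

Gradually increasing the penalty parameter is the classical penalty method mentioned in the abstract; for non-quadratic objectives the surrogate would be minimized by a (quasi-Newton accelerated) inner step instead of the closed form used here.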
Computational Study for Planar Connected Dominating Set Problem
NASA Astrophysics Data System (ADS)
Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua
The connected dominating set (CDS) problem is a well-studied NP-hard problem with many important applications. Dorn et al. [ESA2005, LNCS3669, pp95-106] introduce a new technique to generate 2^{O(sqrt{n})} time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm is yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of instances that can be solved by the algorithm mainly depends on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in a practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.
Rajasekhar, Achanta; Gimi, Barjor; Hu, Walter
2013-01-01
We live in a world of convergence where scientific techniques from a variety of seemingly disparate fields are being applied cohesively to the study and solution of biomedical problems. For instance, the semiconductor processing field has been developed primarily to cater to the need for ever decreasing transistor size and cost while increasing the functionality of electronic circuits. In recent years, pioneers in this field have equipped themselves with a powerful understanding of how the same techniques can be applied in the biomedical field to develop new and efficient systems for the diagnosis, analysis and treatment of various conditions in the human body. In this paper, we review the major inventions and experimental methods which have been developed for nano/micro fluidic channels, nanoparticles fabricated by top-down methods, and in-vivo nanoporous microcages for effective drug delivery. This paper focuses on the information contained in patents as well as the corresponding technical publications. The goal of the paper is to help emerging scientists understand and improve upon these inventions. PMID:24312161
NASA Astrophysics Data System (ADS)
Liu, Jingfa; Song, Beibei; Liu, Zhaoxia; Huang, Weibo; Sun, Yuanyuan; Liu, Wenjie
2013-11-01
Protein structure prediction (PSP) is a classical NP-hard problem in computational biology. The energy-landscape paving (ELP) method is a class of heuristic global optimization algorithm that has been successfully applied to solving many optimization problems with complex energy landscapes in the continuous space. By putting forward a new update mechanism for the histogram function in ELP and incorporating into ELP both the generation of initial conformations based on a greedy strategy and a neighborhood search strategy based on pull moves, an improved energy-landscape paving (ELP+) method is proposed. Twelve general benchmark instances are first tested on both two-dimensional and three-dimensional (3D) face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice models. The lowest energies found by ELP+ are as good as or better than those of other methods in the literature for all instances. Then, five sets of larger-scale instances, denoted by S, R, F90, F180, and CASP target instances, are tested on the 3D fcc HP lattice model. The proposed algorithm finds lower energies than those found by the five other methods in the literature. Not unexpectedly, this is particularly pronounced for the longer sequences considered. Computational results show that ELP+ is an effective method for PSP on the fcc HP lattice model.
Quantum interpolation for high-resolution sensing
Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola
2017-01-01
Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy. PMID:28196889
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.
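As a generic illustration of this kind of fit, the sketch below estimates the coefficients of a polynomial predictor of phenological development from daily maximum temperature, minimum temperature, and daylength by least squares on synthetic data. The functional form, coefficient values, and data are assumptions for illustration only; they are not the Robertson triquadratic form or the study's data (and, echoing the findings, no precipitation term is included).

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative polynomial predictor of daily phenological development rate from
# maximum temperature, minimum temperature and daylength.
def stage_rate(X, a0, a1, a2, a3, a4):
    tmax, tmin, dayl = X
    return a0 + a1 * tmax + a2 * tmin + a3 * dayl + a4 * tmax ** 2

rng = np.random.default_rng(1)
n = 300
tmax = rng.uniform(5, 30, n)
tmin = tmax - rng.uniform(5, 15, n)
dayl = rng.uniform(9, 16, n)
true = stage_rate((tmax, tmin, dayl), 0.1, 0.02, 0.01, 0.005, -0.0003)
obs = true + rng.normal(0, 0.01, n)                      # synthetic "observed" development rates

params, cov = curve_fit(stage_rate, (tmax, tmin, dayl), obs)
print("estimated coefficients:", np.round(params, 4))
```

Testing the fitted function on an independent year of data, as the study does, would simply reuse `stage_rate` with the estimated parameters.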
van Iersel, Leo; Kelk, Steven; Lekić, Nela; Scornavacca, Celine
2014-05-05
Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work (SIDMA 26(4):1635-1656, TCBB 10(1):18-25, SIDMA 28(1):49-66) and are publicly available. We also apply our methods to real data.
Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B
2016-05-21
The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
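For orientation, the sketch below implements the plain single-image bilateral filter (spatial Gaussian weight times range Gaussian weight) that the paper generalizes; it does not implement the proposed multi-modal, noise-covariance-aware filter. The test image and parameter values are arbitrary.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Plain edge-preserving bilateral filter for a 2-D float image (values roughly in [0, 1])."""
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    weights = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)      # spatial closeness
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))   # range (edge-preserving) term
            out += w * shifted
            weights += w
    return out / weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    step = np.where(np.arange(128) < 64, 0.2, 0.8)          # image with a vertical edge
    noisy = np.tile(step, (128, 1)) + rng.normal(0, 0.05, (128, 128))
    print("noise std before/after:",
          noisy[:, :60].std().round(3), bilateral_filter(noisy)[:, :60].std().round(3))
```

The paper's generalization replaces the scalar range term with one driven by the full second-order noise statistics of several aligned images; that covariance estimation is beyond this sketch.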
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, has been finalized in 2013 and it achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides the evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is firstly given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding unit and transform unit. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with linear prediction model improves intra prediction and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by adaptive multiple transform technique. Finally, in addition to deblocking filter and SAO, adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test condition during HEVC development. The simulation results show that significant performance improvement over HEVC standard can be achieved, especially for the high resolution video materials.
Using ontology databases for scalable query answering, inconsistency detection, and data integration
Dou, Dejing
2011-01-01
An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
Gruginskie, Lúcia Adriana Dos Santos; Vaccaro, Guilherme Luís Roehe
2018-01-01
The quality of the judicial system of a country can be assessed by the overall length of lawsuits, or the lead time. When the lead time is excessive, a country's economy can be affected, leading to the adoption of measures such as the creation of the Saturn Center in Europe. Although there are performance indicators to measure the lead time of lawsuits, the analysis and fitting of prediction models are still underdeveloped themes in the literature. To contribute to this subject, this article compares different prediction models according to their accuracy, sensitivity, specificity, precision, and F1 measure. The database used was from TRF4 - the Tribunal Regional Federal da 4a Região - a federal court in southern Brazil, corresponding to the 2nd Instance civil lawsuits completed in 2016. The models were fitted using support vector machine, naive Bayes, random forest, and neural network approaches with categorical predictor variables. The lead time of the 2nd Instance judgment was selected as the response variable, measured in days and categorized into bands. The comparison among the models showed that the support vector machine and random forest approaches produced measurements that were superior to those of the other models. The evaluation of the models was made using k-fold cross-validation, similar to that applied to the test models.
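A minimal sketch of the comparison workflow described above, under placeholder assumptions: categorical predictors are one-hot encoded, the lead-time band is the response, and SVM and random forest classifiers are compared by k-fold cross-validation. The synthetic table and column names stand in for the TRF4 data, and the naive Bayes and neural network models are omitted for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC

# Synthetic stand-in for the lawsuit table: categorical predictors and a lead-time band.
rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({
    "subject":    rng.choice(["tax", "pension", "contract"], n),
    "origin":     rng.choice(["capital", "interior"], n),
    "rapporteur": rng.choice(list("ABCDE"), n),
})
df["lead_time_band"] = np.where(
    (df["subject"] == "tax") & (df["origin"] == "capital"), "long", "short")

X, y = df.drop(columns="lead_time_band"), df["lead_time_band"]
encode = ColumnTransformer([("onehot", OneHotEncoder(), X.columns.tolist())])

for name, clf in [("SVM", SVC()), ("random forest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(make_pipeline(encode, clf), X, y, cv=5, scoring="accuracy")
    print(f"{name:13s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Sensitivity, specificity, precision, and F1 can be obtained the same way by changing the `scoring` argument.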
TMS-EEG: From basic research to clinical applications
NASA Astrophysics Data System (ADS)
Hernandez-Pavon, Julio C.; Sarvas, Jukka; Ilmoniemi, Risto J.
2014-11-01
Transcranial magnetic stimulation (TMS) combined with electroencephalography (EEG) is a powerful technique for non-invasively studying cortical excitability and connectivity. The combination of TMS and EEG has widely been used to perform basic research and recently has gained importance in different clinical applications. In this paper, we will describe the physical and biological principles of TMS-EEG and different applications in basic research and clinical applications. We will present methods based on independent component analysis (ICA) for studying the TMS-evoked EEG responses. These methods have the capability to remove and suppress large artifacts, making it feasible, for instance, to study language areas with TMS-EEG. We will discuss the different applications and limitations of TMS and TMS-EEG in clinical applications. Potential applications of TMS are presented, for instance in neurosurgical planning, depression and other neurological disorders. Advantages and disadvantages of TMS-EEG and its variants such as repetitive TMS (rTMS) are discussed in comparison to other brain stimulation and neuroimaging techniques. Finally, challenges that researchers face when using this technique will be summarized.
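The sketch below illustrates the ICA-based artifact suppression idea on synthetic multichannel data: decompose the signals into independent components, zero out components flagged as artifact-dominated, and back-project to channel space. The simulated "EEG", the TMS-like artifact, and the crude peak-to-RMS selection rule are illustrative assumptions, not the authors' component-selection procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 2000
t = np.arange(n_samples) / 1000.0

# Synthetic "EEG": small oscillatory brain sources plus one large decaying TMS-like artifact.
brain = 5e-6 * np.sin(2 * np.pi * 10 * t[None, :] + rng.uniform(0, np.pi, (3, 1)))
artifact = 5e-4 * np.exp(-t / 0.02) * np.sign(np.sin(2 * np.pi * 1 * t))
sources = np.vstack([brain, artifact[None, :]])
mixing = rng.normal(size=(n_channels, sources.shape[0]))
eeg = mixing @ sources                      # channels x samples

ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(eeg.T)       # samples x components

# Crude artifact detection: components with an extreme peak-to-RMS ratio.
ratios = np.abs(components).max(axis=0) / components.std(axis=0)
components[:, ratios > 8] = 0.0             # suppress artifact-dominated components

cleaned = ica.inverse_transform(components).T
print("max amplitude before/after cleaning:",
      np.abs(eeg).max(), np.abs(cleaned).max())
```

In real TMS-EEG data the artifact components are usually identified from their time courses and topographies rather than a single amplitude ratio.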
PuReD-MCL: a graph-based PubMed document clustering methodology.
Theodosiou, T; Darzentas, N; Angelis, L; Ouzounis, C A
2008-09-01
Biomedical literature is the principal repository of biomedical knowledge, with PubMed being the most complete database collecting, organizing and analyzing such textual knowledge. There are numerous efforts that attempt to exploit this information by using text mining and machine learning techniques. We developed a novel approach, called PuReD-MCL (Pubmed Related Documents-MCL), which is based on the graph clustering algorithm MCL and relevant resources from PubMed. PuReD-MCL avoids using natural language processing (NLP) techniques directly; instead, it takes advantage of existing resources, available from PubMed. PuReD-MCL then clusters documents efficiently using the MCL graph clustering algorithm, which is based on graph flow simulation. This process allows users to analyse the results by highlighting important clues, and finally to visualize the clusters and all relevant information using an interactive graph layout algorithm, for instance BioLayout Express 3D. The methodology was applied to two different datasets, previously used for the validation of the document clustering tool TextQuest. The first dataset involves the organisms Escherichia coli and yeast, whereas the second is related to Drosophila development. PuReD-MCL successfully reproduces the annotated results obtained from TextQuest, while at the same time providing additional insights into the clusters and the corresponding documents. Source code in Perl and R is available from http://tartara.csd.auth.gr/~theodos/
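To make the flow-simulation step concrete, here is a minimal numpy implementation of the MCL loop (expansion, inflation, column renormalization) applied to a toy document-similarity graph. PuReD-MCL's construction of the graph from PubMed related-document links, and the BioLayout visualization, are not reproduced, and the attractor-based cluster extraction is a simplified reading of standard MCL post-processing.

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iters=50, tol=1e-6):
    """Markov Cluster algorithm: alternate matrix expansion and inflation until convergence."""
    M = adjacency.astype(float) + np.eye(len(adjacency))   # add self-loops
    M /= M.sum(axis=0, keepdims=True)                      # make columns stochastic
    for _ in range(iters):
        last = M.copy()
        M = np.linalg.matrix_power(M, expansion)           # expansion: flow spreads along walks
        M = M ** inflation                                  # inflation: strengthen strong flows
        M /= M.sum(axis=0, keepdims=True)
        if np.abs(M - last).max() < tol:
            break
    # Rows that retain non-negligible mass act as cluster "attractors"; group nodes by them.
    clusters = {}
    for row in M:
        members = tuple(np.nonzero(row > 1e-5)[0])
        if members:
            clusters.setdefault(members, set()).update(members)
    return list(clusters.values())

if __name__ == "__main__":
    # Two obvious document clusters {0,1,2} and {3,4,5} linked by a single weak edge.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1.0
    print(mcl(A))
```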
Size reduction techniques for vital compliant VHDL simulation models
Rich, Marvin J.; Misra, Ashutosh
2006-08-01
A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that correspond to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
DServO: A Peer-to-Peer-based Approach to Biomedical Ontology Repositories.
Mambone, Zakaria; Savadogo, Mahamadi; Some, Borlli Michel Jonas; Diallo, Gayo
2015-01-01
We present in this poster an extension of the ServO ontology server system, which adopts a decentralized peer-to-peer approach for managing multiple heterogeneous knowledge organization systems. It relies on the use of the JXTA protocol coupled with information retrieval techniques to provide a decentralized infrastructure for managing multiple instances of ontology repositories.
Emotional Design Tutoring System Based on Multimodal Affective Computing Techniques
ERIC Educational Resources Information Center
Wang, Cheng-Hung; Lin, Hao-Chiang Koong
2018-01-01
In a traditional class, the role of the teacher is to teach and that of the students is to learn. However, the constant and rapid technological advancements have transformed education in numerous ways. For instance, in addition to traditional, face to face teaching, E-learning is now possible. Nevertheless, face to face teaching is unavailable in…
Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo
2016-01-01
Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610
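As a small illustration of clustering a citation network, the sketch below applies a community detection routine available in networkx to a toy publication graph. Note that this uses greedy modularity optimization rather than the map equation methods that performed best in the study; the publication identifiers and edges are invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy undirected citation-relation network: node = publication, edge = citation link.
edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"),       # one tightly linked group
         ("p4", "p5"), ("p4", "p6"), ("p5", "p6"),       # a second group
         ("p3", "p4")]                                    # a single cross-citation
G = nx.Graph()
G.add_edges_from(edges)

communities = greedy_modularity_communities(G)
for k, community in enumerate(communities, start=1):
    print(f"cluster {k}: {sorted(community)}")
```

On real citation networks with millions of publications, the choice of method and resolution matters far more than on this toy example, which is exactly the trade-off the study quantifies.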
Four-dimensional reconstruction of cultural heritage sites based on photogrammetry and clustering
NASA Astrophysics Data System (ADS)
Voulodimos, Athanasios; Doulamis, Nikolaos; Fritsch, Dieter; Makantasis, Konstantinos; Doulamis, Anastasios; Klein, Michael
2017-01-01
A system designed and developed for the three-dimensional (3-D) reconstruction of cultural heritage (CH) assets is presented. Two basic approaches are presented. The first one, resulting in an "approximate" 3-D model, uses images retrieved in online multimedia collections; it employs a clustering-based technique to perform content-based filtering and eliminate outliers that significantly reduce the performance of 3-D reconstruction frameworks. The second one is based on input image data acquired through terrestrial laser scanning, as well as close range and airborne photogrammetry; it follows a sophisticated multistep strategy, which leads to a "precise" 3-D model. Furthermore, the concept of change history maps is proposed to address the computational limitations involved in four-dimensional (4-D) modeling, i.e., capturing 3-D models of a CH landmark or site at different time instances. The system also comprises a presentation viewer, which manages the display of the multifaceted CH content collected and created. The described methods have been successfully applied and evaluated in challenging real-world scenarios, including the 4-D reconstruction of the historic Market Square of the German city of Calw in the context of the 4-D-CH-World EU project.
Walker, J.F.
1993-01-01
Selected statistical techniques were applied to three urban watersheds in Texas and Minnesota and three rural watersheds in Illinois. For the urban watersheds, single- and paired-site data-collection strategies were considered. The paired-site strategy was much more effective than the single-site strategy for detecting changes. Analysis of storm load regression residuals demonstrated the potential utility of regressions for variability reduction. For the rural watersheds, none of the selected techniques were effective at identifying changes, primarily due to a small degree of management-practice implementation, potential errors introduced through the estimation of storm load, and small sample sizes. A Monte Carlo sensitivity analysis was used to determine the percent change in water chemistry that could be detected for each watershed. In most instances, the use of regressions improved the ability to detect changes.
A Study of Production of Miscibility Gap Alloys with Controlled Structures
NASA Technical Reports Server (NTRS)
Parr, R. A.; Johnston, M. H.; Burka, J. A.; Davis, J. H.; Lee, J. A.
1983-01-01
Composite materials were directionally solidified using a new technique to align the constituents longitudinally along the length of the specimen. In some instances a tin coating was applied and diffused into the sample to form a high transition temperature superconducting phase. The superconducting properties were measured and compared with the properties obtained for powder composites and redirectionally solidified powder compacts. The samples which were compacted and redirectionally solidified showed the highest transition temperature and widest transition range. This indicates that both steps, powder compaction and resolidification, determine the final superconducting properties of the material.
Quantum speedup of the traveling-salesman problem for bounded-degree graphs
NASA Astrophysics Data System (ADS)
Moylett, Dominic J.; Linden, Noah; Montanaro, Ashley
2017-03-01
The traveling-salesman problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3 by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for when the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.
Definition and Formulation of Scientific Prediction and Its Role in Inquiry-Based Laboratories
ERIC Educational Resources Information Center
Mauldin, Robert F.
2011-01-01
The formulation of a scientific prediction by students in college-level laboratories is proposed. This activity will develop the students' ability to apply abstract concepts via deductive reasoning. For instances in which a hypothesis will be tested by an experiment, students should develop a prediction that states what sort of experimental…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krupar, V.; Eastwood, J. P.; Kruparova, O.
Coronal mass ejections (CMEs) are large-scale eruptions of magnetized plasma that may cause severe geomagnetic storms if Earth directed. Here, we report a rare instance with comprehensive in situ and remote sensing observations of a CME combining white-light, radio, and plasma measurements from four different vantage points. For the first time, we have successfully applied a radio direction-finding technique to an interplanetary type II burst detected by two identical widely separated radio receivers. The derived locations of the type II and type III bursts are in general agreement with the white-light CME reconstruction. We find that the radio emission arises from the flanks of the CME and is most likely associated with the CME-driven shock. Our work demonstrates the complementarity between radio triangulation and 3D reconstruction techniques for space weather applications.
Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models.
Le Muzic, M; Mindek, P; Sorger, J; Autin, L; Goodsell, D; Viola, I
2016-06-01
In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach for visibility specification was valuable and effective for both scientific and educational purposes.
Human vision-based algorithm to hide defective pixels in LCDs
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert
2006-02-01
Producing displays without pixel defects or repairing defective pixels is technically not possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible to the user by using image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First, the theoretical response of the human eye was analyzed on a series of images, both before and after applying the defective pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. A second method was to perform a psycho-visual test where users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as a pixel-data processing algorithm inside the display, for instance in an FPGA, a DSP or a microprocessor. The described techniques are valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD TV applications.
2012-01-01
...precision and accuracy. For instance, in international time metrology, two-way satellite time and frequency transfer (TWSTFT) (see e.g. [1]) ... can act as a time transfer system that is complementary to other high-quality systems such as TWSTFT and GPS. REFERENCES: [1] J. Levine, "A...
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, due to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the constant acquisition of high contrast images has become possible, improving the reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.
Gallardo, José E
2012-01-01
The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, artificial intelligence and especially bioinformatics. Due to its NP-hardness, we cannot expect to efficiently solve this problem using conventional exact techniques. This paper presents a heuristic to tackle this problem based on the use, at different levels, of a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but required execution times may increase considerably.
A novel fast ion chromatographic method for the analysis of fluoride in Antarctic snow and ice.
Severi, Mirko; Becagli, Silvia; Frosini, Daniele; Marconi, Miriam; Traversi, Rita; Udisti, Roberto
2014-01-01
Ice cores are widely used to reconstruct past changes of the climate system. For instance, the ice core record of numerous water-soluble and insoluble chemical species that are trapped in snow and ice offers the possibility to investigate past changes of various key compounds present in the atmosphere (i.e., aerosol, reactive gases). We developed a new method for the quantitative determination of fluoride in ice cores at sub-μg L⁻¹ levels by coupling a flow injection analysis technique with a fast ion chromatography separation based on the "heart cut" column switching technology. Sensitivity, linear range (up to 60 μg L⁻¹), reproducibility, and detection limit (0.02 μg L⁻¹) were evaluated for the new method. This method was successfully applied to the analysis of fluoride at trace levels in more than 450 recent snow samples collected during the 1998-1999 International Trans-Antarctica Scientific Expedition traverse in East Antarctica at sites located between 170 and 850 km from the coastline.
Study of Commercially Available Lobelia chinensis Products Using Bar-HRM Technology.
Sun, Wei; Yan, Song; Li, Jingjian; Xiong, Chao; Shi, Yuhua; Wu, Lan; Xiang, Li; Deng, Bo; Ma, Wei; Chen, Shilin
2017-01-01
There is an unmet need for herbal medicine identification using a fast, sensitive, and easy-to-use method that does not require complex infrastructure and well-trained technicians. For instance, the detection of adulterants in Lobelia chinensis herbal products has been challenging, since current detection technologies are not effective due to their own limits. High Resolution Melting (HRM) has emerged as a powerful new technology for clinical diagnosis, research in the food industry and plant molecular biology, and this method has already highlighted the complexity of species identification. In this study, we developed a method for species-specific detection of L. chinensis using HRM analysis combined with internal transcribed spacer 2. We then applied this method to commercial products purporting to contain L. chinensis. Our results demonstrated that HRM can differentiate L. chinensis from six common adulterants. HRM was proven to be a fast and accurate technique for testing the authenticity of L. chinensis in herbal products. Based on these results, a HRM approach for herbal authentication is provided.
Towards a Quality Assessment Method for Learning Preference Profiles in Negotiation
NASA Astrophysics Data System (ADS)
Hindriks, Koen V.; Tykhonov, Dmytro
In automated negotiation, information gained about an opponent's preference profile by means of learning techniques may significantly improve an agent's negotiation performance. It therefore is useful to gain a better understanding of how various negotiation factors influence the quality of learning. The quality of learning techniques in negotiation are typically assessed indirectly by means of comparing the utility levels of agreed outcomes and other more global negotiation parameters. An evaluation of learning based on such general criteria, however, does not provide any insight into the influence of various aspects of negotiation on the quality of the learned model itself. The quality may depend on such aspects as the domain of negotiation, the structure of the preference profiles, the negotiation strategies used by the parties, and others. To gain a better understanding of the performance of proposed learning techniques in the context of negotiation and to be able to assess the potential to improve the performance of such techniques a more systematic assessment method is needed. In this paper we propose such a systematic method to analyse the quality of the information gained about opponent preferences by learning in single-instance negotiations. The method includes measures to assess the quality of a learned preference profile and proposes an experimental setup to analyse the influence of various negotiation aspects on the quality of learning. We apply the method to a Bayesian learning approach for learning an opponent's preference profile and discuss our findings.
Efficient sequential and parallel algorithms for finding edit distance based motifs.
Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar
2016-08-18
Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. The algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance when using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in this paper are also applicable to other motif search problems such as Planted Motif Search (PMS) and Simple Motif Search (SMS).
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the order of the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology as a promising surrogate modeling technique for cost-effective optimization purposes.
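A minimal sketch of the POD ingredient described above: collect solution snapshots, take the SVD, truncate to the modes that capture most of the energy, and check the reconstruction error. The moving-front snapshots are a synthetic stand-in; the PSA process model, the Galerkin-projected DAE system, and the optimization layer are not reproduced.

```python
import numpy as np

# Synthetic snapshot matrix: each column is the spatial state of a stand-in PDE model
# at one time instance within a cycle. Real snapshots would come from the full PSA model.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 1.0, 60)
snapshots = np.column_stack([np.tanh(40 * (x - 0.2 - 0.6 * t)) for t in times])  # moving front

# POD: the left singular vectors of the snapshot matrix give the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1       # smallest basis capturing 99.99% of the energy
basis = U[:, :r]

# Project and reconstruct the snapshots to check how much the r-mode basis captures.
reconstruction = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"retained modes: {r}, relative reconstruction error: {rel_err:.2e}")
```

Steep moving fronts are exactly the case where many modes are needed, which is why the study builds separate bases for each operating step.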
A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification.
Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L
2018-05-08
Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.
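A minimal cluster-then-label sketch in this spirit is given below: unlabeled points inherit the majority label of the labeled points in their cluster, and an SVM is then trained on the augmented set. The clustering method (k-means here) and feature space are placeholders; the paper identifies high-density regions rather than fixed k-means clusters.

```python
# Hedged sketch of a cluster-then-label semi-supervised scheme.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def cluster_then_label(X, y, n_clusters=10):
    """y uses -1 for unlabeled instances; returns a trained SVM."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    y_aug = y.copy()
    for c in range(n_clusters):
        members = clusters == c
        labels = y[members & (y != -1)]
        if labels.size:                               # cluster contains labeled points
            majority = np.bincount(labels).argmax()
            y_aug[members & (y == -1)] = majority     # propagate to unlabeled members
    mask = y_aug != -1
    return SVC(kernel="rbf", gamma="scale").fit(X[mask], y_aug[mask])
```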
NASA Astrophysics Data System (ADS)
Tommasino, F.
2016-03-01
This review will summarize results obtained in the recent years applying the Local Effect Model (LEM) approach to the study of basic radiobiological aspects, as for instance DNA damage induction and repair, and charged particle track structure. The promising results obtained using different experimental techniques and looking at different biological end points, support the relevance of the LEM approach for the description of radiation effects induced by both low- and high-LET radiation. Furthermore, they suggest that nowadays the appropriate combination of experimental and modelling tools can lead to advances in the understanding of several open issues in the field of radiation biology.
Markov Chain Ontology Analysis (MCOA)
2012-01-01
Background Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. Results In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. Conclusion A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches. PMID:22300537
Markov Chain Ontology Analysis (MCOA).
Frost, H Robert; McCray, Alexa T
2012-02-03
Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
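The central computation in an MCOA-style analysis is the stationary importance of each node of the chain. The sketch below obtains it as the eigenvector of a damped (hence ergodic) transition matrix built from a toy graph over classes and instances; the paper's specific adjustment of the transition matrix and the mapping to enrichment scores are not reproduced here.

```python
# Stationary importance of nodes in a finite ergodic Markov chain (toy data).
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of a row-stochastic matrix P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    v = np.abs(v)
    return v / v.sum()

# Toy graph: 2 classes and 2 instances with directional relationships.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)
n = len(P)
P = 0.9 * P + 0.1 / n              # damping keeps the chain ergodic ("adjusted" matrix)
print(stationary_distribution(P))  # relative importance of each node
```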
Alligood, Christina A; Dorey, Nicole R; Mehrkam, Lindsay R; Leighty, Katherine A
2017-05-01
Environmental enrichment in zoos and aquariums is often evaluated at two overlapping levels: published research and day-to-day institutional record keeping. Several authors have discussed ongoing challenges with small sample sizes in between-groups zoological research and have cautioned against the inappropriate use of inferential statistics (Shepherdson, , International Zoo Yearbook, 38, 118-124; Shepherdson, Lewis, Carlstead, Bauman, & Perrin, Applied Animal Behaviour Science, 147, 298-277; Swaisgood, , Applied Animal Behaviour Science, 102, 139-162; Swaisgood & Shepherdson, , Zoo Biology, 24, 499-518). Multi-institutional studies are the typically-prescribed solution, but these are expensive and difficult to carry out. Kuhar ( Zoo Biology, 25, 339-352) provided a reminder that inferential statistics are only necessary when one wishes to draw general conclusions at the population level. Because welfare is assessed at the level of the individual animal, we argue that evaluations of enrichment efficacy are often instances in which inferential statistics may be neither necessary nor appropriate. In recent years, there have been calls for the application of behavior-analytic techniques to zoo animal behavior management, including environmental enrichment (e.g., Bloomsmith, Marr, & Maple, , Applied Animal Behaviour Science, 102, 205-222; Tarou & Bashaw, , Applied Animal Behaviour Science, 102, 189-204). Single-subject (also called single-case, or small-n) designs provide a means of designing evaluations of enrichment efficacy based on an individual's behavior. We discuss how these designs might apply to research and practice goals at zoos and aquariums, contrast them with standard practices in the field, and give examples of how each could be successfully applied in a zoo or aquarium setting. © 2017 Wiley Periodicals, Inc.
CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.
We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances by her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
A non-hydrostatic flat-bottom ocean model entirely based on Fourier expansion
NASA Astrophysics Data System (ADS)
Wirth, A.
2005-01-01
We show how to implement free-slip and no-slip boundary conditions in a three-dimensional Boussinesq flat-bottom ocean model based on Fourier expansion. Our method is inspired by the immersed or virtual boundary technique in which the effect of boundaries on the flow field is modeled by a virtual force field. Our method, however, explicitly depletes the velocity on the boundary induced by the pressure, while at the same time respecting the incompressibility of the flow field. Spurious spatial oscillations remain at a negligible level in the simulated flow field when using our technique and no filtering of the flow field is necessary. We furthermore show that by using the method presented here the residual velocities at the boundaries are easily reduced to a negligible value. This stands in contradistinction to previous calculations using the immersed or virtual boundary technique. The efficiency is demonstrated by simulating a Rayleigh impulsive flow, for which the time evolution of the simulated flow is compared to an analytic solution, and a three-dimensional Boussinesq simulation of ocean convection. The second instance is taken from a well-studied oceanographic context: a free-slip boundary condition is applied on the upper surface, the modeled sea surface, and a no-slip boundary condition to the lower boundary, the modeled ocean floor. Convergence properties of the method are investigated by solving a two-dimensional stationary problem at different spatial resolutions. The work presented here is restricted to a flat ocean floor. Extensions of our method to ocean models with a realistic topography are discussed.
Quantum annealing correction with minor embedding
NASA Astrophysics Data System (ADS)
Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.
2015-10-01
Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
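For readers unfamiliar with decoding minor-embedded problems, the sketch below shows the simplest baseline: each logical qubit is represented by a chain of physical qubits, and a broken chain (disagreeing spins) is repaired by majority vote. The paper's energy-minimization decoding is more powerful; this example only illustrates the logical-versus-physical distinction, with invented spin values.

```python
# Majority-vote decoding of minor-embedded chains (baseline sketch only).
def majority_vote_decode(physical_spins, chains):
    """physical_spins: dict qubit -> +/-1; chains: list of lists of physical qubits."""
    logical = []
    for chain in chains:
        s = sum(physical_spins[q] for q in chain)
        logical.append(1 if s >= 0 else -1)      # ties broken toward +1
    return logical

spins = {0: 1, 1: 1, 2: -1, 3: -1, 4: -1, 5: 1}  # a broken chain and an intact one
chains = [[0, 1, 2], [3, 4, 5]]                  # two logical qubits
print(majority_vote_decode(spins, chains))       # -> [1, -1]
```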
Crammer, Koby; Singer, Yoram
2005-01-01
We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer between 1 and k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
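A PRank-style online rank predictor, in the spirit of the algorithm family discussed above, can be sketched as a weight vector plus ordered thresholds that are updated only on mistakes. The following Python sketch uses synthetic data and should be read as a generic illustration, not the exact variant analyzed in the paper.

```python
import numpy as np

class OnlineRanker:
    def __init__(self, n_features, k):
        self.w = np.zeros(n_features)
        self.b = np.zeros(k - 1)              # thresholds b_1 <= ... <= b_{k-1}
        self.k = k

    def predict(self, x):
        score = self.w @ x
        above = np.nonzero(score < self.b)[0]
        return int(above[0]) + 1 if above.size else self.k    # ranks are 1..k

    def update(self, x, y):
        score = self.w @ x
        # y_r = +1 if the true rank lies above threshold r, else -1
        yr = np.where(np.arange(1, self.k) < y, 1.0, -1.0)
        tau = np.where((score - self.b) * yr <= 0, yr, 0.0)   # violated thresholds
        self.w += tau.sum() * x
        self.b -= tau

rng = np.random.default_rng(1)
ranker = OnlineRanker(n_features=5, k=4)
for _ in range(200):
    x = rng.standard_normal(5)
    y = int(np.clip(np.digitize(x.sum(), [-1.5, 0, 1.5]) + 1, 1, 4))  # synthetic rank
    ranker.update(x, y)
print(ranker.predict(rng.standard_normal(5)))
```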
Thermodynamically consistent data-driven computational mechanics
NASA Astrophysics Data System (ADS)
González, David; Chinesta, Francisco; Cueto, Elías
2018-05-01
In the paradigm of data-intensive science, automated, unsupervised discovering of governing equations for a given physical phenomenon has attracted a lot of attention in several branches of applied sciences. In this work, we propose a method able to avoid the identification of the constitutive equations of complex systems and rather work in a purely numerical manner by employing experimental data. In sharp contrast to most existing techniques, this method does not rely on the assumption on any particular form for the model (other than some fundamental restrictions placed by classical physics such as the second law of thermodynamics, for instance) nor forces the algorithm to find among a predefined set of operators those whose predictions fit best to the available data. Instead, the method is able to identify both the Hamiltonian (conservative) and dissipative parts of the dynamics while satisfying fundamental laws such as energy conservation or positive production of entropy, for instance. The proposed method is tested against some examples of discrete as well as continuum mechanics, whose accurate results demonstrate the validity of the proposed approach.
Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.
Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
2016-01-01
Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.
Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems
Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
2016-01-01
Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems. PMID:26949383
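The second stage described above amounts to learning a mapping from instance features to the heuristic expected to perform best. The sketch below shows this with a random forest on synthetic features; the feature set, the heuristic labels, and the training data are invented placeholders rather than the paper's.

```python
# Hedged illustration of feature-based heuristic selection for CSP instances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

HEURISTICS = ["dom", "deg", "dom/deg", "wdeg", "impact", "activity"]   # illustrative names

rng = np.random.default_rng(0)
X = rng.random((300, 4))                        # e.g. n_vars, density, tightness, avg domain
best = rng.integers(0, len(HEURISTICS), 300)    # index of the winning heuristic per instance

selector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, best)

def choose_heuristic(features):
    """Pick the heuristic predicted to perform best on a new CSP instance."""
    return HEURISTICS[int(selector.predict(np.atleast_2d(features))[0])]

print(choose_heuristic([0.5, 0.3, 0.6, 0.2]))
```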
Use of partial dissolution techniques in geochemical exploration
Chao, T.T.
1984-01-01
Application of partial dissolution techniques to geochemical exploration has advanced from an early empirical approach to an approach based on sound geochemical principles. This advance assures a prominent future position for the use of these techniques in geochemical exploration for concealed mineral deposits. Partial dissolution techniques are classified as single dissolution or sequential multiple dissolution depending on the number of steps taken in the procedure, or as "nonselective" extraction and as "selective" extraction in terms of the relative specificity of the extraction. The choice of dissolution techniques for use in geochemical exploration is dictated by the geology of the area, the type and degree of weathering, and the expected chemical forms of the ore and of the pathfinding elements. Case histories have illustrated many instances where partial dissolution techniques exhibit advantages over conventional methods of chemical analysis used in geochemical exploration. © 1984.
Yang, Ke-Wu; Zhou, Yajun; Ge, Ying; Zhang, Yuejuan
2017-07-13
We report a UV-Vis method for monitoring the hydrolysis of β-lactam antibiotics inside living bacterial cells. Cell-based studies demonstrated that the hydrolysis of cefazolin was inhibited by three known NDM-1 inhibitors. This approach can be applied to the monitoring of reactions in a complex biological system, for instance in medical testing.
In-context query reformulation for failing SPARQL queries
NASA Astrophysics Data System (ADS)
Viswanathan, Amar; Michaelis, James R.; Cassidy, Taylor; de Mel, Geeth; Hendler, James
2017-05-01
Knowledge bases for decision support systems are growing increasingly complex, through continued advances in data ingest and management approaches. However, humans do not possess the cognitive capabilities to retain a bird's-eye view of such knowledge bases, and may end up issuing unsatisfiable queries to such systems. This work focuses on the implementation of a query reformulation approach for graph-based knowledge bases, specifically designed to support the Resource Description Framework (RDF). The reformulation approach presented is instance- and schema-aware. Thus, in contrast to relaxation techniques found in the state-of-the-art, the presented approach produces in-context query reformulation.
Developing Formal Object-oriented Requirements Specifications: A Model, Tool and Technique.
ERIC Educational Resources Information Center
Jackson, Robert B.; And Others
1995-01-01
Presents a formal object-oriented specification model (OSS) for computer software system development that is supported by a tool that automatically generates a prototype from an object-oriented analysis model (OSA) instance, lets the user examine the prototype, and permits the user to refine the OSA model instance to generate a requirements…
Adaptive Batch Mode Active Learning.
Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman
2015-08-01
Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
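A simple greedy surrogate for batch selection, combining classifier uncertainty with a redundancy penalty, is sketched below. It only conveys the flavor of batch-mode active learning; the paper's adaptive formulations optimize batch size and selection jointly, which this toy example does not.

```python
# Greedy uncertainty-plus-diversity batch selection (illustrative surrogate only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def select_batch(model, X_pool, batch_size, beta=1.0):
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)             # least-confident score
    K = rbf_kernel(X_pool)
    chosen = []
    for _ in range(batch_size):
        if chosen:
            redundancy = K[:, chosen].max(axis=1)     # similarity to the current batch
        else:
            redundancy = np.zeros(len(X_pool))
        gain = uncertainty - beta * redundancy
        gain[chosen] = -np.inf                        # never pick a point twice
        chosen.append(int(np.argmax(gain)))
    return chosen

rng = np.random.default_rng(0)
X_lab, y_lab = rng.standard_normal((30, 2)), rng.integers(0, 2, 30)
X_pool = rng.standard_normal((200, 2))
clf = LogisticRegression().fit(X_lab, y_lab)
print(select_batch(clf, X_pool, batch_size=5))
```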
Real-time transmission of digital video using variable-length coding
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.
1993-01-01
Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
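For reference, a minimal Huffman coder over symbol frequencies is sketched below; it shows the variable-length codeword assignment described above but, of course, none of the buffering, channel coding, or synchronization machinery discussed in the paper.

```python
# Minimal Huffman coder (assumes at least two distinct input symbols).
import heapq
from collections import Counter

def huffman_code(symbols):
    """Map each symbol to a prefix-free bit string based on its frequency."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return heap[0][2]

data = "AAAABBBCCD"               # e.g. differential pixel values along a video line
codebook = huffman_code(data)
encoded = "".join(codebook[s] for s in data)
print(codebook, len(encoded), "bits")
```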
McNabb, Matthew; Cao, Yu; Devlin, Thomas; Baxter, Blaise; Thornton, Albert
2012-01-01
Mechanical Embolus Removal in Cerebral Ischemia (MERCI) has been supported by medical trials as an improved method of treating ischemic stroke past the safe window of time for administering clot-busting drugs, and was released for medical use in 2004. Analyzing real-world data collected from MERCI clinical trials is key to providing insights on the effectiveness of MERCI. Most of the existing data analysis on MERCI results has thus far employed conventional statistical analysis techniques. To the best of our knowledge, advanced data analytics and data mining techniques have not yet been systematically applied. To address this issue, in this thesis we conduct a comprehensive study on employing state-of-the-art machine learning algorithms to generate prediction criteria for the outcome of MERCI patients. Specifically, we investigate the issue of how to choose the most significant attributes of a data set with limited instance examples. We propose a few search algorithms to identify the significant attributes, followed by a thorough performance analysis for each algorithm. Finally, we apply our proposed approach to the real-world, de-identified patient data provided by Erlanger Southeast Regional Stroke Center, Chattanooga, TN. Our experimental results have demonstrated that our proposed approach performs well.
Refactoring a CS0 Course for Engineering Students to Use Active Learning
ERIC Educational Resources Information Center
Lokkila, Erno; Kaila, Erkki; Lindén, Rolf; Laakso, Mikko-Jussi; Sutinen, Erkki
2017-01-01
Purpose: The purpose of this paper was to determine whether applying e-learning material to a course leads to consistently improved student performance. Design/methodology/approach: This paper analyzes grade data from seven instances of the course. The first three instances were performed traditionally. After an intervention, in the form of…
Blind source computer device identification from recorded VoIP calls for forensic investigation.
Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul
2017-03-01
VoIP services provide fertile ground for criminal activity; thus, identifying the transmitting computer device from a recorded VoIP call may help the forensic investigator to reveal useful information. It also proves the authenticity of the call recording submitted to the court as evidence. This paper extended the previous study on the use of recorded VoIP calls for blind source computer device identification. Although the initial results were promising, a theoretical explanation for them is yet to be found. The study suggested computing entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN), provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
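One plausible reading of the entropy-MFCC feature is sketched below: MFCCs are extracted from a recording and the Shannon entropy of each coefficient's distribution across frames is used as the feature vector. The file path, the histogram-based entropy estimate, and the omission of near-silent segmentation are assumptions; the paper's exact feature definition may differ.

```python
import numpy as np
import librosa
from scipy.stats import entropy

def entropy_mfcc(path, n_mfcc=13, n_bins=30):
    """Entropy of each MFCC coefficient's distribution across frames (one reading)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, n_frames)
    feats = []
    for coeff in mfcc:
        hist, _ = np.histogram(coeff, bins=n_bins)
        feats.append(entropy(hist + 1e-12))                  # scipy normalizes the histogram
    return np.array(feats)

# Supervised identification (one feature vector per recording, device id as label):
#   X = np.vstack([entropy_mfcc(p) for p in recording_paths])   # hypothetical paths
#   GaussianNB().fit(X, device_labels)   # or logistic regression / SVM, as in the study
```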
Wilcox, Rand; Carlson, Mike; Azen, Stan; Clark, Florence
2013-03-01
Recently, there have been major advances in statistical techniques for assessing central tendency and measures of association. The practical utility of modern methods has been documented extensively in the statistics literature, but they remain underused and relatively unknown in clinical trials. Our objective was to address this issue. STUDY DESIGN AND PURPOSE: The first purpose was to review common problems associated with standard methodologies (low power, lack of control over type I errors, and incorrect assessments of the strength of the association). The second purpose was to summarize some modern methods that can be used to circumvent such problems. The third purpose was to illustrate the practical utility of modern robust methods using data from the Well Elderly 2 randomized controlled trial. In multiple instances, robust methods uncovered differences among groups and associations among variables that were not detected by classic techniques. In particular, the results demonstrated that details of the nature and strength of the association were sometimes overlooked when using ordinary least squares regression and Pearson correlation. Modern robust methods can make a practical difference in detecting and describing differences between groups and associations between variables. Such procedures should be applied more frequently when analyzing trial-based data. Copyright © 2013 Elsevier Inc. All rights reserved.
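As a small illustration of the kind of robust comparison advocated above, the sketch below contrasts 20% trimmed means of two simulated heavy-tailed groups using a percentile bootstrap. The data and settings are invented and are not the trial's analysis.

```python
# Trimmed-mean comparison with a percentile bootstrap (simulated data).
import numpy as np
from scipy.stats import trim_mean

def bootstrap_trimmed_diff(x, y, prop=0.2, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = [trim_mean(rng.choice(x, x.size, replace=True), prop)
             - trim_mean(rng.choice(y, y.size, replace=True), prop)
             for _ in range(n_boot)]
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return trim_mean(x, prop) - trim_mean(y, prop), (lo, hi)

rng = np.random.default_rng(1)
group_a = rng.standard_t(df=3, size=60) + 0.5     # heavy-tailed outcomes with a shift
group_b = rng.standard_t(df=3, size=60)
est, ci = bootstrap_trimmed_diff(group_a, group_b)
print(f"trimmed-mean difference {est:.2f}, 95% CI {ci}")
```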
Applying knowledge compilation techniques to model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: for instance, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
NASA Astrophysics Data System (ADS)
Mundis, Nathan L.; Mavriplis, Dimitri J.
2017-09-01
The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
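A generic illustration of the solver strategy, preconditioned GMRES through SciPy with a block-Jacobi preconditioner acting per time instance, is sketched below. The operator and the preconditioner are stand-ins; the actual time-spectral Jacobian and the approximate-factorization preconditioner described above are far more involved.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_space, n_time = 200, 8                       # states per time instance, time instances
n = n_space * n_time
A = (sp.random(n, n, density=5.0 / n, random_state=0) + 10 * sp.identity(n)).tocsr()
b = np.ones(n)

# Block-Jacobi preconditioner: invert the diagonal block of each time instance.
blocks = [np.linalg.inv(A[i * n_space:(i + 1) * n_space,
                          i * n_space:(i + 1) * n_space].toarray())
          for i in range(n_time)]

def apply_prec(r):
    out = np.empty_like(r)
    for i, Binv in enumerate(blocks):
        out[i * n_space:(i + 1) * n_space] = Binv @ r[i * n_space:(i + 1) * n_space]
    return out

M = spla.LinearOperator((n, n), matvec=apply_prec)
x, info = spla.gmres(A, b, M=M, restart=50, atol=1e-8)
print("converged" if info == 0 else f"gmres returned info={info}")
```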
Whale song analyses using bioinformatics sequence analysis approaches
NASA Astrophysics Data System (ADS)
Chen, Yian A.; Almeida, Jonas S.; Chou, Lien-Siang
2005-04-01
Animal songs are frequently analyzed using discrete hierarchical units, such as units, themes and songs. Because animal songs and bio-sequences may be understood as analogous, bioinformatics analysis tools (DNA/protein sequence alignment and alignment-free methods) are proposed to quantify the theme similarities of the songs of false killer whales recorded off northeast Taiwan. The eighteen themes with discrete units that were identified in an earlier study [Y. A. Chen, master's thesis, University of Charleston, 2001] were compared quantitatively using several distance metrics. These metrics included the scores calculated using the Smith-Waterman algorithm with the repeated procedure, the standardized Euclidean distance, and the angle metrics based on word frequencies. The theme classifications based on different metrics were summarized and compared in dendrograms using cluster analyses. The results agree qualitatively with earlier classifications derived by human observation. These methods further quantify the similarities among themes. These methods could be applied to the analyses of other animal songs on a larger scale. For instance, these techniques could be used to investigate song evolution and cultural transmission by quantifying the dissimilarities of humpback whale songs across different seasons, years, populations, and geographic regions. [Work supported by SC Sea Grant, and Ilan County Government, Taiwan.]
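An alignment-free comparison in this spirit can be sketched by representing each theme as an n-gram (word) frequency vector over its discrete units and computing Euclidean and angle distances, as below. The unit strings are invented, and the standardization used in the study is omitted.

```python
# Alignment-free word-frequency distances between two unit sequences (toy data).
import numpy as np
from collections import Counter

def ngram_profile(units, n=2):
    return Counter(tuple(units[i:i + n]) for i in range(len(units) - n + 1))

def theme_distances(a, b, n=2):
    pa, pb = ngram_profile(a, n), ngram_profile(b, n)
    keys = sorted(set(pa) | set(pb))
    va = np.array([pa.get(k, 0) for k in keys], dtype=float)
    vb = np.array([pb.get(k, 0) for k in keys], dtype=float)
    euclid = np.linalg.norm(va - vb)
    cos = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
    angle = np.arccos(np.clip(cos, -1.0, 1.0))
    return euclid, angle

theme1 = list("ABABCCAB")          # song units labelled A, B, C, ...
theme2 = list("ABABCABB")
print(theme_distances(theme1, theme2))
```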
Towards Effective Clustering Techniques for the Analysis of Electric Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh
2013-11-30
Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.
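A generic spectral-clustering sketch on a toy graph is given below to illustrate the family of techniques compared in the study; the electrical structure and time-series information that the paper exploits are not modeled here.

```python
# Spectral clustering via the normalized graph Laplacian (toy topology).
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clusters(adjacency, k):
    """Cluster graph nodes using the k smallest eigenvectors of the normalized Laplacian."""
    d = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt
    _, vecs = eigh(L, subset_by_index=[0, k - 1])
    rows = vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(rows)

# Two loosely connected 4-node rings as a toy "grid" topology.
A = np.zeros((8, 8))
for ring in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in range(4):
        a, b = ring[i], ring[(i + 1) % 4]
        A[a, b] = A[b, a] = 1
A[3, 4] = A[4, 3] = 0.1                         # weak tie between the two areas
print(spectral_clusters(A, k=2))
```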
INFORMS Section on Location Analysis Dissertation Award Submission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waddell, Lucas
This research effort can be summarized by two main thrusts, each of which has a chapter of the dissertation dedicated to it. First, I pose a novel polyhedral approach for identifying polynomially solvable instances of the QAP based on an application of the reformulation-linearization technique (RLT), a general procedure for constructing mixed 0-1 linear reformulations of 0-1 programs. The feasible region of the continuous relaxation of the level-1 RLT form is a polytope having a highly specialized structure. Every binary solution to the QAP is associated with an extreme point of this polytope, and the objective function value is preserved at each such point. However, there exist extreme points that do not correspond to binary solutions. The key insight is a previously unnoticed and unexpected relationship between the polyhedral structure of the continuous relaxation of the level-1 RLT representation and various classes of readily solvable instances. Specifically, we show that a variety of apparently unrelated solvable cases of the QAP can all be categorized in the following sense: each such case has an objective function which ensures that an optimal solution to the continuous relaxation of the level-1 RLT form occurs at a binary extreme point. Interestingly, there exist instances that are solvable by the level-1 RLT form which do not satisfy the conditions of these cases, so that the level-1 form theoretically identifies a richer family of solvable instances. Second, I focus on instances of the QAP known in the literature as linearizable. An instance of the QAP is defined to be linearizable if and only if the problem can be equivalently written as a linear assignment problem that preserves the objective function value at all feasible solutions. I provide an entirely new polyhedral-based perspective on the concept of linearizable by showing that an instance of the QAP is linearizable if and only if a relaxed version of the continuous relaxation of the level-1 RLT form is bounded. We also show that the level-1 RLT form can identify a richer family of solvable instances than those deemed linearizable by demonstrating that the continuous relaxation of the level-1 RLT form can have an optimal binary solution for instances that are not linearizable. As a byproduct, I use this theoretical framework to explicitly, in closed form, characterize the dimensions of the level-1 RLT form and various other problem relaxations.
Towards a new procreation ethic: the exemplary instance of cleft lip and palate.
Le Dref, Gaëlle; Grollemund, Bruno; Danion-Grilliat, Anne; Weber, Jean-Christophe
2013-08-01
The improvement of ultrasound scan techniques is enabling ever earlier prenatal diagnosis of developmental anomalies. In France, apart from cases where the mother's life is endangered, the detection of "particularly serious" conditions, and conditions that are "incurable at the time of diagnosis" are the only instances in which a therapeutic abortion can be performed, this applying up to the 9th month of pregnancy. Thus numerous conditions, despite the fact that they cause distress or pain or are socially disabling, do not qualify for therapeutic abortion, despite sometimes pressing demands from parents aware of the difficulties in store for their child and themselves, in a society that is not very favourable towards the integration and self-fulfilment of people with a disability. Cleft lip and palate (CLP), although it can be completely treated, is one of the conditions that considerably complicates the lives of child and parents. Nevertheless, the recent scope for making very early diagnosis of CLP, before the deadline for legal voluntary abortion, has not led to any wave of abortions. CLP in France has the benefit of an exceptional care plan, targeting both the health and the integration of the individuals affected. This article sets out, via the emblematic instance of CLP, to show how present fears of an emerging "domestic" or liberal eugenic trend could become redundant if disability is addressed politically and medically, so that individuals with a disability have the same social rights as any other citizen.
The Role of Lattice Matching Techniques in the Characterization of Polymorphic Forms.
Mighell, Alan D
2011-01-01
An inspection of the recent literature reveals that polymorphism is a frequently encountered phenomenon. The recognition of polymorphic forms plays a vital role in the materials sciences because such structures are characterized by different crystal packing and accordingly have different physical properties. In the pharmaceutical industry, recognition of polymorphic forms can be critical, for in certain cases a polymorphic form of a drug may be an ineffective therapeutic agent due to its unfavorable physical properties. A check of the recent literature has revealed that in some cases new polymorphic forms are not recognized. In other instances, a supposedly new polymorphic form is actually the result of an incorrect structure determination. Fortunately, lattice-matching techniques, which have proved invaluable in the identification and characterization of crystal structures, represent a powerful tool for analyzing polymorphic forms. These lattice-matching methods are based on either of two strategies: (a) the reduced cell strategy, the matching of reduced cells of the respective lattices, or (b) the matrix strategy, the determination of a matrix or matrices relating the two lattices coupled with an analysis of the matrix elements. Herein, these techniques are applied to three typical cases: (a) the identification of a new polymorphic form, (b) the demonstration that a substance may not be a new polymorphic form due to missed symmetry, and (c) the evaluation of pseudo polymorphism because of a missed lattice. To identify new polymorphic forms and to prevent errors, it is recommended that these lattice-matching techniques become an integral part of the editorial review process of crystallography journals.
Engineering electromagnetic metamaterials and methanol fuel cells
NASA Astrophysics Data System (ADS)
Yen, Tajen
2005-07-01
Electromagnetic metamaterials are artificial structures whose feature dimensions are subwavelength. Because of their collective response to applied fields, they can exhibit unprecedented properties, for instance artificial magnetism at terahertz frequencies and beyond, negative magnetic response, and artificial plasma below ultraviolet and visible frequencies. Our goal is to engineer these novel properties in the frequency regions of interest and to optimize their performance. To fulfill this task, we developed dedicated micro/nano fabrication techniques to construct magnetic metamaterials (i.e., split-ring resonators and L-shaped resonators) and electric metamaterials (i.e., plasmonic wires), and also employed the Taguchi method to study the optimal design of electromagnetic metamaterials. Moreover, by integrating magnetic and electric metamaterials, we pursue the fabrication of so-called negative-index media: the Holy Grail that not only reverses conventional optical rules such as Snell's law, the Doppler shift, and Cerenkov radiation, but also overcomes the diffraction limit to realize the superlensing effect. In addition to electromagnetic metamaterials, in this dissertation we also successfully miniaturize silicon-based methanol fuel cells by means of micro-electro-mechanical-system techniques, which promise to provide an integrated micro power source with excellent performance. The demonstrated power density and energy density are among the highest reported. Finally, based on the results for metamaterials and micro fuel cells, we aim to supply building blocks for an omnipotent device: a system with sensing, communication, computing, power, control, and actuation functions.
Developing VISO: Vaccine Information Statement Ontology for patient education.
Amith, Muhammad; Gong, Yang; Cunningham, Rachel; Boom, Julie; Tao, Cui
2015-01-01
To construct a comprehensive vaccine information ontology that can support personal health information applications using a patient-consumer lexicon, and lead to outcomes that can improve patient education. The authors composed the Vaccine Information Statement Ontology (VISO) using the web ontology language (OWL). We started with 6 Vaccine Information Statement (VIS) documents collected from the Centers for Disease Control and Prevention (CDC) website. Important and relevant selections from the documents were recorded, and knowledge triples were derived. Based on the collection of knowledge triples, the meta-level formalization of the vaccine information domain was developed. Relevant instances and their relationships were created to represent vaccine domain knowledge. The initial iteration of the VISO was realized, based on the 6 Vaccine Information Statements and coded into OWL2 with Protégé. The ontology consisted of 132 concepts (classes and subclasses) with 33 types of relationships between the concepts. The classes had a total of 460 instances, with 429 knowledge triples overall. Semiotic-based metric scoring was applied to evaluate the quality of the ontology.
Validating a biometric authentication system: sample size requirements.
Dass, Sarat C; Zhu, Yongfang; Jain, Anil K
2006-12-01
Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.
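A simpler, naive illustration of putting a confidence band around an ROC curve by bootstrap resampling is sketched below with simulated match scores. Unlike the copula-based method described above, this resampling ignores the correlation between multiple acquisitions from the same subject.

```python
# Naive bootstrap confidence band for an ROC curve (simulated match scores).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 300)           # match scores, same subject
impostor = rng.normal(0.0, 1.0, 1000)         # match scores, different subjects
scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr_grid = np.linspace(0, 1, 101)
tprs = []
for _ in range(500):
    idx = rng.integers(0, len(scores), len(scores))      # resample score/label pairs
    fpr, tpr, _ = roc_curve(labels[idx], scores[idx])
    tprs.append(np.interp(fpr_grid, fpr, tpr))
lower, upper = np.percentile(tprs, [2.5, 97.5], axis=0)
print("TPR 95% band at FPR=0.01:", lower[1], upper[1])
```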
Wang, Xibin; Luo, Fengji; Qian, Ying; Ranzi, Gianluca
2016-01-01
With the rapid development of ICT and Web technologies, a large amount of information is becoming available and this is producing, in some instances, a condition of information overload. Under these conditions, it is difficult for a person to locate and access useful information for making decisions. To address this problem, there are information filtering systems, such as the personalized recommendation system (PRS) considered in this paper, that assist a person in identifying possible products or services of interest based on his/her preferences. Among available approaches, collaborative filtering (CF) is one of the most widely used recommendation techniques. However, CF has some limitations, e.g., the relatively simple similarity calculation, the cold-start problem, etc. In this context, this paper presents a new regression model based on the support vector machine (SVM) classification and an improved PSO (IPSO) for the development of an electronic movie PRS. In its implementation, an SVM classification model is first established to obtain a preliminary movie recommendation list, based on which an SVM regression model is applied to predict movies' ratings. The proposed PRS not only considers the movie's content information but also integrates the users' demographic and behavioral information to better capture the users' interests and preferences. The efficiency of the proposed method is verified by a series of experiments based on the MovieLens benchmark data set. PMID:27898691
Wang, Xibin; Luo, Fengji; Qian, Ying; Ranzi, Gianluca
2016-01-01
With the rapid development of ICT and Web technologies, a large amount of information is becoming available and this is producing, in some instances, a condition of information overload. Under these conditions, it is difficult for a person to locate and access useful information for making decisions. To address this problem, there are information filtering systems, such as the personalized recommendation system (PRS) considered in this paper, that assist a person in identifying possible products or services of interest based on his/her preferences. Among available approaches, collaborative filtering (CF) is one of the most widely used recommendation techniques. However, CF has some limitations, e.g., the relatively simple similarity calculation, the cold-start problem, etc. In this context, this paper presents a new regression model based on the support vector machine (SVM) classification and an improved PSO (IPSO) for the development of an electronic movie PRS. In its implementation, an SVM classification model is first established to obtain a preliminary movie recommendation list, based on which an SVM regression model is applied to predict movies' ratings. The proposed PRS not only considers the movie's content information but also integrates the users' demographic and behavioral information to better capture the users' interests and preferences. The efficiency of the proposed method is verified by a series of experiments based on the MovieLens benchmark data set.
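The two-stage idea described above can be sketched as an SVM classifier that shortlists likely-liked movies followed by an SVM regressor that predicts ratings for the shortlist. The features and data below are synthetic stand-ins for the content, demographic and behavioral features used in the paper, and the IPSO hyper-parameter search is omitted.

```python
# Two-stage SVM sketch: classification shortlist, then rating regression.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                  # user/movie feature vectors
ratings = 1 + 4 * X[:, :3].mean(axis=1) + 0.3 * rng.standard_normal(500)
liked = (ratings >= 3.5).astype(int)

clf = SVC(kernel="rbf", C=1.0).fit(X, liked)                            # stage 1: shortlist
reg = SVR(kernel="rbf", C=1.0).fit(X[liked == 1], ratings[liked == 1])  # stage 2: ratings

X_new = rng.random((10, 6))
shortlist = np.nonzero(clf.predict(X_new) == 1)[0]
predicted = reg.predict(X_new[shortlist]) if shortlist.size else []
print(shortlist, np.round(predicted, 2))
```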
Selective 4D modelling framework for spatial-temporal land information management system
NASA Astrophysics Data System (ADS)
Doulamis, Anastasios; Soile, Sofia; Doulamis, Nikolaos; Chrisouli, Christina; Grammalidis, Nikos; Dimitropoulos, Kosmas; Manesis, Charalambos; Potsiou, Chryssy; Ioannidis, Charalabos
2015-06-01
This paper introduces a predictive (selective) 4D modelling framework where only the spatial 3D differences are modelled at the forthcoming time instances, while regions of no significant spatial-temporal alteration remain intact. To accomplish this, a spatial-temporal analysis is first applied between 3D digital models captured at different time instances, creating dynamic change history maps. Change history maps indicate the spatial probability that regions will need further 3D modelling at forthcoming instances; they therefore support a predictive assessment, that is, the localization of surfaces within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 4D Land Information Management System (LIMS) is implemented using open interoperable standards based on the CityGML framework. CityGML allows the description of the semantic metadata information and the rights of the land resources. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 4D LIMS digital parcels and the respective semantic information. The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics. The framework is applied to detect changes through time in a 3D block of plots in an urban area of Athens, Greece. Starting with an accurate 3D model of the buildings in 1983, a change history map is created using automated dense image matching on aerial photos of 2010. For both time instances, meshes are created and the changes are detected through their comparison.
NASA Astrophysics Data System (ADS)
Shinzawa, Hideyuki; Mizukado, Junji
2018-04-01
A rheo-optical characterization technique based on the combination of a near-infrared (NIR) spectrometer and a tensile testing machine is presented here. In rheo-optical NIR spectroscopy, tensile deformations are applied to polymers to induce displacement of ordered or disordered molecular chains. The molecular-level variation of the sample occurring on short time scales is readily captured as a form of strain-dependent NIR spectra by taking advantage of an acousto-optic tunable filter (AOTF) fitted to the NIR spectrometer. In addition, the utilization of NIR, with its much less intense absorption, makes it possible to measure transmittance spectra of the relatively thick samples which are often required for conventional tensile testing. An illustrative example of the rheo-optical technique is given with annealed and quenched Nylon 6 samples to show how this technique can be utilized to derive more penetrating insight even from seemingly simple polymers. The analysis of the sets of strain-dependent NIR spectra suggests the presence of polymer structures undergoing different variations during the tensile elongation. For instance, the tensile deformation of the semi-crystalline Nylon 6 involves a separate step of elongation of the rubbery amorphous chains and subsequent disintegration of the rigid crystalline structure. An excess of crystalline phase in Nylon 6, however, results in the retardation of the elastic deformation mainly achieved by the amorphous structure, which eventually leads to the simultaneous orientation of both amorphous and crystalline structures.
Network-based high level data classification.
Silva, Thiago Christiano; Zhao, Liang
2012-06-01
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning and it has facility in identifying patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the class configuration's complexity increases, such as the mixture among different classes, a larger portion of the high level term is required to get correct classification. This feature confirms that the high level classification has a special importance in complex situations of classification. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it supplies an improvement in the overall pattern recognition rate.
Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations
NASA Technical Reports Server (NTRS)
Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.
2006-01-01
In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.
Rapid Discovery of Tribological Materials with Improved Performance Using Materials Informatics
2014-03-10
of New Solid State Lubricants The recursive partitioning model illustrated in Fig. 3 has been applied to about 500 compounds from the FileMakerPro...neighboring cation. Based on this assumption, the large cationic charge of mineral compounds indicates the number of anions tends to be larger than the...The formation of bond types is highly dependent on the difference of electronegativity (EN) between the two elements in the compound. For instance
NASA Astrophysics Data System (ADS)
Ródenas, José
2017-11-01
All materials exposed to a neutron flux can be activated, independently of the kind of neutron source. In this study, a nuclear reactor has been considered as the neutron source. In particular, the activation of control rods in a BWR is studied to obtain the doses produced around the storage pool for irradiated fuel of the plant when control rods are withdrawn from the reactor and placed into this pool. It is very important to calculate these doses because they can affect plant workers in the area. The MCNP code, based on the Monte Carlo method, has been applied to simulate the activation reactions produced in the control rods inserted into the reactor. The obtained activities are introduced as input into another MC model to estimate the doses they produce. The comparison of simulation results with experimental measurements allows the validation of the developed models. The developed MC models have also been applied to simulate the activation of other materials, such as components of a stainless steel sample introduced into a training reactor. These models, once validated, can be applied to other situations and materials where a neutron flux can be found, not only nuclear reactors: for instance, activation analysis with an Am-Be source, neutrography techniques in both medical applications and non-destructive analysis of materials, civil engineering applications using a Troxler gauge, analysis of materials in the decommissioning of nuclear power plants, etc.
Unsupervised learning of structure in spectroscopic cubes
NASA Astrophysics Data System (ADS)
Araya, M.; Mendoza, M.; Solar, M.; Mardones, D.; Bayo, A.
2018-07-01
We consider the problem of analyzing the structure of spectroscopic cubes using unsupervised machine learning techniques. We propose representing the target's signal as a homogeneous set of volumes through an iterative algorithm that separates the structured emission from the background while not overestimating the flux. Besides verifying some basic theoretical properties, the algorithm is designed to be tuned by domain experts, because its parameters have meaningful values in the astronomical context. Nevertheless, we propose a heuristic to automatically estimate the signal-to-noise ratio parameter of the algorithm directly from data. The resulting lightweight set of samples (≤ 1% of the original data) offers several advantages. For instance, it is statistically correct and computationally inexpensive to apply well-established techniques of the pattern recognition and machine learning domains, such as clustering and dimensionality reduction algorithms. We use ALMA science verification data to validate our method, and present examples of the operations that can be performed by using the proposed representation. Even though this approach is focused on providing faster and better analysis tools for the end-user astronomer, it also opens the possibility of content-aware data discovery by applying our algorithm to big data.
Applying Aerodynamics Inspired Organizational Dynamic Fit Model Disaster Relief Endeavors
2010-12-01
gusts, and a dynamically stable organization returns quickly to its intended profit level, for instance, after deviation by changed consumer preferences. Hence ... dynamic stability limits the level, for instance, by changed consumer preferences. Hence static stability limits initial performance ... Maneuverability: quickness of a controlled system's planned change from one trajectory to another; quickness of planned
NASA Astrophysics Data System (ADS)
Govoni, Marco; Galli, Giulia
Green's function based many-body perturbation theory (MBPT) methods are well-established approaches to compute quasiparticle energies and electronic lifetimes. However, their application to large systems - for instance heterogeneous systems, nanostructured, disordered, and defective materials - has been hindered by high computational costs. We will discuss recent MBPT methodological developments leading to an efficient formulation of electron-electron and electron-phonon interactions that can be applied to systems with thousands of electrons. Results using a formulation that does not require the explicit calculation of virtual states, nor the storage and inversion of large dielectric matrices, will be presented. We will discuss data collections obtained using the WEST code, the advantages of the algorithms used in WEST over standard techniques, and the parallel performance. Work done in collaboration with I. Hamada, R. McAvoy, P. Scherpelz, and H. Zheng. This work was supported by MICCoM, as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and by ANL.
Microfluidic platform for optimization of crystallization conditions
NASA Astrophysics Data System (ADS)
Zhang, Shuheng; Gerard, Charline J. J.; Ikni, Aziza; Ferry, Gilles; Vuillard, Laurent M.; Boutin, Jean A.; Ferte, Nathalie; Grossier, Romain; Candoni, Nadine; Veesler, Stéphane
2017-08-01
We describe a universal, high-throughput droplet-based microfluidic platform for crystallization. It is suitable for a multitude of applications, due to its flexibility, ease of use, compatibility with all solvents and low cost. The platform offers four modular functions: droplet formation, on-line characterization, incubation and observation. We use it to generate droplet arrays with a concentration gradient in continuous long tubing, without using surfactant. We control droplet properties (size, frequency and spacing) in long tubing by using hydrodynamic empirical relations. We measure droplet chemical composition using both an off-line and a real-time on-line method. Applying this platform to a complicated chemical environment, membrane proteins, we successfully handle crystallization, suggesting that the platform is likely to perform well in other circumstances. We validate the platform for fine-gradient screening and optimization of crystallization conditions. Additional on-line detection methods may well be integrated into this platform in the future, for instance, an on-line diffraction technique. We believe this method could find applications in fields such as fluid interaction engineering, live cell study and enzyme kinetics.
Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.
Kilic, Niyazi; Hosgormez, Erkan
2016-03-01
Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed including different physical parameters, and they were fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting, and random subspace (RSM) were used. Instance based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, osteoporosis, osteopenia, and control (healthy), using the ensemble classifiers. Total classification accuracy and F-measure were also used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with the combination of model 6 (five BMD + five T-score values) and the RSM-RF classifier. The findings of this paper suggest that patients can be warned before a bone fracture occurs by examining only physical parameters that can easily be measured without invasive operations.
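A hedged sketch of this kind of ensemble set-up using scikit-learn, where the random subspace method is approximated by a BaggingClassifier drawing feature subsets, kNN stands in for IBk, and the ten-feature synthetic data are placeholders for the BMD/T-score measurements:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))           # stand-in for five BMD + five T-score values
y = rng.integers(0, 3, size=300)         # osteoporosis / osteopenia / control labels

models = {
    # Random-subspace-style ensembles: train each member on a random feature subset.
    "RSM-kNN": BaggingClassifier(KNeighborsClassifier(), n_estimators=50,
                                 max_features=0.5, bootstrap=False,
                                 bootstrap_features=True, random_state=0),
    "RSM-RF": BaggingClassifier(RandomForestClassifier(n_estimators=20, random_state=0),
                                n_estimators=10, max_features=0.5,
                                bootstrap=False, bootstrap_features=True, random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```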
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.
The Trojan Horse method for nuclear astrophysics: Recent results on resonance reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cognata, M. La; Pizzone, R. G.; Spitaleri, C.
Nuclear astrophysics aims to measure nuclear-reaction cross sections of astrophysical interest to be included into models of stellar evolution and nucleosynthesis. Low energies, < 1 MeV or even < 10 keV, are required, since this is the window where these processes are most effective. Two effects have prevented a satisfactory knowledge of the relevant nuclear processes from being achieved, namely, the Coulomb barrier exponentially suppressing the cross section and the presence of atomic electrons. These difficulties have triggered theoretical and experimental investigations to extend our knowledge down to astrophysical energies. For instance, indirect techniques such as the Trojan Horse Method have been devised, yielding new cutting-edge results. In particular, I will focus on the application of this indirect method to resonance reactions. Resonances might dramatically enhance the astrophysical S(E)-factor, so, when they occur right at astrophysical energies, their measurement is crucial to pin down the astrophysical scenario. Unknown or unpredicted resonances might introduce large systematic errors in nucleosynthesis models. These considerations apply to low-energy resonances and to sub-threshold resonances as well, as they may produce sizable modifications of the S-factor due to, for instance, destructive interference with another resonance.
Rural and urban Ugandan primary school children's alternative ideas about animals
NASA Astrophysics Data System (ADS)
Otaala, Justine
This study examined rural and urban Ugandan primary school children's alternative ideas about animals through the use of qualitative research methods. Thirty-six children were selected from lower, middle, and upper primary grades in two primary schools (rural and urban). Data were collected using the interview-about-instances technique. Children were shown 18 color photographs of instances and non-instances of familiar animals and asked to say if the photographed objects were animals or not. They were then asked to give reasons to justify their answers. The interviews were audiotaped and transcribed. The results indicate that children tended to apply the label "animal" to large mammals, usually found at home, on the farm, in the zoo, and in the wild. Humans were not categorized as animals, particularly by children in the lower grades. Although the children in upper grades correctly identified humans as animals, they used reasons that were irrelevant to animal attributes and improperly derived from the biological concept of evolution. Many attributes children used to categorize instances of animals were scientifically unacceptable and included superficial features, such as body outline, anatomical features (body parts), external features (visual cues), and the presence or absence and number of appendages. Movement and eating (nutrition) were the most popular attributes children used to identify instances of animals. The main differences in children's ideas emanated from the reasons used to identify animals. Older rural children drew upon their cultural and traditional practices more often than urban children. Anthropomorphic thinking was predominant among younger children in both settings, but diminished with progression in children's grade levels. Some of the implications of this study are: (1) teachers, teacher educators, and curriculum developers should consider learners' ideas in planning and developing teaching materials and interventions. (2) Teachers should relate humans to other animals during instruction. (3) Textbooks and teaching materials need careful scrutiny to ensure they include humans and other small animals as part of the animal kingdom. (4) Teaching interventions should begin with the basic attributes of animals and ensure children understand the relationship between the attributes and concepts. (5) The use of examples and non-examples of the concept "animal" should be encouraged during instruction.
Emerging Chitosan-Based Films for Food Packaging Applications.
Wang, Hongxia; Qian, Jun; Ding, Fuyuan
2018-01-17
Recent years have witnessed great developments in biobased polymer packaging films, driven by the serious environmental problems caused by petroleum-based nonbiodegradable packaging materials. Chitosan is one of the most abundant biopolymers after cellulose. Chitosan-based materials have been widely applied in various fields for their biological and physical properties of biocompatibility, biodegradability, antimicrobial ability, and easy film-forming ability. Different chitosan-based films have been fabricated and applied in the field of food packaging. Most of the review papers related to chitosan-based films focus on antibacterial food packaging films. Along with the advances in nanotechnology and polymer science, numerous strategies, for instance direct casting, coating, dipping, layer-by-layer assembly, and extrusion, have been employed to prepare chitosan-based films with multiple functionalities. The emerging food packaging applications of chitosan-based films as antibacterial films, barrier films, and sensing films have advanced considerably. This article comprehensively reviews recent advances in the preparation and application of engineered chitosan-based films in food packaging.
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we first generate two non-parametric tiled codes with different fixed tile sizes but with the same code structure, and then derive a general affine model that describes all integer factors available in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers of the model for each integer factor appearing at the same position in the fixed tiled code, and replace the expressions containing integer factors with expressions containing parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
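For reference, a plain (untiled, sequential) implementation of Nussinov's recurrence, whose triply nested affine loops form the iteration space that the tiling described above targets; the minimum hairpin-loop length and the traceback are omitted for brevity:

```python
def can_pair(a, b):
    """Watson-Crick pairs plus the G-U wobble pair."""
    return {a, b} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def nussinov(seq):
    n = len(seq)
    S = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # subsequence length minus one
        for i in range(n - span):
            j = i + span
            best = S[i + 1][j]               # base i left unpaired
            if can_pair(seq[i], seq[j]):
                best = max(best, S[i + 1][j - 1] + 1)       # i pairs with j
            for k in range(i + 1, j):        # bifurcation into two sub-structures
                best = max(best, S[i][k] + S[k + 1][j])
            S[i][j] = best
    return S[0][n - 1]

print(nussinov("GGGAAAUCC"))                 # maximum number of base pairs
```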
NASA Astrophysics Data System (ADS)
Muratov, V. G.; Lopatkin, A. V.
An important aspect in the verification of the engineering techniques used in the safety analysis of MOX-fuelled reactors is the preparation of test calculations to determine nuclide composition variations under irradiation and the analysis of burnup problem errors resulting from various factors, such as, for instance, the effect of nuclear data uncertainties on nuclide concentration calculations. So far, no universally recognized tests have been devised. A calculation technique has been developed for solving the problem using up-to-date calculation tools and the latest versions of nuclear data libraries. Initially, in 1997, a code was drawn up in an effort under ISTC Project No. 116 to calculate the burnup in one VVER-1000 fuel rod, using the MCNP code. Later on, the authors developed a computation technique which allows calculating fuel burnup in models of a fuel rod, a fuel assembly, or the whole reactor. It became possible to apply it to fuel burnup in all types of nuclear reactors and subcritical blankets.
Measurement of Workload: Physics, Psychophysics, and Metaphysics
NASA Technical Reports Server (NTRS)
Gopher, D.
1984-01-01
The present paper reviews the results of two experiments in which workload analysis was conducted based upon performance measures, brain evoked potentials and magnitude estimations of subjective load. The three types of measures were jointly applied to the description of the behavior of subjects in a wide battery of experimental tasks. Data analysis shows both instances of association and dissociation between types of measures. A general conceptual framework and methodological guidelines are proposed to account for these findings.
NASA Astrophysics Data System (ADS)
Cicone, A.; Zhou, H.; Piersanti, M.; Materassi, M.; Spogli, L.
2017-12-01
Nonlinear and nonstationary signals are ubiquitous in real life. Their decomposition and analysis is of crucial importance in many research fields. Traditional techniques, like the Fourier and wavelet transforms, have proved to be limited in this context. In the last two decades, new kinds of nonlinear methods have been developed which are able to unravel hidden features of these kinds of signals. In this poster we present a new method, called Adaptive Local Iterative Filtering (ALIF). This technique, originally developed to study one-dimensional signals, unlike any other algorithm proposed so far, can be easily generalized to study two- or higher-dimensional signals. Furthermore, unlike most of the similar methods, it does not require any a priori assumption on the signal itself, so that the technique can be applied as is to any kind of signal. Applications of the ALIF algorithm to real-life signal analysis will be presented, for instance, the behavior of the water level near the coastline in the presence of a tsunami, the length-of-day signal, pressure measured at ground level on a global grid, and radio power scintillation from GNSS signals.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents Artificial Intelligence (AI)-based modeling of total bed material load aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for the development and validation of the applied techniques. In order to assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied in the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also revealed that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
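A brief sketch of the k-fold testing idea mentioned above, applied to a generic regressor as a stand-in for the GEP/ANFIS models (which are not scikit-learn estimators); the synthetic predictors and target are placeholders for the sediment records:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))                       # e.g. discharge, velocity, depth
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)

# Every record serves once as test data, so the whole dataset is "scanned".
errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
print("RMSE per fold:", np.round(np.sqrt(errors), 4))
```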
An algorithm for the optimal collection of wet waste.
Laureri, Federica; Minciardi, Riccardo; Robba, Michela
2016-02-01
This work concerns the development of an approach for planning wet waste (food waste and other) collection at a metropolitan scale. Some specific modeling features distinguish this waste collection problem from others. For instance, there may be significant differences in the values of the parameters (such as weight and volume) characterizing the various collection points. As in classical waste collection planning, even in the case of wet waste one has to deal with difficult combinatorial problems, where the determination of an optimal solution may require a very large computational effort for problem instances of considerable dimensionality. For this reason, in this work a heuristic procedure for the optimal planning of wet waste collection is developed and applied to problem instances drawn from a real case study. The performance obtained by applying such a procedure is evaluated by comparison with that obtainable via a general-purpose mathematical programming software package, as well as with that obtained by applying very simple decision rules commonly used in practice. The considered case study consists of an area corresponding to the historical center of the Municipality of Genoa. Copyright © 2015 Elsevier Ltd. All rights reserved.
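A toy illustration of the kind of simple, capacity-limited nearest-neighbour decision rule that such heuristic planning procedures are typically compared against; the coordinates, weights, and capacity below are made-up placeholders, not data from the Genoa case study:

```python
import math

def nn_routes(depot, points, weights, capacity):
    """Greedily build routes: always visit the nearest unserved point that still
    fits in the vehicle; return to the depot and start a new route otherwise."""
    assert all(weights[p] <= capacity for p in points), "a point exceeds vehicle capacity"
    unserved = set(points)
    routes = []
    while unserved:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [p for p in unserved if load + weights[p] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda p: math.dist(pos, p))
            route.append(nxt)
            load += weights[nxt]
            unserved.remove(nxt)
            pos = nxt
        routes.append(route)
    return routes

depot = (0.0, 0.0)
pts = [(1, 2), (2, 1), (5, 5), (6, 4), (1, 6)]
w = {p: 1.0 for p in pts}                         # e.g. tonnes of wet waste per point
print(nn_routes(depot, pts, w, capacity=3.0))     # routes of at most three points each
```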
NASA Astrophysics Data System (ADS)
Vidya Sagar, R.; Raghu Prasad, B. K.
2012-03-01
This article presents a review of recent developments in parameter-based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers, including various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parameter-based AE techniques for concrete structures carried out over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published, and considerable attention has been given to those publications. Some recent studies, such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams, have also been discussed. The formation of the fracture process zone and the AE energy released during the fracture process in concrete beam specimens have been summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parameter-based AE techniques applied to concrete structures may help researchers and engineers to better understand the failure mechanism of concrete and to evolve more useful methods and approaches for the diagnostic inspection of structural elements and the failure prediction/prevention of concrete structures.
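As a worked illustration of the b-value analysis mentioned above, a common convention converts AE peak amplitudes in dB to magnitudes (dB/20) and applies the Aki maximum-likelihood estimator; the synthetic amplitudes and completeness threshold below are assumptions for demonstration only:

```python
import numpy as np

def ae_b_value(amplitudes_db, completeness_db):
    """Gutenberg-Richter style b-value from AE peak amplitudes."""
    mags = np.asarray(amplitudes_db) / 20.0           # AE "magnitude" convention
    mc = completeness_db / 20.0                       # magnitude of completeness
    mags = mags[mags >= mc]
    return np.log10(np.e) / (mags.mean() - mc)        # Aki (1965) maximum-likelihood estimate

rng = np.random.default_rng(2)
amps = 40.0 + rng.exponential(scale=10.0, size=500)   # synthetic peak amplitudes in dB
print(f"b-value ≈ {ae_b_value(amps, completeness_db=40.0):.2f}")
```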
Law, Jodi Woan-Fei; Ab Mutalib, Nurul-Syakima; Chan, Kok-Gan; Lee, Learn-Han
2015-01-01
Listeria monocytogenes is a foodborne pathogen that can cause listeriosis through the consumption of food contaminated with this pathogen. The ability of L. monocytogenes to survive in extreme conditions and cause food contamination has become a major concern. Hence, routine microbiological food testing is necessary to prevent food contamination and outbreaks of foodborne illness. This review provides insight into the methods for cultural detection, enumeration, and molecular identification of L. monocytogenes in various food samples. There are a number of enrichment and plating media that can be used for the isolation of L. monocytogenes from food samples. Enrichment media such as buffered Listeria enrichment broth, Fraser broth, and University of Vermont Medium (UVM) Listeria enrichment broth are recommended by regulatory agencies such as the Food and Drug Administration-bacteriological and analytical method (FDA-BAM), the US Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS), and the International Organization for Standardization (ISO). Many plating media are available for the isolation of L. monocytogenes, for instance, polymyxin acriflavin lithium-chloride ceftazidime aesculin mannitol, Oxford, and other chromogenic media. Besides, reference methods like FDA-BAM, the ISO 11290 method, and the USDA-FSIS method are usually applied for the cultural detection or enumeration of L. monocytogenes. The most probable number technique is applied for the enumeration of L. monocytogenes in the case of low-level contamination. Molecular methods including polymerase chain reaction, multiplex polymerase chain reaction, real-time/quantitative polymerase chain reaction, nucleic acid sequence-based amplification, loop-mediated isothermal amplification, DNA microarray, and next generation sequencing technology for the detection and identification of L. monocytogenes are discussed in this review. Overall, molecular methods are rapid, sensitive, specific, and time- and labor-saving. In the future, there are opportunities for the development of new techniques for the detection and identification of foodborne pathogens with improved features. PMID:26579116
Congenital diaphragmatic hernia (CDH) etiology as revealed by pathway genetics.
Kantarci, Sibel; Donahoe, Patricia K
2007-05-15
Congenital diaphragmatic hernia (CDH) is a common birth defect with high mortality and morbidity. Two hundred seventy CDH patients were ascertained, carefully phenotyped, and classified as isolated (diaphragm defects alone) or complex (with additional anomalies) cases. We established different strategies to reveal CDH-critical chromosome loci and genes in humans. Candidate genes for sequencing analyses were selected from CDH animal models, genetic intervals of recurrent chromosomal aberration in humans, such as 15q26.1-q26.2 or 1q41-q42.12, as well as genes in the retinoic acid and related pathways and those known to be involved in embryonic lung development. For instance, FOG2, GATA4, and COUP-TFII are all needed for both normal diaphragm and lung development and are likely all in the same genetic and molecular pathway. Linkage analysis was applied first in a large inbred family and then in four multiplex families with Donnai-Barrow syndrome (DBS) associated with CDH. 10K SNP chip and microsatellite markers revealed a DBS locus on chromosome 2q23.3-q31.1. We applied array-based comparative genomic hybridization (aCGH) techniques to over 30, mostly complex, CDH patients and found a de novo microdeletion in a patient with Fryns syndrome related to CDH. Fluorescence in situ hybridization (FISH) and multiplex ligation-dependent probe amplification (MLPA) techniques allowed us to further define the deletion interval. Our aim is to identify genetic intervals and, in those, to prioritize genes that might reveal molecular pathways, mutations in any step of which, might contribute to the same phenotype. More important, the elucidation of pathways may ultimately provide clues to treatment strategies. (c) 2007 Wiley-Liss, Inc.
Substrate-driven chemotactic assembly in an enzyme cascade.
Zhao, Xi; Palacci, Henri; Yadav, Vinita; Spiering, Michelle M; Gilson, Michael K; Butler, Peter J; Hess, Henry; Benkovic, Stephen J; Sen, Ayusman
2018-03-01
Enzymatic catalysis is essential to cell survival. In many instances, enzymes that participate in reaction cascades have been shown to assemble into metabolons in response to the presence of the substrate for the first enzyme. However, what triggers metabolon formation has remained an open question. Through a combination of theory and experiments, we show that enzymes in a cascade can assemble via chemotaxis. We apply microfluidic and fluorescent spectroscopy techniques to study the coordinated movement of the first four enzymes of the glycolysis cascade: hexokinase, phosphoglucose isomerase, phosphofructokinase and aldolase. We show that each enzyme independently follows its own specific substrate gradient, which in turn is produced by the preceding enzymatic reaction. Furthermore, we find that the chemotactic assembly of enzymes occurs even under cytosolic crowding conditions.
Corrosive Space Gas Restores Artwork, Promises Myriad Applications
NASA Technical Reports Server (NTRS)
2007-01-01
Atomic oxygen's unique characteristic of oxidizing primarily hydrogen, carbon, and hydrocarbon polymers at surface levels has been applied in the restoration of artwork, the detection of document forgeries, and the removal of bacterial contaminants from surgical implants. The Electro-Physics Branch at Glenn Research Center built on corrosion studies of long-duration coatings for use in space, and applied atomic oxygen's selectivity to instances where elements need to be removed from a surface. Atomic oxygen is able to remove organic compounds high in carbon (mostly soot) from fire-damaged artworks without causing a shift in the paint color. First successfully tested on oil paintings, the team then applied the restoration technique to acrylics, watercolors, and ink. The successful art restoration process was well publicized, and soon a multinational, nonprofit professional organization dedicated to the forensic analysis of documents had successfully applied this process in the field of forgery detection. The gas has biomedical applications as well: atomic oxygen technology can be used to decontaminate orthopedic surgical hip and knee implants prior to surgery, and additional collaborative research between the Cleveland Clinic Foundation and the Glenn team shows that this gas's roughening of surfaces improves cell adhesion, which is important for the development of new drugs.
NASA Astrophysics Data System (ADS)
Secco, Henrique de L.; Ferreira, Fabio F.; Péres, Laura O.
2018-03-01
The combination of materials to form hybrids with unique properties, different from those of the isolated components, is a strategy used to prepare functional materials with improved properties, aiming to allow their application in specific fields. The doping of lanthanum fluoride with other rare earth elements is used to obtain luminescent particles, which may be useful for the manufacturing of electronic device displays and biological markers, for instance. The application of the nanoparticle powder has limitations in some fields; to overcome this, the powder may be incorporated into a suitable polymeric matrix. In this work, lanthanum fluoride nanoparticles, undoped and doped with cerium and europium, were synthesized through the co-precipitation method in aqueous solution. Aiming at the formation of solid-state films, composites of the nanoparticles in an elastomeric matrix, nitrile rubber (NBR), were prepared. The flexibility and the transparency of the matrix in the regions of interest are advantages for the application of the luminescent composites. The composites were applied as films using the casting and spin coating techniques, and luminescent materials were obtained in the samples doped with europium and cerium. Scanning electron microscopy images showed an adequate dispersion of the particles in the matrix for both film formation techniques. Aggregates of the particles were detected in the samples, which may affect the uniformity of the emission of the composites.
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Tek, Hüseyin; Aach, Til
2009-02-01
The segmentation of the hepatic vascular tree in computed tomography (CT) images is important for many applications such as surgical planning of oncological resections and living liver donations. In surgical planning, vessel segmentation is often used as a basis to support the surgeon in deciding on the location of the cut to be performed and the extent of the liver to be removed. We present a novel approach to hepatic vessel segmentation that can be divided into two stages. First, we detect and delineate the core vessel components efficiently with a high specificity. Second, smaller vessel branches are segmented by a robust vessel tracking technique based on a medialness filter response, which starts from the terminal points of the previously segmented vessels. Specifically, in the first phase major vessels are segmented using the globally optimal graph-cuts algorithm in combination with foreground and background seed detection, while the computationally more demanding tracking approach needs to be applied only locally in areas of smaller vessels within the second stage. The method has been evaluated on contrast-enhanced liver CT scans from clinical routine, showing promising results. In addition to the fully automatic instance of this method, the vessel tracking technique can also be used to easily add missing branches/sub-trees to an already existing segmentation result by adding single seed points.
3D printing of nano- and micro-structures
NASA Astrophysics Data System (ADS)
Ramasamy, Mouli; Varadan, Vijay K.
2016-04-01
Additive manufacturing or 3D printing techniques are being vigorously investigated as a replacement for traditional and conventional fabrication methods to bring forth cost- and time-effective approaches. The introduction of 3D printing has led to the printing of micro- and nanoscale structures including tissues and organelles, bioelectric sensors and devices, artificial bones and transplants, microfluidic devices, batteries, and various other biomaterials. Various microfabrication processes have been developed to fabricate micro components and assemblies at lab scale. 3D fabrication processes that can accommodate the functional and geometrical requirements to realize complicated structures are becoming feasible through advances in additive manufacturing. This advancement could lead to simpler development mechanisms for novel components and devices exhibiting complex features. For instance, the development of microstructure electrodes that can penetrate the epidermis of the skin to collect the biopotential signal may prove more effective than electrodes that measure the signal from the skin's surface. The micro- and nanostructures will have to possess extraordinary material and mechanical properties for their versatility in these applications. A substantial amount of research is being pursued on stretchable and flexible devices based on PDMA, textiles, and organic electronics. Despite the numerous advantages these substrates and techniques can offer on their own, 3D printing enables a multi-dimensional approach towards finer and more complex applications. This review emphasizes the use of 3D printing to fabricate micro- and nanostructures that can be applied to human healthcare.
Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A
1998-07-01
To assess a newly developed immunohistochemical detection system, the EnVision++. A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload.
Privacy in Georeferenced Context-Aware Services: A Survey
NASA Astrophysics Data System (ADS)
Riboni, Daniele; Pareschi, Linda; Bettini, Claudio
Location based services (LBS) are a specific instance of a broader class of Internet services that are predicted to become popular in the near future: context-aware services. The privacy concerns that LBS have raised are likely to become even more serious when several types of context data, other than location and time, are sent to service providers as part of an Internet request. This paper provides a classification and a brief survey of the privacy preservation techniques that have been proposed for this type of service. After identifying the benefits and shortcomings of each class of techniques, the paper proposes a combined approach to achieve a more comprehensive solution for privacy preservation in georeferenced context-aware services.
Applications of flow-networks to opinion-dynamics
NASA Astrophysics Data System (ADS)
Tupikina, Liubov; Kurths, Jürgen
2015-04-01
Networks have been successfully applied to describe complex systems such as the brain, the climate, and processes in society. Recently, the socio-physical problem of opinion dynamics has been studied using network techniques. We present a toy model of opinion formation based on the physical model of advection-diffusion. We consider the spreading of an opinion on a fixed subject, assuming that opinion in society is binary: if a person holds the opinion, the state of the corresponding node in the society network equals 1; if the person does not hold the opinion, the state of the node equals 0. Opinion can spread from one person to another if they know each other or, in network terminology, if the nodes are connected. We include an external field in the system governed by the advection-diffusion equation to model effects such as the influence of the media. The assumptions of our model can be formulated as follows: (1) the node states are influenced by the network structure in such a way that opinion can spread only between adjacent nodes (the advective term of the opinion dynamics); (2) the network evolution can follow two scenarios: either the network topology does not change with time, or additional links can appear or disappear at each time step with a fixed probability, which requires adaptive-network properties. Under these assumptions we obtain the system of equations describing our model dynamics, which corresponds well to other socio-physics models, for instance the model of social cohesion and the well-known voter model. We investigate the behavior of the suggested model by studying the "waiting time" of the system, the time to reach the stable state, and the stability of the model regimes for different values of the model parameters and network topologies.
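A hedged sketch of a discrete-time advection-diffusion update on a fixed network in the spirit of the model described above; the particular advection operator, the media-field term, and every parameter value are illustrative assumptions rather than the authors' equations:

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=3)             # fixed-topology scenario
A = nx.to_numpy_array(G)
L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian

rng = np.random.default_rng(3)
n = G.number_of_nodes()
s = (rng.random(n) < 0.1).astype(float)               # initial opinion holders
W = 0.02 * A * rng.random((n, n))                     # asymmetric edge "flow" rates (advection)
D, h, dt = 0.05, 0.01, 1.0                            # diffusion rate, media field, time step

for _ in range(200):
    diffusion = -D * (L @ s)                          # opinion spreads along edges
    advection = W.T @ s - W.sum(axis=1) * s           # inflow minus outflow of opinion
    field = h * (1.0 - s)                             # media pushes non-holders toward 1
    s = np.clip(s + dt * (diffusion + advection + field), 0.0, 1.0)

print("average opinion state after 200 steps:", round(float(s.mean()), 3))
```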
Signature-based store checking buffer
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-06-02
A system and method for optimizing redundant output verification are provided. A hardware-based store fingerprint buffer receives multiple instances of output from multiple instances of computation. The store fingerprint buffer generates a signature from the content included in the multiple instances of output. When a barrier is reached, the store fingerprint buffer uses the signature to verify that the content is error-free.
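A software analogy of the signature-based checking described above: each redundant instance streams its store outputs into a running fingerprint, and at the barrier only the compact signatures are compared. The hash choice and record format are illustrative assumptions, not the patented hardware design:

```python
import hashlib

class StoreFingerprintBuffer:
    """Accumulates a compact signature over a stream of (address, value) stores."""
    def __init__(self):
        self._h = hashlib.sha256()

    def record_store(self, address: int, value: int) -> None:
        self._h.update(address.to_bytes(8, "little"))
        self._h.update(value.to_bytes(8, "little", signed=True))

    def signature(self) -> str:
        return self._h.hexdigest()

def redundant_computation(buf, data):
    # Each redundant instance performs the same work and records its output stores.
    for i, x in enumerate(data):
        buf.record_store(address=0x1000 + 8 * i, value=x * x)

buf_a, buf_b = StoreFingerprintBuffer(), StoreFingerprintBuffer()
redundant_computation(buf_a, range(100))
redundant_computation(buf_b, range(100))
# Barrier: verify the two instances produced identical output with a single comparison.
assert buf_a.signature() == buf_b.signature(), "redundant outputs diverged"
print("outputs verified error-free")
```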
Technique for Calculating Solution Derivatives With Respect to Geometry Parameters in a CFD Code
NASA Technical Reports Server (NTRS)
Mathur, Sanjay
2011-01-01
A solution has been developed to the challenges of computation of derivatives with respect to geometry, which is not straightforward because these are not typically direct inputs to the computational fluid dynamics (CFD) solver. To overcome these issues, a procedure has been devised that can be used without having access to the mesh generator, while still being applicable to all types of meshes. The basic approach is inspired by the mesh motion algorithms used to deform the interior mesh nodes in a smooth manner when the surface nodes, for example, are in a fluid structure interaction problem. The general idea is to model the mesh edges and nodes as constituting a spring-mass system. Changes to boundary node locations are propagated to interior nodes by allowing them to assume their new equilibrium positions, for instance, one where the forces on each node are in balance. The main advantage of the technique is that it is independent of the volumetric mesh generator, and can be applied to structured, unstructured, single- and multi-block meshes. It essentially reduces the problem down to defining the surface mesh node derivatives with respect to the geometry parameters of interest. For analytical geometries, this is quite straightforward. In the more general case, one would need to be able to interrogate the underlying parametric CAD (computer aided design) model and to evaluate the derivatives either analytically, or by a finite difference technique. Because the technique is based on a partial differential equation (PDE), it is applicable not only to forward mode problems (where derivatives of all the output quantities are computed with respect to a single input), but it could also be extended to the adjoint problem, either by using an analytical adjoint of the PDE or a discrete analog.
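A minimal sketch of the spring-analogy propagation step, assuming unit stiffness on every mesh edge so that force balance reduces to a graph-Laplacian solve; the 3x3 patch and prescribed boundary displacement are toy placeholders:

```python
import numpy as np

def propagate_boundary_motion(nodes, edges, boundary_disp):
    """boundary_disp: {node index: prescribed displacement vector}."""
    n = len(nodes)
    L = np.zeros((n, n))
    for i, j in edges:                          # unit-stiffness springs on mesh edges
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    disp = np.zeros_like(nodes, dtype=float)
    bnd = sorted(boundary_disp)
    for i in bnd:
        disp[i] = boundary_disp[i]
    interior = [i for i in range(n) if i not in boundary_disp]
    # Force balance on interior nodes: L_II d_I = -L_IB d_B
    A = L[np.ix_(interior, interior)]
    rhs = -L[np.ix_(interior, bnd)] @ disp[bnd]
    disp[interior] = np.linalg.solve(A, rhs)
    return nodes + disp

# 3x3 structured patch: push the right column 0.5 to the right; the single
# interior node (index 4) relaxes to its new equilibrium position.
nodes = np.array([[x, y] for y in range(3) for x in range(3)], dtype=float)
edges = [(i, i + 1) for i in range(9) if i % 3 != 2] + [(i, i + 3) for i in range(6)]
b_disp = {i: np.zeros(2) for i in (0, 1, 3, 6, 7)}          # fixed boundary nodes
b_disp.update({i: np.array([0.5, 0.0]) for i in (2, 5, 8)}) # moved boundary nodes
print(propagate_boundary_motion(nodes, edges, b_disp))
```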
Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique
NASA Astrophysics Data System (ADS)
Järvinen, S. P.; Berdyugina, S. V.
2010-10-01
Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), in which low signal-to-noise spectral data are improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in the data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which affects the obtained temperature range of the map. This limitation is critical for small unresolved cool spots and is common to all DI techniques. Finally, the LSD method was applied to high signal-to-noise observations of the young active star V889 Her: the maps obtained with and without LSD are found to be consistent. Conclusions: Our new technique provides meaningful information on the temperature distribution on stellar surfaces, which was previously inaccessible in DI with LSD. Our approach can be easily adapted to any other multi-line technique.
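For orientation, a schematic of the classic least-squares deconvolution step (in the style of Donati et al. 1997), in which the spectrum is modelled as a line mask convolved with one common profile recovered by a weighted least-squares solve; the temperature- and limb-angle-dependent local profiles used in the paper go beyond this sketch, and the mask and noise below are synthetic:

```python
import numpy as np

def lsd_profile(residual_spec, line_pix, line_weights, sigma, half_width):
    """Recover the mean profile Z from Y ≈ M Z by weighted least squares."""
    n = len(residual_spec)
    offsets = np.arange(-half_width, half_width + 1)
    M = np.zeros((n, len(offsets)))
    for pix, w in zip(line_pix, line_weights):        # build the (sparse) mask matrix
        for k, d in enumerate(offsets):
            if 0 <= pix + d < n:
                M[pix + d, k] += w
    S2 = np.diag(1.0 / sigma**2)                      # inverse-variance weights
    return np.linalg.solve(M.T @ S2 @ M, M.T @ S2 @ residual_spec), offsets

# Synthetic test: many weak identical lines plus noise, then recover the profile.
rng = np.random.default_rng(4)
n, half = 2000, 10
true = -0.02 * np.exp(-0.5 * (np.arange(-half, half + 1) / 3.0) ** 2)
pix = rng.choice(np.arange(half, n - half), size=60, replace=False)
wts = rng.uniform(0.5, 1.0, size=60)
spec = np.zeros(n)
for p, w in zip(pix, wts):
    spec[p - half:p + half + 1] += w * true
spec += rng.normal(0, 0.01, n)

Z, _ = lsd_profile(spec, pix, wts, sigma=np.full(n, 0.01), half_width=half)
print("recovered profile depth:", Z.min())
```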
Acoustic Emission Detected by Matched Filter Technique in Laboratory Earthquake Experiment
NASA Astrophysics Data System (ADS)
Wang, B.; Hou, J.; Xie, F.; Ren, Y.
2017-12-01
Acoustic emission (AE) in laboratory earthquake experiments is a fundamental measurement for studying earthquake mechanics, for instance to characterize the aseismic, nucleation, and post-seismic phases in stick-slip experiments. Compared to field earthquakes, AEs are generally recorded only when they exceed a threshold, so some weak signals may be missed. Here we conducted an experiment on a 1.1 m × 1.1 m granite block with a 1.5 m fault; 13 receivers with a common sampling rate of 3 MHz were placed on the surface. We adopt continuous recording and a matched filter technique to detect low-SNR signals. We found that there are too many signals around the stick-slip events for manual P-arrival picking, which would be time-consuming. Therefore, we combined the short-term-average to long-term-average ratio (STA/LTA) technique with the autoregressive Akaike information criterion (AR-AIC) technique to pick the arrivals automatically, and found that the accuracy of most of the P arrivals satisfies our demands for locating the signals. Furthermore, we will locate the signals and apply a matched filter technique to detect low-SNR signals, and then examine whether something interesting emerges in the laboratory earthquake experiment. Detailed and updated results will be presented at the meeting.
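A simplified sketch of such an automatic picker, combining an STA/LTA trigger with a variance-based AIC refinement as a stand-in for the full AR-AIC picker; the synthetic AE trace and all window lengths and thresholds are illustrative:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term average signal energy at each sample."""
    e = np.asarray(x, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    idx = np.arange(nlta, len(e) + 1)              # indices where both windows fit
    sta = (c[idx] - c[idx - nsta]) / nsta
    lta = (c[idx] - c[idx - nlta]) / nlta
    ratio = np.zeros(len(e))
    ratio[nlta - 1:] = sta / np.maximum(lta, 1e-12)
    return ratio

def aic_pick(x):
    """Variance-based AIC: its minimum marks the most likely onset sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        aic[k] = (k * np.log(np.var(x[:k]) + 1e-12)
                  + (n - k - 1) * np.log(np.var(x[k:]) + 1e-12))
    return int(np.argmin(aic))

# Synthetic AE trace: white noise followed by a decaying burst starting at sample 3000.
rng = np.random.default_rng(5)
trace = rng.normal(0, 1, 6000)
t = np.arange(3000)
trace[3000:] += 8 * np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 800)

ratio = sta_lta(trace, nsta=50, nlta=500)
trigger = int(np.argmax(ratio > 5.0))              # first sample above the threshold
start = max(trigger - 200, 0)
pick = start + aic_pick(trace[start:trigger + 200])
print(f"STA/LTA trigger at sample {trigger}, AIC-refined P pick at sample {pick}")
```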
Applying Deep Learning in Medical Images: The Case of Bone Age Estimation.
Lee, Jang Hyung; Kim, Kwang Gi
2018-01-01
A diagnostic need often arises to estimate bone age from X-ray images of the hand of a subject during the growth period. Together with measured physical height, such information may be used as an indicator for the height growth prognosis of the subject. We present a way to apply the deep learning technique to medical image analysis using hand bone age estimation as an example. Age estimation was formulated as a regression problem with hand X-ray images as input and estimated age as output. A set of hand X-ray images was used to form a training set with which a regression model was trained. An image preprocessing procedure is described which reduces image variations across data instances that are unrelated to age-wise variation. The use of Caffe, a deep learning tool, is demonstrated. A rather simple deep learning network was adopted and trained for tutorial purposes. A test set distinct from the training set was formed to assess the validity of the approach. The measured mean absolute difference value was 18.9 months, and the concordance correlation coefficient was 0.78. It is shown that the proposed deep learning-based neural network can be used to estimate a subject's age from hand X-ray images, which eliminates the need for tedious atlas look-ups in clinical environments and should improve the time and cost efficiency of the estimation process.
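A schematic of bone-age estimation posed as image regression, written here as a deliberately small PyTorch network (the study itself used Caffe); the image size, architecture, and random stand-in data are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class BoneAgeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):                           # x: (batch, 1, H, W) grayscale radiographs
        return self.regressor(self.features(x)).squeeze(1)   # estimated age

model = BoneAgeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                               # mean absolute error, e.g. in months

# Random stand-ins for preprocessed hand radiographs and their reference ages.
images = torch.randn(8, 1, 128, 128)
ages = torch.rand(8) * 216                          # 0-18 years, expressed in months

for _ in range(5):                                  # tiny illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), ages)
    loss.backward()
    optimizer.step()
print("training MAE (months):", float(loss))
```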
Extended specificity studies of mRNA assays used to infer human organ tissues and body fluids.
van den Berge, Margreet; Sijen, Titia
2017-12-01
Messenger RNA (mRNA) profiling is a technique increasingly applied for the forensic identification of body fluids and skin. More recently, an mRNA-based organ typing assay was developed which allows for the inference of brain, lung, liver, skeletal muscle, heart, kidney, and skin tissue. When applying this organ typing system in forensic casework, the presence of animal, rather than human, tissue is an alternative scenario that may be proposed, for instance that bullets carry cell material from a hunting event. Even though mRNA profiling systems are commonly designed in silico to be primate-specific, physical testing against other animal species is generally limited. In this study, the human specificity of the organ tissue inferring system was assessed against organ tissue RNAs of various animals. The results confirm the human specificity of the system, especially when utilizing interpretation rules that consider multiple markers per cell type. Besides, we cross-tested our organ and body fluid mRNA assays against the target types covered by the other assay. Marker expression in the nontarget organ tissues and body fluids was observed to a limited extent, which emphasizes the importance of taking the case-specific context of the forensic samples into account when deciding which mRNA profiling assay to use and when interpreting results. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Regression without truth with Markov chain Monte-Carlo
NASA Astrophysics Data System (ADS)
Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2017-03-01
Regression without truth (RWT) is a statistical technique for estimating the error model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which is otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution with probability density function known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables the estimation of the joint posterior distribution of the error model parameters, straightforward quantification of the uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand and generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.
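To make the MCMC ingredient concrete, below is a minimal random-walk Metropolis sketch in Python: it only needs the log of an unnormalised target density and returns a chain from which credible intervals can be read off. The toy two-parameter target is an assumption for illustration and is not the actual RWT posterior.

```python
import numpy as np

def metropolis(log_unnorm_density, x0, n_samples, step=0.5, rng=None):
    """Random-walk Metropolis: only the log of the unnormalised target is required."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    logp = log_unnorm_density(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.size)
        logp_prop = log_unnorm_density(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:       # accept/reject step
            x, logp = proposal, logp_prop
        chain[i] = x
    return chain

# Toy target: unnormalised Gaussian posterior over (bias, log-variance) of one method.
def log_target(theta):
    bias, log_var = theta
    return -0.5 * (bias ** 2 / 4.0 + (log_var - 1.0) ** 2)

samples = metropolis(log_target, x0=[0.0, 0.0], n_samples=5000)
ci_low, ci_high = np.percentile(samples[1000:], [5, 95], axis=0)   # 90% credible intervals
print("90% CI per parameter:", list(zip(ci_low, ci_high)))
```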
Predicting clinical outcome of neuroblastoma patients using an integrative network-based approach.
Tranchevent, Léon-Charles; Nazarov, Petr V; Kaoma, Tony; Schmartz, Georges P; Muller, Arnaud; Kim, Sang-Yoon; Rajapakse, Jagath C; Azuaje, Francisco
2018-06-07
One of the main current challenges in computational biology is to make sense of the huge amounts of multidimensional experimental data that are being produced. For instance, large cohorts of patients are often screened using different high-throughput technologies, effectively producing multiple patient-specific molecular profiles for hundreds or thousands of patients. We propose and implement a network-based method that integrates such patient omics data into Patient Similarity Networks. Topological features derived from these networks were then used to predict relevant clinical features. As part of the 2017 CAMDA challenge, we have successfully applied this strategy to a neuroblastoma dataset, consisting of genomic and transcriptomic data. In particular, we observe that models built on our network-based approach perform at least as well as state of the art models. We furthermore explore the effectiveness of various topological features and observe, for instance, that redundant centrality metrics can be combined to build more powerful models. We demonstrate that the networks inferred from omics data contain clinically relevant information and that patient clinical outcomes can be predicted using only network topological data. This article was reviewed by Yang-Yu Liu, Tomislav Smuc and Isabel Nepomuceno.
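The sketch below illustrates the general idea of a Patient Similarity Network pipeline as described above, not the authors' CAMDA implementation: a correlation-based k-nearest-neighbour graph is built over patients, a few node-level topological features are extracted, and a classifier is trained on them. The synthetic data, the choice of k, and the particular centralities are assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
omics = rng.standard_normal((120, 500))     # placeholder: 120 patients x 500 molecular features
outcome = rng.integers(0, 2, 120)           # placeholder binary clinical outcome

# 1. Patient Similarity Network: link each patient to its k most correlated patients.
corr = np.corrcoef(omics)
k = 10
G = nx.Graph()
G.add_nodes_from(range(len(corr)))
for i, row in enumerate(corr):
    for j in np.argsort(row)[::-1][1:k + 1]:          # position 0 is the patient itself
        G.add_edge(i, int(j), weight=float(row[j]))

# 2. Topological features per patient (node).
deg_c = nx.degree_centrality(G)
clo_c = nx.closeness_centrality(G)
bet_c = nx.betweenness_centrality(G)
feats = np.array([[G.degree(n, weight="weight"), deg_c[n], clo_c[n], bet_c[n]] for n in G.nodes])

# 3. Predict the clinical outcome from network topology alone.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, feats, outcome, cv=5).mean())
```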
Urbanová, Petra; Hejna, Petr; Jurda, Mikoláš
2015-05-01
Three-dimensional surface technologies, particularly close range photogrammetry and optical surface scanning, have recently advanced into affordable, flexible and accurate techniques. Forensic postmortem investigation as performed on a daily basis, however, has not yet fully benefited from their potential. In the present paper, we tested two approaches to 3D external body documentation - digital camera-based photogrammetry combined with commercial Agisoft PhotoScan(®) software and stereophotogrammetry-based Vectra H1(®), a portable handheld surface scanner. In order to conduct the study, three human subjects were selected: a living person, a 25-year-old female, and two forensic cases admitted for postmortem examination at the Department of Forensic Medicine, Hradec Králové, Czech Republic (both 63-year-old males), one dead of traumatic, self-inflicted injuries (suicide by hanging), the other diagnosed with heart failure. All three cases were photographed in a 360° manner with a Nikon 7000 digital camera and simultaneously documented with the handheld scanner. In addition to having recorded the pre-autopsy phase of the forensic cases, both techniques were employed in various stages of autopsy. The sets of collected digital images (approximately 100 per case) were further processed to generate point clouds and 3D meshes. Final 3D models (a pair per individual) were counted for numbers of points and polygons, then assessed visually and compared quantitatively using an ICP alignment algorithm and a point cloud comparison technique based on closest point-to-point distances. Both techniques were proven to be easy to handle and equally laborious. While collecting the images at autopsy took around 20 min, the post-processing was much more time-demanding and required up to 10 h of computation time. Moreover, for the full-body scanning the post-processing of the handheld scanner required rather time-consuming manual image alignment. In all instances the applied approaches produced high-resolution photorealistic, real-sized or easy-to-calibrate 3D surface models. Both methods equally failed when the scanned body surface was covered with body hair or reflective moist areas. Still, it can be concluded that single camera close range photogrammetry and optical surface scanning using the Vectra H1 scanner represent relatively low-cost solutions which were shown to be beneficial for postmortem body documentation in forensic pathology. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
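The closest point-to-point comparison mentioned above can be sketched in a few lines of Python with a k-d tree; this is only an illustration of the distance computation and assumes the two clouds have already been aligned (for example by a prior ICP step in dedicated software). The placeholder clouds and noise level are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, compared):
    """Distance from every point of `compared` to its nearest neighbour in `reference`.
    Both clouds are assumed to be already aligned (e.g. by a prior ICP step)."""
    distances, _ = cKDTree(reference).query(compared, k=1)
    return distances

rng = np.random.default_rng(1)
photogrammetry_cloud = rng.random((50_000, 3))                                   # placeholder clouds
scanner_cloud = photogrammetry_cloud + 0.002 * rng.standard_normal((50_000, 3))  # simulated scanner noise

d = cloud_to_cloud_distances(photogrammetry_cloud, scanner_cloud)
print(f"mean distance {d.mean():.4f}, 95th percentile {np.percentile(d, 95):.4f}")
```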
Pernomian, Larissa; Gomes, Mayara Santos; Moreira, Josimar Dornelas; da Silva, Carlos Henrique Tomich de Paula; Rosa, Joaquin Maria Campos; Cardoso, Cristina Ribeiro de Barros
2017-01-01
One of the cornerstones of rational drug development is the measurement of molecular parameters derived from ligand-receptor interaction, which guides therapeutic windows definition. Over the last decades, radioligand binding has provided valuable contributions in this field as a key method for such purposes. However, its limitations spurred the development of more exquisite techniques for determining such parameters. For instance, safety risks related to radioactivity waste, expensive and controlled disposal of radioisotopes, radiotracer separation-dependence for affinity analysis, and one-site mathematical models-based fitting of data make radioligand binding a suboptimal approach in providing measures of actual affinity conformations from ligands and G protein-coupled receptors (GPCR). Current advances on high-throughput screening (HTS) assays have markedly extended the options of sparing, sensitive ways for monitoring ligand affinity. The advent of the novel bioluminescent donor NanoLuc luciferase (Nluc), engineered from Oplophorus gracilirostris luciferase, allowed fitting bioluminescence resonance energy transfer (BRET) for monitoring ligand binding. Such novel approach named Nluc-based BRET (NanoBRET) binding assay consists of a real-time homogeneous proximity assay that overcomes radioligand binding limitations but ensures the quality in affinity measurements. Here, we cover the main advantages of the NanoBRET protocol and the undesirable drawbacks of radioligand binding as molecular methods that span the pharmacological toolbox applied to Drug Discovery. Also, we provide a novel perspective for the application of NanoBRET technology in affinity assays for multiple-state binding mechanisms involving oligomerization and/or functional biased selectivity. This new angle was proposed based on specific biophysical criteria required for the real-time homogeneity assigned to the proximity NanoBRET protocol. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Unsupervised data mining in nanoscale x-ray spectro-microscopic study of NdFeB magnet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Xiaoyue; Yang, Feifei; Antono, Erin
Novel developments in X-ray based spectro-microscopic characterization techniques have increased the rate of acquisition of spatially resolved spectroscopic data by several orders of magnitude over what was possible a few years ago. This accelerated data acquisition, with high spatial resolution at nanoscale and sensitivity to subtle differences in chemistry and atomic structure, provides a unique opportunity to investigate hierarchically complex and structurally heterogeneous systems found in functional devices and materials systems. However, handling and analyzing the large volume of data generated poses significant challenges. Here we apply an unsupervised data-mining algorithm known as DBSCAN to study a rare-earth element based permanent magnet material, Nd2Fe14B. We are able to reduce a large spectro-microscopic dataset of over 300,000 spectra to 3, preserving much of the underlying information. Scientists can easily and quickly analyze in detail three characteristic spectra. Our approach can rapidly provide a concise representation of a large and complex dataset to materials scientists and chemists. For instance, it shows that the surface of the common Nd2Fe14B magnet is chemically and structurally very different from the bulk, suggesting a possible surface alteration effect, perhaps due to corrosion, which could affect the material's overall properties.
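A minimal scikit-learn sketch of the same workflow (DBSCAN clustering of per-pixel spectra, followed by summarising each cluster by a characteristic mean spectrum) is shown below. The synthetic spectra, the standardisation step, and the eps/min_samples values are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
templates = rng.random((3, 80))                       # three underlying spectral signatures
spectra = np.vstack([t + 0.02 * rng.standard_normal((10_000, 80)) for t in templates])
# (the real dataset contained over 300,000 measured spectra)

X = StandardScaler().fit_transform(spectra)
labels = DBSCAN(eps=1.5, min_samples=50).fit_predict(X)    # label -1 marks noise/outlier spectra

# Characteristic spectrum of each cluster = mean spectrum of its members.
characteristic = {c: spectra[labels == c].mean(axis=0) for c in set(labels) if c != -1}
print(f"{len(spectra)} spectra reduced to {len(characteristic)} characteristic spectra")
```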
mirPub: a database for searching microRNA publications.
Vergoulis, Thanasis; Kanellos, Ilias; Kostoulas, Nikos; Georgakilas, Georgios; Sellis, Timos; Hatzigeorgiou, Artemis; Dalamagas, Theodore
2015-05-01
Identifying, amongst millions of publications available in MEDLINE, those that are relevant to specific microRNAs (miRNAs) of interest based on keyword search faces major obstacles. References to miRNA names in the literature often deviate from standard nomenclature for various reasons, since even the official nomenclature evolves. For instance, a single miRNA name may identify two completely different molecules or two different names may refer to the same molecule. mirPub is a database with a powerful and intuitive interface, which facilitates searching for miRNA literature, addressing the aforementioned issues. To provide effective search services, mirPub applies text mining techniques on MEDLINE, integrates data from several curated databases and exploits data from its user community following a crowdsourcing approach. Other key features include an interactive visualization service that illustrates intuitively the evolution of miRNA data, tag clouds summarizing the relevance of publications to particular diseases, cell types or tissues and access to TarBase 6.0 data to oversee genes related to miRNA publications. mirPub is freely available at http://www.microrna.gr/mirpub/. vergoulis@imis.athena-innovation.gr or dalamag@imis.athena-innovation.gr Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
Towards Trustable Digital Evidence with PKIDEV: PKI Based Digital Evidence Verification Model
NASA Astrophysics Data System (ADS)
Uzunay, Yusuf; Incebacak, Davut; Bicakci, Kemal
How to Capture and Preserve Digital Evidence Securely? For the investigation and prosecution of criminal activities that involve computers, digital evidence collected at the crime scene is of vital importance. On one side, it is a very challenging task for forensics professionals to collect it without any loss or damage. On the other, there is the second problem of ensuring its integrity and authenticity in order to achieve legal acceptance in a court of law. By conceiving digital evidence simply as one instance of digital data, it is evident that modern cryptography offers elegant solutions for this second problem. However, to our knowledge, there is no previous work proposing a systematic model with a holistic view that addresses all the related security problems in this particular case of digital evidence verification. In this paper, we present PKIDEV (Public Key Infrastructure based Digital Evidence Verification model) as an integrated solution to provide security for the process of capturing and preserving digital evidence. PKIDEV employs, inter alia, cryptographic techniques like digital signatures and secure time-stamping, as well as the latest technologies such as GPS and EDGE. In our study, we also identify the problems public-key cryptography brings when it is applied to the verification of digital evidence.
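The cryptographic core of such a verification chain (hash, sign, later verify) can be sketched with the Python `cryptography` package as below. This is only an illustration of the signature step: the PKI certification of the device key, the trusted time-stamping authority, and the GPS/EDGE components of PKIDEV are outside this sketch, and the evidence blob and timestamp handling are assumptions.

```python
from datetime import datetime, timezone
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair of the evidence-collection device (in a full PKI it would be certified by a CA).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

evidence = b"raw disk image bytes ..."                         # placeholder evidence blob
timestamp = datetime.now(timezone.utc).isoformat().encode()    # a real system would use a trusted TSA
record = evidence + b"|" + timestamp

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(record, pss, hashes.SHA256())

# Later, a verifier checks integrity and authenticity; this raises InvalidSignature
# if the evidence or the timestamp has been altered since collection.
public_key.verify(signature, record, pss, hashes.SHA256())
print("evidence record verified")
```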
Oblique Intrathecal Injection in Lumbar Spine Surgery: A Technical Note.
Jewett, Gordon A E; Yavin, Daniel; Dhaliwal, Perry; Whittaker, Tara; Krupa, JoyAnne; Du Plessis, Stephan
2017-09-01
Intrathecal morphine (ITM) is an efficacious method of providing postoperative analgesia and reducing pain associated complications. Despite adoption in many surgical fields, ITM has yet to become a standard of care in lumbar spine surgery. Spine surgeons' reticence to make use of the technique may in part be attributed to concerns of precipitating a cerebrospinal fluid (CSF) leak. Herein we describe a method for oblique intrathecal injection during lumbar spine surgery to minimize risk of CSF leak. The dural sac is penetrated obliquely at a 30° angle to offset dural and arachnoid puncture sites. Oblique injection in instances of limited dural exposure is made possible by introducing a 60° bend to a standard 30-gauge needle. The technique was applied for injection of ITM or placebo in 104 cases of lumbar surgery in the setting of a randomized controlled trial. Injection was not performed in two cases (2/104, 1.9%) following preinjection dural tear. In the remaining 102 cases no instances of postoperative CSF leakage attributable to oblique intrathecal injection occurred. Three cases (3/102, 2.9%) of transient CSF leakage were observed immediately following intrathecal injection with no associated sequelae or requirement for postsurgical intervention. In two cases, the observed leak was repaired by sealing with fibrin glue, whereas in a single case the leak was self-limited requiring no intervention. Oblique dural puncture was not associated with increased incidence of postoperative CSF leakage. This safe and reliable method of delivery of ITM should therefore be routinely considered in lumbar spine surgery.
Front-end multiplexing—applied to SQUID multiplexing: Athena X-IFU and QUBIC experiments
NASA Astrophysics Data System (ADS)
Prele, D.
2015-08-01
As we have seen in the digital camera market, where sensor resolution has increased to "megapixels", all scientific and high-tech imagers (whatever the wavelength, from the radio to the X-ray range) also tend to keep increasing their pixel counts. So the constraints on front-end signal transmission increase too. An almost unavoidable solution to simplify the integration of large arrays of pixels is front-end multiplexing. Moreover, "simple" and "efficient" techniques allow integration of read-out multiplexers in the focal plane itself. For instance, CCD (Charge Coupled Device) technology has boosted the number of pixels in digital cameras. Indeed, it is precisely a planar technology which integrates both the sensors and a front-end multiplexed readout. In this context, front-end multiplexing techniques will be discussed for a better understanding of their advantages and their limits. Finally, the cases of astronomical instruments in the millimeter and X-ray ranges using SQUIDs (Superconducting QUantum Interference Devices) will be described.
Space Technology For Tuna Boats
NASA Technical Reports Server (NTRS)
1977-01-01
Freshly-caught tuna is stored below decks in wells cooled to about zero degrees by brine circulated through a refrigerating system. The wells formerly were insulated by cork or fiberglass, but both materials were subject to deterioration; cork, for instance, needs replacement every three years. The Campbell Machine Division of Campbell Industries, San Diego, which manufactures and repairs large boats for the commercial fishing industry, was looking for a better way to insulate tuna storage wells. Learning of the Rockwell technique, Campbell contracted for a test installation on one boat, then bought its own equipment and adopted the spray-foam procedure for their boats. The foam hardens after application. It not only is a superior insulator, it also is considerably lighter and easier to apply. Fishing industry spokesmen say that foam insulation is far more reliable, efficient and economical than prior techniques. More than 40 foam-insulated tuna boats, ranging in cost from $1 million to $4 million, have been built and sold. Principal customers are Ralston Purina's Van Camp Seafood Division and Star-Kist Inc.
ARTS. Accountability Reporting and Tracking System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.F.; Faccio, R.M.
ARTS is a micro-based prototype of the data elements, screens, and information processing rules that apply to the Accountability Reporting Program. The system focuses on the Accountability Event, which is an occurrence of incurring avoidable costs. The system must be able to CRUD (Create, Retrieve, Update, Delete) instances of the Accountability Event. Additionally, the system must provide for a review committee to update the event record with findings and determination information. Lastly, the system must provide for financial representatives to perform a cost reporting process.
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
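The original tool is R code for group-based trajectory modelling; purely as an illustration of the underlying idea (several fit criteria for different candidate class numbers, plotted side by side on one page), the Python sketch below uses a Gaussian mixture model and its BIC/AIC as stand-ins for the GBTM fit indices. The toy trajectories and the range of class numbers are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy repeated-measures data: three latent groups, six time points each.
trajectories = np.vstack([rng.normal(m, 1.0, size=(100, 6)) for m in (0, 3, 6)])

ks = range(1, 8)
bic, aic = [], []
for k in ks:
    gm = GaussianMixture(n_components=k, random_state=0).fit(trajectories)
    bic.append(gm.bic(trajectories))
    aic.append(gm.aic(trajectories))

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharex=True)
for ax, vals, name in zip(axes, (bic, aic), ("BIC", "AIC")):
    ax.plot(list(ks), vals, marker="o")
    ax.set_xlabel("number of latent classes")
    ax.set_ylabel(name)
fig.suptitle("Fit-criteria overview for class enumeration (single page)")
fig.tight_layout()
plt.show()
```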
Phosphate-Modified Nucleotides for Monitoring Enzyme Activity.
Ermert, Susanne; Marx, Andreas; Hacker, Stephan M
2017-04-01
Nucleotides modified at the terminal phosphate position have been proven to be interesting entities to study the activity of a variety of different protein classes. In this chapter, we present various types of modifications that were attached as reporter molecules to the phosphate chain of nucleotides and briefly describe the chemical reactions that are frequently used to synthesize them. Furthermore, we discuss a variety of applications of these molecules. Kinase activity, for instance, was studied by transfer of a phosphate modified with a reporter group to the target proteins. This allows not only studying the activity of kinases, but also identifying their target proteins. Moreover, kinases can also be directly labeled with a reporter at a conserved lysine using acyl-phosphate probes. Another important application for phosphate-modified nucleotides is the study of RNA and DNA polymerases. In this context, single-molecule sequencing is made possible using detection in zero-mode waveguides, nanopores or by a Förster resonance energy transfer (FRET)-based mechanism between the polymerase and a fluorophore-labeled nucleotide. Additionally, fluorogenic nucleotides that utilize an intramolecular interaction between a fluorophore and the nucleobase or an intramolecular FRET effect have been successfully developed to study a variety of different enzymes. Finally, also some novel techniques applying electron paramagnetic resonance (EPR)-based detection of nucleotide cleavage or the detection of the cleavage of fluorophosphates are discussed. Taken together, nucleotides modified at the terminal phosphate position have been applied to study the activity of a large diversity of proteins and are valuable tools to enhance the knowledge of biological systems.
Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Biamonte, Jacob
2016-10-01
Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
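A compact numerical sketch of one common spectral-entropy construction is given below: a density matrix proportional to e^{-βL} is built from the graph Laplacian, its Von Neumann entropy is computed from the eigenvalues, and a Jensen-Shannon-type divergence compares two networks. The β value and the toy graphs are assumptions for illustration; consult the paper for the exact definitions and normalisations it adopts.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def density_matrix(G, beta=1.0):
    """rho = exp(-beta * L) / Tr exp(-beta * L), built from the graph Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    rho = expm(-beta * L)
    return rho / np.trace(rho)

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def js_divergence(G1, G2, beta=1.0):
    r1, r2 = density_matrix(G1, beta), density_matrix(G2, beta)
    mix = 0.5 * (r1 + r2)
    return von_neumann_entropy(mix) - 0.5 * (von_neumann_entropy(r1) + von_neumann_entropy(r2))

G_er = nx.erdos_renyi_graph(60, 0.1, seed=1)
G_ba = nx.barabasi_albert_graph(60, 3, seed=1)
print(js_divergence(G_er, G_ba))      # larger values indicate more dissimilar spectral structure
```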
NASA Astrophysics Data System (ADS)
Hu, X.; Maiti, R.; Liu, X.; Gerhardt, L. C.; Lee, Z. S.; Byers, R.; Franklin, S. E.; Lewis, R.; Matcher, S. J.; Carré, M. J.
2016-03-01
Bio-mechanical properties of the human skin deformed by external forces at different skin/material interfaces attract much attention in medical research. For instance, such properties are important design factors when one designs a healthcare device, i.e., a device that might be applied directly at the skin/device interface. In this paper, we investigated the bio-mechanical properties, i.e., surface strain, morphological changes of the skin layers, etc., of the human finger-pad and forearm skin as a function of applied pressure by utilizing two non-invasive techniques: optical coherence tomography (OCT) and digital image correlation (DIC). Skin deformation results for the human finger-pad and forearm skin were obtained while pressed against a transparent optical glass plate under the action of a 0.5-24 N force and while stretching naturally from 90° flexion to 180° full extension, respectively. The obtained OCT images showed the deformation beneath the skin surface, whereas the DIC images gave overall information about strain at the surface.
Wilk, Szymon; Kezadri-Hamiaz, Mounira; Rosu, Daniela; Kuziemsky, Craig; Michalowski, Wojtek; Amyot, Daniel; Carrier, Marc
2016-02-01
In healthcare organizations, clinical workflows are executed by interdisciplinary healthcare teams (IHTs) that operate in ways that are difficult to manage. Responding to a need to support such teams, we designed and developed the MET4 multi-agent system that allows IHTs to manage patients according to presentation-specific clinical workflows. In this paper, we describe a significant extension of the MET4 system that allows for supporting rich team dynamics (understood as team formation, management and task-practitioner allocation), including selection and maintenance of the most responsible physician and more complex rules of selecting practitioners for the workflow tasks. In order to develop this extension, we introduced three semantic components: (1) a revised ontology describing concepts and relations pertinent to IHTs, workflows, and managed patients, (2) a set of behavioral rules describing the team dynamics, and (3) an instance base that stores facts corresponding to instances of concepts from the ontology and to relations between these instances. The semantic components are represented in first-order logic and they can be automatically processed using theorem proving and model finding techniques. We employ these techniques to find models that correspond to specific decisions controlling the dynamics of IHT. In the paper, we present the design of extended MET4 with a special focus on the new semantic components. We then describe its proof-of-concept implementation using the WADE multi-agent platform and the Z3 solver (theorem prover/model finder). We illustrate the main ideas discussed in the paper with a clinical scenario of an IHT managing a patient with chronic kidney disease.
Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.
2010-01-01
A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components.The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. ?? 2010 Gesellschaft f??r ??kologie.
Multiattribute evaluation of regional cotton variety trials.
Basford, K E; Kroonenberg, P M; Delacy, I H; Lawrence, P K
1990-02-01
The Australian Cotton Cultivar Trials (ACCT) are designed to investigate various cotton [Gossypium hirsutum (L.)] lines in several locations in New South Wales and Queensland each year. If these lines are to be assessed by the simultaneous use of yield and lint quality data, then a multivariate technique applicable to three-way data is desirable. Two such techniques, the mixture maximum likelihood method of clustering and three-mode principal component analysis, are described and used to analyze these data. Applied together, the methods enhance each other's usefulness in interpreting the information on the line response patterns across the locations. The methods provide a good integration of the responses across environments of the entries for the different attributes in the trials. For instance, using yield as the sole criterion, the excellence of the namcala and coker group for quality is overlooked. The analyses point to a decision in favor of either high yields of moderate to good quality lint or moderate yield but superior lint quality. The decisions indicated by the methods confirmed the selections made by the plant breeders. The procedures provide a less subjective, relatively easy to apply and interpret analytical method of describing the patterns of performance and associations in complex multiattribute and multilocation trials. This should lead to more efficient selection among lines in such trials.
Reducing Annotation Effort Using Generalized Expectation Criteria
2007-11-30
constraints additionally consider input variables. Active learning is a related problem in which the learner can choose the particular instances to be labeled. In pool-based active learning [Cohn et al., 1994], the learner has access to a set of unlabeled instances, and can choose the instance that has the highest expected utility according to some metric. A standard pool-based active learning method is uncertainty sampling [Lewis and Catlett
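The uncertainty sampling baseline mentioned above can be sketched in a few lines of scikit-learn: at each round the pool instance about which the current model is least confident is queried and added to the labeled set. This sketch illustrates only that baseline, not the generalized expectation criteria of the report; the dataset, classifier, and number of rounds are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(10))                       # small seed set of labeled instances
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(50):                             # 50 labeling rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)       # least-confident criterion
    query = pool[int(np.argmax(uncertainty))]   # most uncertain instance in the pool
    labeled.append(query)                       # the oracle supplies y[query]
    pool.remove(query)

print("accuracy with", len(labeled), "labels:", model.score(X, y))
```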
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
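A toy numerical illustration of the minimal gap (not the paper's analytical treatment) is sketched below: the interpolating Hamiltonian H(s) = (1-s)H_B + s H_P is built for a tiny set-partition instance, with H_P the squared weighted sum of Pauli-Z operators and H_B a transverse-field driver; a penalty keeps the spectrum in the even sector of the global spin-flip symmetry, which the dynamics preserves. The specific numbers and the penalty construction are assumptions for illustration.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def on_qubit(op, i, n):
    """Tensor product placing `op` on qubit i of an n-qubit register."""
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

numbers = [8, 5, 4, 3, 1]           # toy set-partition instance with a unique optimal split
n, dim = len(numbers), 2 ** len(numbers)

HB = -sum(on_qubit(sx, i, n) for i in range(n))                  # transverse-field driver
Sz = sum(w * on_qubit(sz, i, n) for i, w in enumerate(numbers))
HP = Sz @ Sz                                                     # cost = squared partition residue

# Global spin flip commutes with H(s); penalising the odd-parity sector makes the two
# lowest eigenvalues below belong to the physically relevant symmetric sector.
parity = reduce(np.kron, [sx] * n)
penalty = 1e4 * (np.eye(dim) - parity)

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1 - s) * HB + s * HP + penalty)
    gaps.append(evals[1] - evals[0])

print("minimal spectral gap along the interpolation:", min(gaps))
```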
An innovative approach for rubber dam isolation of root end tip: A case report.
Mittal, Sunandan; Kumar, Tarun; Mittal, Shifali; Sharma, Jyotika
2015-01-01
The success of an apicoectomy with a retrofilling is dependent upon obtaining an acceptable apical seal. The placement of the variously approved retrograde materials requires adequate access, visibility, lighting, and a sterile dry environment. There are instances, however, in which it is difficult to use the rubber dam. One such instance is during retrograde filling. This case report highlights an innovative technique for rubber dam isolation of root end retrograde filling.
Nef, Tobias; Urwyler, Prabitha; Büchler, Marcel; Tarnanas, Ioannis; Stucki, Reto; Cazzoli, Dario; Müri, René; Mosimann, Urs
2015-05-21
Smart homes for the aging population have recently started attracting the attention of the research community. The "health state" of smart homes is comprised of many different levels; starting with the physical health of citizens, it also includes longer-term health norms and outcomes, as well as the arena of positive behavior changes. One of the problems of interest is to monitor the activities of daily living (ADL) of the elderly, aiming at their protection and well-being. For this purpose, we installed passive infrared (PIR) sensors to detect motion in a specific area inside a smart apartment and used them to collect a set of ADL. In a novel approach, we describe a technology that allows the ground truth collected in one smart home to train activity recognition systems for other smart homes. We asked the users to label all instances of all ADL only once and subsequently applied data mining techniques to cluster in-home sensor firings. Each cluster would therefore represent the instances of the same activity. Once the clusters were associated to their corresponding activities, our system was able to recognize future activities. To improve the activity recognition accuracy, our system preprocessed raw sensor data by identifying overlapping activities. To evaluate the recognition performance from a 200-day dataset, we implemented three different active learning classification algorithms and compared their performance: naive Bayesian (NB), support vector machine (SVM) and random forest (RF). Based on our results, the RF classifier recognized activities with an average specificity of 96.53%, a sensitivity of 68.49%, a precision of 74.41% and an F-measure of 71.33%, outperforming both the NB and SVM classifiers. Further clustering markedly improved the results of the RF classifier. An activity recognition system based on PIR sensors in conjunction with a clustering classification approach was able to detect ADL from datasets collected from different homes. Thus, our PIR-based smart home technology could improve care and provide valuable information to better understand the functioning of our societies, as well as to inform both individual and collective action in a smart city scenario.
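A heavily simplified sketch of the two-stage idea above (cluster sensor-firing episodes, then train a classifier such as the random forest used in the study) is shown below. The synthetic episode features, the use of KMeans as a stand-in for the paper's clustering step, and all parameter values are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic feature vectors for PIR firing episodes: duration, hour of day, counts per sensor.
episodes = np.column_stack([
    rng.exponential(5, 600),                  # episode duration (minutes)
    rng.integers(0, 24, 600),                 # hour of day
    rng.poisson(3, (600, 5)),                 # firing counts of 5 PIR sensors
]).astype(float)
adl_labels = rng.integers(0, 4, 600)          # one-off user-provided ADL labels

# Step 1: cluster episodes so that each cluster gathers instances of the same activity.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(episodes)

# Step 2: train a random forest to recognise future activities from episode features.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(rf, episodes, adl_labels, cv=5).mean())
```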
Using Digital Logs to Reduce Academic Misdemeanour by Students in Digital Forensic Assessments
ERIC Educational Resources Information Center
Lallie, Harjinder Singh; Lawson, Phillip; Day, David J.
2011-01-01
Identifying academic misdemeanours and actual applied effort in student assessments involving practical work can be problematic. For instance, it can be difficult to assess the actual effort that a student applied, the sequence and method applied, and whether there was any form of collusion or collaboration. In this paper we propose a system of…
Westbrook, Johanna I; Coiera, Enrico W; Braithwaite, Jeffrey
2005-01-01
Online evidence retrieval systems are one potential tool in supporting evidence-based practice. We have undertaken a program of research to investigate how hospital-based clinicians (doctors, nurses and allied health professionals) use these systems, factors influencing use and their impact on decision-making and health care delivery. A central component of this work has been the development and testing of a broad range of evaluation techniques. This paper provides an overview of the results obtained from three stages of this evaluation and details the results derived from the final stage, which sought to test two methods for assessing the integration of an online evidence system and its impact on decision making and patient care. The critical incident and journey mapping techniques were applied. Semi-structured interviews were conducted with 29 clinicians who were experienced users of the online evidence system. Clinicians were asked to describe recent instances in which the information obtained using the online evidence system was especially helpful with their work. A grounded approach to data analysis was taken, producing three categories of impact. The journey mapping technique was adapted as a method to describe and quantify clinicians' integration of CIAP into their practice and the impact of this on patient care. The analogy of a journey is used to capture the many stages in this integration process, from introduction to the system to full integration into everyday clinical practice with measurable outcomes. Transcribed interview accounts of system use were mapped against the journey stages and scored. Clinicians generated 85 critical incidents and one quarter of these provided specific examples of system use leading to improvements in patient care. The journey mapping technique proved to be a useful method for providing a quantification of the ways and extent to which clinicians had integrated system use into practice, and insights into how information systems can influence organisational culture. Further work is required on this technique to assess its value as an evaluation method. The study demonstrates the strength of a triangulated evidence approach to assessing the use and impact of online clinical evidence systems.
Concrete waterproofing in nuclear industry.
Scherbyna, Alexander N; Urusov, Sergei V
2005-01-01
One of the main points of aggregate safety during the transportation and storage of radioactive materials is to provide waterproofing for all constructions having direct contact with radiating substances, while also providing strength, seismic shielding, etc. This is the problem with all waterside structures in the nuclear industry and with concrete installations for the treatment and storage of radioactive materials. In this connection, the problem of developing efficient techniques both for the repair of operating constructions and for the waterproofing of new objects of the specified assignment is a genuine one. Various techniques of concrete waterproofing are widely applied in the world today. However, under radiation many of these techniques can cause irreparable damage to the durability and reliability of a concrete construction rather than bring benefit; for instance, when waterproofing materials contain organic constituents, polymers, etc. Application of new technology or materials in basic construction elements requires in-depth analysis and thorough testing. The price of an error might be very large. A comparative analysis shows that one of the most promising types of waterproofing materials for radiation-loaded concrete constructions is "integral capillary systems" (ICS). Tests of the radiation, thermal and strength stability of ICS and ICS-treated concrete samples were initiated and carried out at RFNC-VNIITF. The main result is that applying ICS increases the waterproofing and strength properties of concrete under radiation. The paper describes the research strategy, the tests and their results, and the planning of new tests.
Functional Measurement: An Incredibly Flexible Tool
ERIC Educational Resources Information Center
Mullet, Etienne; Morales Martinez, Guadalupe Elizabeth; Makris, Ioannis; Roge, Bernadette; Munoz Sastre, Maria Teresa
2012-01-01
Functional Measurement (FM) has been applied to a variety of settings that can be considered "extreme" settings; that is, settings involving participants with severe cognitive disabilities or involving unusual stimulus material. FM has, for instance, been successfully applied for analyzing (a) numerosity judgments among children as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cyr, Eric C.; Shadid, John N.; Tuminaro, Raymond S.
This study describes the design of Teko, an object-oriented C++ library for implementing advanced block preconditioners. Mathematical design criteria that elucidate the needs of block preconditioning libraries and techniques are explained and shown to motivate the structure of Teko. For instance, a principal design choice was for Teko to strongly reflect the mathematical statement of the preconditioners to reduce development burden and permit focus on the numerics. Additional mechanisms are explained that provide a pathway to developing an optimized production capable block preconditioning capability with Teko. Finally, Teko is demonstrated on fluid flow and magnetohydrodynamics applications. In addition to highlighting the features of the Teko library, these new results illustrate the effectiveness of recent preconditioning developments applied to advanced discretization approaches.
Makeyev, Oleksandr; Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Ed; Neuman, Michael
2007-01-01
In this paper we propose a sound recognition technique based on the limited receptive area (LIRA) neural classifier and continuous wavelet transform (CWT). LIRA neural classifier was developed as a multipurpose image recognition system. Previous tests of LIRA demonstrated good results in different image recognition tasks including: handwritten digit recognition, face recognition, metal surface texture recognition, and micro work piece shape recognition. We propose a sound recognition technique where scalograms of sound instances serve as inputs of the LIRA neural classifier. The methodology was tested in recognition of swallowing sounds. Swallowing sound recognition may be employed in systems for automated swallowing assessment and diagnosis of swallowing disorders. The experimental results suggest high efficiency and reliability of the proposed approach.
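The front end of such a pipeline (a scalogram per sound instance, flattened into a feature vector for a classifier) can be sketched as follows; a plain logistic regression stands in for the LIRA neural classifier, and the wavelet, scales, downsampling factor, and synthetic sounds are assumptions for illustration only.

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def scalogram(signal, scales=np.arange(1, 33)):
    """Continuous wavelet transform magnitude image (scales x time)."""
    coeffs, _ = pywt.cwt(signal, scales, "morl")
    return np.abs(coeffs)

rng = np.random.default_rng(0)
n, length = 200, 1024
sounds = rng.standard_normal((n, length))            # placeholder sound instances
labels = rng.integers(0, 2, n)                        # swallow vs. non-swallow

# Downsample each scalogram in time and flatten it into a feature vector for the classifier
# (logistic regression here, standing in for the LIRA neural classifier).
features = np.array([scalogram(s)[:, ::16].ravel() for s in sounds])
print("CV accuracy:", cross_val_score(LogisticRegression(max_iter=2000), features, labels, cv=5).mean())
```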
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
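The essential steps described above (discrete curvature along the boundary, a threshold that discards abnormal-curvature points, and a least-squares ellipse fit to the survivors) are sketched below on a synthetic, partially occluded pupil boundary. The curvature band, the occlusion model, and the algebraic conic fit are assumptions for illustration, not the published algorithm's exact parameters.

```python
import numpy as np

def curvature(x, y):
    """Signed curvature of a sampled boundary via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)

def fit_ellipse_center(x, y):
    """Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y = 1; return its center."""
    design = np.column_stack([x ** 2, x * y, y ** 2, x, y])
    A, B, C, D, E = np.linalg.lstsq(design, np.ones_like(x), rcond=None)[0]
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])

# Synthetic pupil boundary: a circle of radius 40 px partially occluded by an eyelid.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x, y = 120 + 40 * np.cos(t), 100 + 40 * np.sin(t)
occluded = y > 130
y[occluded] = 130                                   # eyelid flattens the top of the boundary

k = curvature(x, y)
valid = (np.abs(k) > 1 / 100) & (np.abs(k) < 1 / 20)   # plausible band for pupil radii 20-100 px
cx, cy = fit_ellipse_center(x[valid], y[valid])
print(f"estimated pupil center: ({cx:.1f}, {cy:.1f})")  # approximately the true (120, 100)
```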
A new communications technique for the nonvocal person, using the Apple II Computer.
Seamone, W
1982-01-01
The purpose of this paper is to describe a technique for nonvocal personal communication for the severely handicapped person, using the Apple II computer system and standard commercially available software diskettes (Visi-Calc). The user's input in a pseudo-Morse code is generated via minute chin motions or limited finger motions applied to a suitably configured two-switch device, and input via the JHU/APL Morse code interface card. The commands and features of the program's row-column matrix, originally intended and widely used for financial management, are used here to call up and modify a large array of stored sentences which can be useful in personal communication. It is not known at this time if the system is in fact cost-effective for the sole purpose of nonvocal communication, since system tradeoff studies have not been made relative to other techniques. However, in some instances an Apple computer may be already available for other purposes at the institution or in the home, and the system described could simply be another utilization of that personal computer. In any case, the system clearly does not meet the requirement of portability. No special components (except for the JHU/APL Morse interface card) and no special programming experience are required to duplicate the communications technique described.
Magnetocaloric cycle with six stages: Possible application of graphene at low temperature
NASA Astrophysics Data System (ADS)
Reis, M. S.
2015-09-01
The present work proposes a thermodynamic hexacycle based on the magnetocaloric oscillations of graphene, which has either a positive or negative adiabatic temperature change depending on the final value of the magnetic field change. For instance, for graphenes at 25 K, an applied field of 2.06 T/1.87 T promotes a temperature change of ca. -25 K/+3 K. The hexacycle is based on the Brayton cycle and instead of the usual four steps, it has six stages, taking advantage of the extra cooling provided by the inverse adiabatic temperature change. This proposal opens doors for magnetic cooling applications at low temperatures.
Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1992-01-01
Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.
User oriented data processing at the University of Michigan
NASA Technical Reports Server (NTRS)
Thomson, F. J.
1970-01-01
The multispectral techniques have shown themselves capable of solving problems in a large number of user areas. The results obtained are in some instances quite impressive. In many instances, the multispectral detection of various phenomena is an empirical fact for which there is little physical explanation today. To date, most of the user applications that have been addressed are exploratory in nature. The closest approximation to an operational situation encountered so far is that of the survey of wetlands in North Dakota reported in this paper.
An Efficient Statistical Computation Technique for Health Care Big Data using R
NASA Astrophysics Data System (ADS)
Sushma Rani, N.; Srinivasa Rao, P., Dr; Parimala, P.
2017-08-01
Due to changes in living conditions and other factors, many critical health-related problems are arising. Diagnosis of a problem at an earlier stage increases the chances of survival and fast recovery, which reduces the recovery time and the cost associated with treatment. One such medical issue is cancer, and breast cancer has been identified as the second leading cause of cancer death. If detected at an early stage it can be cured. Once a patient is detected with a breast cancer tumor, it should be classified as cancerous or non-cancerous. So the paper uses the k-nearest neighbors (KNN) algorithm, which is one of the simplest machine learning algorithms and is an instance-based learning algorithm, to classify the data. Day to day, new records are added, which increases the amount of data to be classified and tends toward a big data problem. The algorithm is implemented in R, which is one of the most popular platforms for applying machine learning algorithms in statistical computing. Experimentation is conducted by using various classification evaluation metrics for various values of k. The results show that the KNN algorithm outperforms existing models.
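The paper implements KNN in R; purely as an illustration of the same evaluation idea (cross-validated accuracy across several values of k on a breast cancer dataset), the Python/scikit-learn sketch below uses the bundled Wisconsin breast cancer data, which is an assumption and not necessarily the dataset used in the study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)      # benign vs. malignant tumours

# Evaluate KNN for several values of k, mirroring the experimentation described above.
for k in (1, 3, 5, 7, 9, 15):
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    score = cross_val_score(clf, X, y, cv=10).mean()
    print(f"k={k:2d}  mean CV accuracy={score:.3f}")
```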
McCann, Joshua C.; Wickersham, Tryon A.; Loor, Juan J.
2014-01-01
Diversity in the forestomach microbiome is one of the key features of ruminant animals. The diverse microbial community adapts to a wide array of dietary feedstuffs and management strategies. Understanding rumen microbiome composition, adaptation, and function has global implications ranging from climatology to applied animal production. Classical knowledge of rumen microbiology was based on anaerobic, culture-dependent methods. Next-generation sequencing and other molecular techniques have uncovered novel features of the rumen microbiome. For instance, pyrosequencing of the 16S ribosomal RNA gene has revealed the taxonomic identity of bacteria and archaea to the genus level, and when complemented with barcoding adds multiple samples to a single run. Whole genome shotgun sequencing generates true metagenomic sequences to predict the functional capability of a microbiome, and can also be used to construct genomes of isolated organisms. Integration of high-throughput data describing the rumen microbiome with classic fermentation and animal performance parameters has produced meaningful advances and opened additional areas for study. In this review, we highlight recent studies of the rumen microbiome in the context of cattle production focusing on nutrition, rumen development, animal efficiency, and microbial function. PMID:24940050
Beltman, Joost B; Urbanus, Jos; Velds, Arno; van Rooij, Nienke; Rohr, Jan C; Naik, Shalin H; Schumacher, Ton N
2016-04-02
Next generation sequencing (NGS) of amplified DNA is a powerful tool to describe genetic heterogeneity within cell populations that can both be used to investigate the clonal structure of cell populations and to perform genetic lineage tracing. For applications in which both abundant and rare sequences are biologically relevant, the relatively high error rate of NGS techniques complicates data analysis, as it is difficult to distinguish rare true sequences from spurious sequences that are generated by PCR or sequencing errors. This issue, for instance, applies to cellular barcoding strategies that aim to follow the amount and type of offspring of single cells, by supplying these with unique heritable DNA tags. Here, we use genetic barcoding data from the Illumina HiSeq platform to show that straightforward read threshold-based filtering of data is typically insufficient to filter out spurious barcodes. Importantly, we demonstrate that specific sequencing errors occur at an approximately constant rate across different samples that are sequenced in parallel. We exploit this observation by developing a novel approach to filter out spurious sequences. Application of our new method demonstrates its value in the identification of true sequences amongst spurious sequences in biological data sets.
NASA Astrophysics Data System (ADS)
Cicone, Antonio; Zhou, Haomin; Piersanti, Mirko; Materassi, Massimo; Spogli, Luca
2017-04-01
Nonlinear and nonstationary signals are ubiquitous in real life. Their decomposition and analysis is of crucial importance in many research fields. Traditional techniques, like the Fourier and wavelet transforms, have proved to be limited in this context. In the last two decades new kinds of nonlinear methods have been developed which are able to unravel hidden features of these kinds of signals. In this talk we will review the state of the art and present a new method, called Adaptive Local Iterative Filtering (ALIF). This method, developed originally to study one-dimensional signals, unlike any other technique proposed so far, can be easily generalized to study two- or higher-dimensional signals. Furthermore, unlike most similar methods, it does not require any a priori assumption on the signal itself, so the method can be applied as is to any kind of signal. Applications of the ALIF algorithm to the analysis of real-life signals will be presented, for instance the behavior of the water level near the coastline in the presence of a tsunami, the length-of-day signal, the temperature and pressure measured at ground level on a global grid, and the radio power scintillation from GNSS signals.
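As a toy illustration of the iterative-filtering idea behind such methods (repeatedly subtracting a local mean to isolate a fast oscillatory component), the sketch below uses a fixed-length triangular filter; this is not the ALIF algorithm itself, which chooses the filter length adaptively and locally, and the window length and test signal are assumptions.

```python
import numpy as np

def local_mean(x, window):
    """Local weighted mean with a triangular (Bartlett) kernel."""
    kernel = np.bartlett(window)
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode="same")

def iterative_filtering_component(x, window, n_iter=30):
    """Extract one fast oscillatory component by repeatedly removing a local mean.
    A fixed filter length is used here; ALIF adapts the length locally instead."""
    comp = x.copy()
    for _ in range(n_iter):
        comp = comp - local_mean(comp, window)
    return comp

t = np.linspace(0.0, 10.0, 2000)                                         # 200 Hz sampling
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)   # fast + slow oscillations

fast = iterative_filtering_component(signal, window=80)   # ~5 Hz component (edge effects ignored)
slow = signal - fast                                      # remaining slow trend
```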
Studies of the micromorphology of sputtered TiN thin films by autocorrelation techniques
NASA Astrophysics Data System (ADS)
Smagoń, Kamil; Stach, Sebastian; Ţălu, Ştefan; Arman, Ali; Achour, Amine; Luna, Carlos; Ghobadi, Nader; Mardani, Mohsen; Hafezi, Fatemeh; Ahmadpourian, Azin; Ganji, Mohsen; Grayeli Korpi, Alireza
2017-12-01
Autocorrelation techniques are crucial tools for the study of the micromorphology of surfaces: They provide the description of anisotropic properties and the identification of repeated patterns on the surface, facilitating the comparison of samples. In the present investigation, some fundamental concepts of these techniques including the autocorrelation function and autocorrelation length have been reviewed and applied in the study of titanium nitride thin films by atomic force microscopy (AFM). The studied samples were grown on glass substrates by reactive magnetron sputtering at different substrate temperatures (from 25 °C to 400 °C), and their micromorphology was studied by AFM. The obtained AFM data were analyzed using MountainsMap Premium software obtaining the correlation function, the structure of isotropy and the spatial parameters according to ISO 25178 and EUR 15178N. These studies indicated that the substrate temperature during the deposition process is an important parameter to modify the micromorphology of sputtered TiN thin films and to find optimized surface properties. For instance, the autocorrelation length exhibited a maximum value for the sample prepared at a substrate temperature of 300 °C, and the sample obtained at 400 °C presented a maximum angle of the direction of the surface structure.
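As a concrete illustration of the quantities involved, the snippet below computes a normalized autocorrelation function for a synthetic isotropic surface via the Wiener-Khinchin theorem and reads off an autocorrelation length as the smallest lag at which the ACF decays to 1/e (the decay threshold used for the Sal parameter in ISO 25178). The synthetic surface, grid size and threshold are illustrative; this is not the MountainsMap analysis applied to the AFM data above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic isotropic rough surface: white noise smoothed by a Gaussian kernel
# with a known correlation length (in pixels), standing in for AFM height data.
n, corr_len = 256, 8.0
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
gauss = np.exp(-2 * (np.pi * corr_len) ** 2 * (fx**2 + fy**2))
noise = rng.standard_normal((n, n))
surface = np.real(np.fft.ifft2(np.fft.fft2(noise) * gauss))

# Autocorrelation via the Wiener-Khinchin theorem: ACF = IFFT(|FFT(z - mean)|^2),
# normalized so that the zero-lag value equals 1.
z = surface - surface.mean()
acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(z)) ** 2))
acf /= acf[0, 0]

# Autocorrelation length: smallest (circular) lag at which the ACF falls to 1/e.
lagx = np.minimum(np.arange(n), n - np.arange(n))[:, None]
lagy = np.minimum(np.arange(n), n - np.arange(n))[None, :]
radius = np.hypot(lagx, lagy)
below = radius[acf < 1.0 / np.e]
sal = below.min() if below.size else np.inf
print(f"estimated autocorrelation length: {sal:.1f} px")
```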
Quantum Error Correction for Minor Embedded Quantum Annealing
NASA Astrophysics Data System (ADS)
Vinci, Walter; Paz Silva, Gerardo; Mishra, Anurag; Albash, Tameem; Lidar, Daniel
2015-03-01
While quantum annealing can take advantage of the intrinsic robustness of adiabatic dynamics, some form of quantum error correction (QEC) is necessary in order to preserve its advantages over classical computation. Moreover, realistic quantum annealers are subject to a restricted connectivity between qubits. Minor embedding techniques use several physical qubits to represent a single logical qubit with a larger set of interactions, but necessarily introduce new types of errors (whenever the physical qubits corresponding to the same logical qubit disagree). We present a QEC scheme where a minor embedding is used to generate an 8 × 8 × 2 cubic connectivity out of the native one and perform experiments on a D-Wave quantum annealer. Using a combination of optimized encoding and decoding techniques, our scheme enables the D-Wave device to solve minor embedded hard instances at least as well as it would on a native implementation. Our work is a proof-of-concept that minor embedding can be advantageously implemented in order to increase both the robustness and the connectivity of a programmable quantum annealer. Applied in conjunction with decoding techniques, this paves the way toward scalable quantum annealing with applications to hard optimization problems.
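For intuition, the decoding side of such a scheme can be sketched in a few lines: each logical qubit is represented by a group of physical qubits, and a returned sample with disagreeing ("broken") groups is repaired by majority vote before the logical Ising energy is evaluated. The groups, fields and couplings below are a toy instance, and the paper's optimized encoding/decoding strategies go beyond this plain majority rule.

```python
import numpy as np

def majority_decode(sample, groups):
    """Map a vector of physical spins (+/-1) to logical spins by majority vote
    within each group of physical qubits representing one logical qubit."""
    return {q: int(np.sign(sum(sample[i] for i in phys)) or 1)   # ties default to +1
            for q, phys in groups.items()}

def logical_energy(logical, h, J):
    """Ising energy of the decoded logical configuration."""
    e = sum(h[q] * s for q, s in logical.items())
    e += sum(Jij * logical[a] * logical[b] for (a, b), Jij in J.items())
    return e

# Toy instance: 3 logical qubits, each represented by 3 physical qubits.
groups = {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8]}
h = {0: 0.0, 1: 0.0, 2: 0.1}
J = {(0, 1): -1.0, (1, 2): -1.0}          # ferromagnetic logical couplings

# A returned physical sample in which logical qubit 1 is 'broken' (2 vs 1 votes).
sample = [+1, +1, +1,  +1, -1, +1,  -1, -1, -1]
logical = majority_decode(sample, groups)
print(logical, logical_energy(logical, h, J))
```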
NASA Astrophysics Data System (ADS)
Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.
2018-04-01
The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus
2017-01-01
During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant effort from lab technicians while only a small fraction of samples can be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) already known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability (Speers et al., J. Inst. Brewing 2003;109(3):229-235; Zhang et al., J. Inst. Brewing 2012;118(4):361-367). The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) that overcome high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear outperformance in most cases and being able to meet the model quality requirements defined by the experts at the beer company. (Figure: workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.)
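A minimal chemometric baseline in the same spirit can be put together from standard tools: a linear PLS model and, as a simple stand-in for the ε/ν-PLSSVR variants, an RBF support vector regressor trained on the PLS latent scores. The synthetic "spectra", the quality parameter and all hyperparameters below are assumptions for illustration; only the scikit-learn estimators are real.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for FT-MIR spectra: 300 samples x 500 wavenumbers, where a
# hypothetical quality parameter (e.g. bitterness) depends nonlinearly on two
# latent spectral features plus noise.
n, p = 300, 500
latent = rng.standard_normal((n, 2))
loadings = rng.standard_normal((2, p))
X = latent @ loadings + 0.05 * rng.standard_normal((n, p))
y = 10 + 3 * latent[:, 0] + 2 * np.tanh(2 * latent[:, 1]) + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Linear baseline: plain PLS regression on the raw spectra.
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("PLS      R2:", round(r2_score(y_te, pls.predict(X_te).ravel()), 3))

# Nonlinear variant: SVR with an RBF kernel on the PLS latent scores,
# a simple proxy for the PLS+SVR combinations discussed above.
T_tr, T_te = pls.transform(X_tr), pls.transform(X_te)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(T_tr, y_tr)
print("PLS->SVR R2:", round(r2_score(y_te, svr.predict(T_te)), 3))
```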
Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process
NASA Astrophysics Data System (ADS)
Sheng, Y.; Yin, J.; Yao, H.
2014-12-01
Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, the high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the methods of MUSIC, CS, minimum-variance distortionless response (MVDR) Beamforming and conventional Beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space but with completely overlapping waveforms in the time domain. We also test the effects of the sliding window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding window scheme. Based on our tests, we find that CS, which is developed from the theory of sparsity inversion, has higher spatial resolution than the other frequency-domain methods and has better performance at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR Beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts along with the sliding of the time window. Furthermore, we propose a new method, which combines both the time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake in order to better capture the rupture characteristics (e.g., rupture area and velocity) of this earthquake.
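The core of the MUSIC estimator used in such comparisons is short enough to sketch: form the narrowband sample covariance of the array snapshots, split off the noise subspace by eigendecomposition, and scan candidate directions for steering vectors nearly orthogonal to it. The uniform linear array, source directions and noise level below are illustrative; rupture imaging with real seismic arrays involves considerably more preprocessing.

```python
import numpy as np

rng = np.random.default_rng(2)

n_sensors, n_snapshots = 12, 200
true_doas = np.deg2rad([-20.0, 25.0])

def steering(theta):
    """Narrowband steering vector of a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))

# Two sources whose waveforms overlap completely in time, observed in additive noise.
A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
N = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
           + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = A @ S + N

# MUSIC: eigendecompose the sample covariance and keep the noise subspace;
# the pseudospectrum peaks where a steering vector is orthogonal to it.
R = X @ X.conj().T / n_snapshots
_, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
En = eigvecs[:, : n_sensors - 2]          # noise subspace (two sources assumed known)

scan = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = np.array([1.0 / np.real(steering(th).conj() @ En @ En.conj().T @ steering(th))
                   for th in scan])

# Report the two largest local maxima of the pseudospectrum.
peaks = [i for i in range(1, len(pseudo) - 1)
         if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
best = sorted(peaks, key=lambda i: pseudo[i])[-2:]
print("estimated DOAs (deg):", sorted(round(float(np.rad2deg(scan[i])), 2) for i in best))
```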
Two Student Self-Management Techniques Applied to Data-Based Program Modification.
ERIC Educational Resources Information Center
Wesson, Caren
Two student self-management techniques, student charting and student selection of instructional activities, were applied to ongoing data-based program modification. Forty-two elementary school resource room students were assigned randomly (within teacher) to one of three treatment conditions: Teacher Chart-Teacher Select Instructional Activities…
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, R.C.; Goldberg, K.Y.; Wallack, A.S.; Canny, J.
1996-08-13
A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vice is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 27 figs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, Randolph C.; Goldberg, Kenneth Y.; Canny, John; Wallack, Aaron S.
1999-01-01
Methods and apparatus are provided for developing a complete set of all admissible Type I and Type II fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, Randolph C.; Goldberg, Kenneth Y.; Wallack, Aaron S.; Canny, John
1996-01-01
A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vice is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, R.C.; Goldberg, K.Y.; Canny, J.; Wallack, A.S.
1999-01-05
Methods and apparatus are provided for developing a complete set of all admissible Type 1 and Type 2 fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 44 figs.
Mitochondrial Replacement: Ethics and Identity
Wilkinson, Stephen; Appleby, John B.
2015-01-01
Mitochondrial replacement techniques (MRTs) have the potential to allow prospective parents who are at risk of passing on debilitating or even life-threatening mitochondrial disorders to have healthy children to whom they are genetically related. Ethical concerns have however been raised about these techniques. This article focuses on one aspect of the ethical debate, the question of whether there is any moral difference between the two types of MRT proposed: Pronuclear Transfer (PNT) and Maternal Spindle Transfer (MST). It examines how questions of identity impact on the ethical evaluation of each technique and argues that there is an important difference between the two. PNT, it is argued, is a form of therapy based on embryo modification while MST is, instead, an instance of selective reproduction. The article's main ethical conclusion is that, in some circumstances, there is a stronger obligation to use PNT than MST. PMID:26481204
Sung, Jiun-Yu; Chow, Chi-Wai; Yeh, Chien-Hung
2014-04-07
Visible light communication (VLC) using LEDs has attracted significant attention recently for future secure, license-free and electromagnetic-interference (EMI)-free optical wireless communication. Dimming techniques in LED lamps are advantageous for energy efficiency, and color control can be performed in red-green-blue (RGB) LEDs by dimming. It is therefore highly desirable to employ dimming to provide simultaneous color and dimming control together with high-speed VLC. Here, we propose and demonstrate an LED dimming control scheme using dimming-discrete-multi-tone (DMT) modulation. High-speed DMT-based VLC with simultaneous color and dimming control is demonstrated for the first time to the best of our knowledge. Demonstrations and analyses for several modulation conditions and transmission distances are performed; for instance, a data rate of 103.5 Mb/s is demonstrated (using an RGB LED) with a fast Fourier transform (FFT) size of 512.
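The modulation format itself is easy to illustrate: DMT for an intensity-modulated LED is OFDM with Hermitian-symmetric subcarrier loading, so the IFFT output is real-valued, plus a DC bias that sets the average drive level and hence the dimming. The sketch below uses the FFT size of 512 mentioned above, but the QPSK loading, bias and scaling are assumptions rather than the authors' experimental settings.

```python
import numpy as np

rng = np.random.default_rng(3)

N_FFT = 512                    # FFT size used in the demonstration above
N_DATA = N_FFT // 2 - 1        # independent data subcarriers (Hermitian symmetry)

def qpsk_symbols(n):
    """Random QPSK symbols (one of the simpler constellations DMT can carry)."""
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def dmt_frame(dc_bias=0.5):
    """One real-valued DMT frame: load subcarriers 1..N/2-1, mirror them as
    complex conjugates on N/2+1..N-1, IFFT, then add the DC bias that sets the
    average LED drive level (i.e. the dimming level)."""
    X = np.zeros(N_FFT, dtype=complex)
    data = qpsk_symbols(N_DATA)
    X[1:N_FFT // 2] = data
    X[N_FFT // 2 + 1:] = np.conj(data[::-1])        # Hermitian symmetry
    x = np.fft.ifft(X).real * np.sqrt(N_FFT)        # real time-domain waveform
    # (a real LED driver would clip or scale to keep the drive non-negative)
    return dc_bias + 0.2 * x, data

frame, tx_data = dmt_frame()
print("waveform is real:", np.isrealobj(frame),
      "| mean drive (dimming) level:", round(float(frame.mean()), 3))

# Receiver side, ideal channel: remove the bias, undo the 0.2 modulation-depth
# scaling used in dmt_frame, FFT, and read back the subcarriers.
rx = np.fft.fft((frame - frame.mean()) / 0.2) / np.sqrt(N_FFT)
print("max symbol error:", float(np.max(np.abs(rx[1:N_FFT // 2] - tx_data))))
```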
Bubble-based acoustic radiation force using chirp insonation to reduce standing wave effects.
Erpelding, Todd N; Hollman, Kyle W; O'Donnell, Matthew
2007-02-01
Bubble-based acoustic radiation force can measure local viscoelastic properties of tissue. High intensity acoustic waves applied to laser-generated bubbles induce displacements inversely proportional to local Young's modulus. In certain instances, long pulse durations are desirable but are susceptible to standing wave artifacts, which corrupt displacement measurements. Chirp pulse acoustic radiation force was investigated as a method to reduce standing wave artifacts. Chirp pulses with linear frequency sweep magnitudes of 100, 200 and 300 kHz centered around 1.5 MHz were applied to glass beads within gelatin phantoms and laser-generated bubbles within porcine lenses. The ultrasound transducer was translated axially to vary standing wave conditions, while comparing displacements using chirp pulses and 1.5 MHz tone burst pulses of the same duration and peak rarefactional pressure. Results demonstrated significant reduction in standing wave effects using chirp pulses, with displacement proportional to acoustic intensity and bubble size.
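The excitation itself is straightforward to reproduce: a linear frequency sweep of the stated magnitude centered on 1.5 MHz, windowed to the pulse duration. The sampling rate, duration and Hann taper below are illustrative choices, not the study's transducer settings.

```python
import numpy as np
from scipy.signal import chirp

fs = 50e6                      # sampling rate, Hz (illustrative)
duration = 40e-6               # pulse duration, s (illustrative)
f_center = 1.5e6               # center frequency from the study above
sweep = 300e3                  # linear sweep magnitude (100-300 kHz were tested)

t = np.arange(0, duration, 1 / fs)
f0, f1 = f_center - sweep / 2, f_center + sweep / 2

# Linear chirp centered on 1.5 MHz; a window limits spectral splatter
# (a plain Hann taper is used here).
pulse = chirp(t, f0=f0, t1=duration, f1=f1, method="linear") * np.hanning(t.size)

# The tone burst it is compared against: constant 1.5 MHz, same duration.
tone = np.sin(2 * np.pi * f_center * t) * np.hanning(t.size)

# The instantaneous frequency moves across the sweep, so reflections do not stay
# phase-locked with the incident wave, which is how standing-wave artifacts are
# reduced relative to the fixed-frequency tone burst.
inst_freq = f0 + (f1 - f0) * t / duration
print(f"sweep spans {inst_freq.min()/1e6:.3f} - {inst_freq.max()/1e6:.3f} MHz; "
      f"chirp and tone burst both have {tone.size} samples")
```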
Bubble-Based Acoustic Radiation Force Using Chirp Insonation to Reduce Standing Wave Effects
Erpelding, Todd N.; Hollman, Kyle W.; O’Donnell, Matthew
2007-01-01
Bubble-based acoustic radiation force can measure local viscoelastic properties of tissue. High intensity acoustic waves applied to laser-generated bubbles induce displacements inversely proportional to local Young’s modulus. In certain instances, long pulse durations are desirable but are susceptible to standing wave artifacts, which corrupt displacement measurements. Chirp pulse acoustic radiation force was investigated as a method to reduce standing wave artifacts. Chirp pulses with linear frequency sweep magnitudes of 100, 200, and 300 kHz centered around 1.5 MHz were applied to glass beads within gelatin phantoms and laser-generated bubbles within porcine lenses. The ultrasound transducer was translated axially to vary standing wave conditions, while comparing displacements using chirp pulses and 1.5 MHz tone burst pulses of the same duration and peak rarefactional pressure. Results demonstrated significant reduction in standing wave effects using chirp pulses, with displacement proportional to acoustic intensity and bubble size. PMID:17306697
Charleston, M A
1995-01-01
This article introduces a coherent language base for describing and working with characteristics of combinatorial optimization problems, which is at once general enough to be used in all such problems and precise enough to allow subtle concepts in this field to be discussed unambiguously. An example is provided of how this nomenclature is applied to an instance of the phylogeny problem. Also noted is the beneficial effect, on the landscape of the solution space, of transforming the observed data to account for multiple changes of character state.
Future of Assurance: Ensuring that a System is Trustworthy
NASA Astrophysics Data System (ADS)
Sadeghi, Ahmad-Reza; Verbauwhede, Ingrid; Vishik, Claire
Significant efforts are put into defining and implementing strong security measures for all components of the computing environment. It is equally important to be able to evaluate the strength and robustness of these measures and establish trust among the components of the computing environment based on parameters and attributes of these elements and best practices associated with their production and deployment. Today the inventory of techniques used for security assurance and to establish trust -- audit, security-conscious development processes, cryptographic components, external evaluation -- is somewhat limited. These methods have their indisputable strengths and have contributed significantly to the advancement of the area of security assurance. However, shorter product and technology development cycles and the sheer complexity of modern digital systems and processes have begun to decrease the efficiency of these techniques. Moreover, these approaches and technologies address only some aspects of security assurance and, for the most part, evaluate assurance in a general design rather than an instance of a product. Additionally, various components of the computing environment participating in the same processes enjoy different levels of security assurance, making it difficult to ensure adequate levels of protection end-to-end. Finally, most evaluation methodologies rely on the knowledge and skill of the evaluators, making reliable assessments of the trustworthiness of a system even harder to achieve. The paper outlines some issues in security assurance that apply across the board, with a focus on the trustworthiness and authenticity of hardware components, and evaluates current approaches to assurance.
Isentropic Analysis of a Simulated Hurricane
NASA Technical Reports Server (NTRS)
Mrowiec, Agnieszka A.; Pauluis, Olivier; Zhang, Fuqing
2016-01-01
Hurricanes, like many other atmospheric flows, are associated with turbulent motions over a wide range of scales. Here the authors adapt a new technique based on the isentropic analysis of convective motions to study the thermodynamic structure of the overturning circulation in hurricane simulations. This approach separates the vertical mass transport in terms of the equivalent potential temperature of air parcels. In doing so, one separates the rising air parcels at high entropy from the subsiding air at low entropy. This technique filters out oscillatory motions associated with gravity waves and separates convective overturning from the secondary circulation. This approach is applied here to study the flow of an idealized hurricane simulation with the Weather Research and Forecasting (WRF) Model. The isentropic circulation for a hurricane exhibits similar characteristics to that of moist convection, with a maximum mass transport near the surface associated with shallow convection and entrainment. There are also important differences. For instance, ascent in the eyewall can be readily identified in the isentropic analysis as an upward mass flux of air with unusually high equivalent potential temperature. The isentropic circulation is further compared here to the Eulerian secondary circulation of the simulated hurricane to show that the mass transport in the isentropic circulation is much larger than that in the secondary circulation. This difference can be directly attributed to the mass transport by convection in the outer rainband and confirms that, even for a strongly organized flow like a hurricane, most of the atmospheric overturning is tied to the smaller scales.
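Conceptually, the isentropic analysis amounts to binning the vertical mass flux by equivalent potential temperature instead of by horizontal position, so that ascending high-entropy air and subsiding low-entropy air fall into different bins. The toy parcel population below is invented purely to show that bookkeeping; it is not WRF output.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "model output" at one height level: vertical velocity w, density rho and
# equivalent potential temperature theta_e for a population of air parcels.
n = 20000
updraft = rng.random(n) < 0.15                        # few strong, warm updrafts
w = np.where(updraft, rng.gamma(2.0, 2.0, n), -0.7 * rng.random(n))    # m/s
theta_e = np.where(updraft, 350 + 5 * rng.standard_normal(n),
                   335 + 5 * rng.standard_normal(n))                   # K
rho = 1.1 + 0.01 * rng.standard_normal(n)                              # kg/m^3

# Isentropic mass flux: bin rho*w by theta_e, so the upward (high theta_e) and
# downward (low theta_e) branches of the overturning separate cleanly.
bins = np.arange(300, 392, 2.0)
flux, _ = np.histogram(theta_e, bins=bins, weights=rho * w / n)

up = flux[flux > 0].sum()
down = flux[flux < 0].sum()
print(f"upward branch: {up:+.3f}, downward branch: {down:+.3f} kg m^-2 s^-1")
print("net flux (matches the plain area average of rho*w):",
      round(float(flux.sum()), 3), "vs", round(float(np.mean(rho * w)), 3))
```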
NASA Astrophysics Data System (ADS)
McKane, Alan
2003-12-01
This is a book about the modelling of complex systems and, unlike many books on this subject, concentrates on the discussion of specific systems and gives practical methods for modelling and simulating them. This is not to say that the author does not devote space to the general philosophy and definition of complex systems and agent-based modelling, but the emphasis is definitely on the development of concrete methods for analysing them. This is, in my view, to be welcomed and I thoroughly recommend the book, especially to those with a theoretical physics background who will be very much at home with the language and techniques which are used. The author has developed a formalism for understanding complex systems which is based on the Langevin approach to the study of Brownian motion. This is a mesoscopic description; details of the interactions between the Brownian particle and the molecules of the surrounding fluid are replaced by a randomly fluctuating force. Thus all microscopic detail is replaced by a coarse-grained description which encapsulates the essence of the interactions at the finer level of description. In a similar way, the influences on Brownian agents in a multi-agent system are replaced by stochastic influences which sum up the effects of these interactions on a finer scale. Unlike Brownian particles, Brownian agents are not structureless particles, but instead have some internal states so that, for instance, they may react to changes in the environment or to the presence of other agents. Most of the book is concerned with developing the idea of Brownian agents using the techniques of statistical physics. This development parallels that for Brownian particles in physics, but the author then goes on to apply the technique to problems in biology, economics and the social sciences. This is a clear and well-written book which is a useful addition to the literature on complex systems. It will be interesting to see if the use of Brownian agents becomes a standard tool in the study of complex systems in the future.
The influence of negative training set size on machine learning-based virtual screening.
Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J
2014-01-01
The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
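The core experimental loop, keeping the positives fixed while growing the randomly drawn negative set and tracking precision, recall and MCC, can be sketched with scikit-learn; a random forest stands in for the classifiers listed above, and the synthetic "fingerprints" are placeholders rather than ZINC compounds.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, matthews_corrcoef

rng = np.random.default_rng(5)

def make_split(n_pos, n_neg, n_bits=166):
    """Synthetic binary 'fingerprints': actives and inactives differ in the
    on-bit probability of a subset of bits (a crude stand-in for MACCS keys)."""
    p_pos = np.full(n_bits, 0.15); p_pos[:30] = 0.3
    p_neg = np.full(n_bits, 0.15)
    X = np.vstack([rng.random((n_pos, n_bits)) < p_pos,
                   rng.random((n_neg, n_bits)) < p_neg]).astype(int)
    y = np.array([1] * n_pos + [0] * n_neg)
    return X, y

X_test, y_test = make_split(200, 2000)          # fixed, imbalanced test set

print("neg:pos ratio   precision   recall   MCC")
for ratio in (1, 5, 20, 50):
    X_tr, y_tr = make_split(100, 100 * ratio)   # fixed positives, growing negatives
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    y_hat = clf.predict(X_test)
    print(f"{ratio:>13}   {precision_score(y_test, y_hat):9.3f}"
          f"   {recall_score(y_test, y_hat):6.3f}"
          f"   {matthews_corrcoef(y_test, y_hat):5.3f}")
```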
The influence of negative training set size on machine learning-based virtual screening
2014-01-01
Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867
Patrick, Matthew R.; Kauahikaua, James P.; Antolik, Loren
2010-01-01
Webcams are now standard tools for volcano monitoring and are used at observatories in Alaska, the Cascades, Kamchatka, Hawai'i, Italy, and Japan, among other locations. Webcam images allow invaluable documentation of activity and provide a powerful comparative tool for interpreting other monitoring datastreams, such as seismicity and deformation. Automated image processing can improve the time efficiency and rigor of Webcam image interpretation, and potentially extract more information on eruptive activity. For instance, Lovick and others (2008) provided a suite of processing tools that performed such tasks as noise reduction, eliminating uninteresting images from an image collection, and detecting incandescence, with an application to dome activity at Mount St. Helens during 2007. In this paper, we present two very simple automated approaches for improved characterization and quantification of volcanic incandescence in Webcam images at Kilauea Volcano, Hawai`i. The techniques are implemented in MATLAB (version 2009b, Copyright: The Mathworks, Inc.) to take advantage of the ease of matrix operations. Incandescence is a useful indicator of the location and extent of active lava flows and also a potentially powerful proxy for activity levels at open vents. We apply our techniques to a period covering both summit and east rift zone activity at Kilauea during 2008-2009 and compare the results to complementary datasets (seismicity, tilt) to demonstrate their integrative potential. A great strength of this study is the demonstrated success of these tools in an operational setting at the Hawaiian Volcano Observatory (HVO) over the course of more than a year. Although applied only to Webcam images here, the techniques could be applied to any type of sequential images, such as time-lapse photography. We expect that these tools are applicable to many other volcano monitoring scenarios, and the two MATLAB scripts, as they are implemented at HVO, are included in the appendixes. These scripts would require minor to moderate modifications for use elsewhere, primarily to customize directory navigation. If the user has some familiarity with MATLAB, or programming in general, these modifications should be easy. Although we originally anticipated needing the Image Processing Toolbox, the scripts in the appendixes do not require it. Thus, only the base installation of MATLAB is needed. Because fairly basic MATLAB functions are used, we expect that the script can be run successfully by versions earlier than 2009b.
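A stripped-down version of such an incandescence index is shown below in Python/NumPy rather than the MATLAB of the appendixes: count the pixels whose red channel is bright and far above blue in a night-time frame, and track that fraction through time. The thresholds and synthetic frames are assumptions for illustration; this is not the code from the appendixes.

```python
import numpy as np

rng = np.random.default_rng(6)

def incandescent_fraction(rgb, red_min=180, red_blue_margin=60):
    """Fraction of pixels flagged as incandescent: bright in red and with red
    far above blue, a crude signature of glowing lava in night-time frames."""
    r = rgb[..., 0].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r > red_min) & (r - b > red_blue_margin)
    return mask.mean()

def synthetic_frame(glow_pixels):
    """Dark synthetic webcam frame (240x320 RGB) with scattered red glow pixels."""
    frame = rng.integers(0, 40, size=(240, 320, 3), dtype=np.uint8)
    ys, xs = np.unravel_index(rng.choice(240 * 320, glow_pixels, replace=False),
                              (240, 320))
    frame[ys, xs, 0] = 255                      # saturated red channel
    frame[ys, xs, 2] = 20                       # little blue -> incandescent colour
    return frame

# A short synthetic time-lapse in which the glowing area grows, as it might when
# a lava flow expands; the resulting time series is what would be compared with
# seismicity or tilt records.
series = [incandescent_fraction(synthetic_frame(n)) for n in (50, 200, 800, 3200)]
print([round(v, 4) for v in series])
```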
Discrete-time modelling of musical instruments
NASA Astrophysics Data System (ADS)
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
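As a runnable taste of the digital waveguide family, the classic Karplus-Strong plucked string below circulates a noise burst through a delay line whose length sets the pitch, with a two-point average acting as the loss filter. It is a minimal illustration of the waveguide idea, not one of the article's specific instrument models, and all parameter values are arbitrary.

```python
import numpy as np

def plucked_string(f0=220.0, fs=44100, seconds=1.0, damping=0.996):
    """Minimal Karplus-Strong string: a noise burst circulates in a delay line
    whose length sets the pitch; the two-point average acts as the loss filter."""
    n = int(fs * seconds)
    delay = int(round(fs / f0))                              # ~ one period
    line = np.random.default_rng(7).uniform(-1, 1, delay)    # the 'pluck'
    out = np.empty(n)
    for i in range(n):
        out[i] = line[i % delay]
        nxt = damping * 0.5 * (line[i % delay] + line[(i + 1) % delay])
        line[i % delay] = nxt                                # feed back the filtered sample
    return out

y = plucked_string()
# Estimate the sounding period from the (FFT-based) autocorrelation peak.
ac = np.fft.irfft(np.abs(np.fft.rfft(y, 2 * y.size)) ** 2)
lag = 50 + int(np.argmax(ac[50:400]))
print("delay-line length:", int(round(44100 / 220.0)), "samples;",
      "measured period:", lag, "samples")
```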
Integrating reasoning and clinical archetypes using OWL ontologies and SWRL rules.
Lezcano, Leonardo; Sicilia, Miguel-Angel; Rodríguez-Solano, Carlos
2011-04-01
Semantic interoperability is essential to facilitate the computerized support for alerts, workflow management and evidence-based healthcare across heterogeneous electronic health record (EHR) systems. Clinical archetypes, which are formal definitions of specific clinical concepts defined as specializations of a generic reference (information) model, provide a mechanism to express data structures in a shared and interoperable way. However, currently available archetype languages do not provide direct support for mapping to formal ontologies and then exploiting reasoning on clinical knowledge, which are key ingredients of full semantic interoperability, as stated in the SemanticHEALTH report [1]. This paper reports on an approach to translate definitions expressed in the openEHR Archetype Definition Language (ADL) to a formal representation expressed using the Ontology Web Language (OWL). The formal representations are then integrated with rules expressed with Semantic Web Rule Language (SWRL) expressions, providing an approach to apply the SWRL rules to concrete instances of clinical data. Sharing the knowledge expressed in the form of rules is consistent with the philosophy of open sharing, encouraged by archetypes. Our approach also allows the reuse of formal knowledge, expressed through ontologies, and extends reuse to propositions of declarative knowledge, such as those encoded in clinical guidelines. This paper describes the ADL-to-OWL translation approach, describes the techniques to map archetypes to formal ontologies, and demonstrates how rules can be applied to the resulting representation. We provide examples taken from a patient safety alerting system to illustrate our approach. Copyright © 2010 Elsevier Inc. All rights reserved.
The Convergence of Intelligences
NASA Astrophysics Data System (ADS)
Diederich, Joachim
Minsky (1985) argued that an extraterrestrial intelligence may be similar to ours despite very different origins. ``Problem-solving'' offers evolutionary advantages and individuals who are part of a technical civilisation should have this capacity. On earth, the principles of problem-solving are the same for humans, some primates and machines based on Artificial Intelligence (AI) techniques. Intelligent systems use ``goals'' and ``sub-goals'' for problem-solving, with memories and representations of ``objects'' and ``sub-objects'' as well as knowledge of relations such as ``cause'' or ``difference.'' Some of these objects are generic and cannot easily be divided into parts. We must, therefore, assume that these objects and relations are universal, and a general property of intelligence. Minsky's arguments from 1985 are extended here. The last decade has seen the development of a general learning theory (``computational learning theory'' (CLT) or ``statistical learning theory'') which equally applies to humans, animals and machines. It is argued that basic learning laws will also apply to an evolved alien intelligence, and this includes limitations of what can be learned efficiently. An example from CLT is that the general learning problem for neural networks is intractable, i.e. it cannot be solved efficiently for all instances (it is ``NP-complete''). It is the objective of this paper to show that evolved intelligences will be constrained by general learning laws and will use task-decomposition for problem-solving. Since learning and problem-solving are core features of intelligence, it can be said that intelligences converge despite very different origins.
Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.
Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni
2016-01-01
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation could not be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring performing precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating features depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and it is focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of this proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
NASA Astrophysics Data System (ADS)
Samberg, Andre; Babichenko, Sergei; Poryvkina, Larisa
2005-05-01
The delay between the time when a natural disaster occurs, for example, an oil accident in coastal waters, and the time when environmental protection actions start, for example, water and shoreline clean-up, is of significant importance. Remote sensing techniques are mostly considered (near) real-time and suitable for multiple tasks. These techniques, in combination with rapid environmental assessment methodologies, would form a multi-tier environmental assessment model, which allows creating (near) real-time datasets and optimizing sampling scenarios. This paper presents the idea of a three-tier environmental assessment model. All three tiers are briefly described to show the linkages between them, with a particular focus on the first tier. Furthermore, it is described how large-scale environmental assessment can be improved by using an airborne 3-D scanning FLS-AM series hyperspectral lidar. This new aircraft-based sensor is typically applied for oil mapping on the sea/ground surface and for extracting optical features of subjects. In general, a sampling network based on the three-tier environmental assessment model can include ship(s) and aircraft(s). The airborne 3-D scanning FLS-AM series hyperspectral lidar helps to speed up significantly the whole process of assessing the area of a natural disaster, because it is a real-time remote sensing means. For instance, it can deliver such information as the georeferenced oil spill position in WGS-84, the estimated size of the whole oil spill, and the estimated amount of oil in seawater or on the ground. All information is produced in digital form and can thus be directly transferred into a customer's GIS (Geographical Information System).
Language Mapping with Navigated Repetitive TMS: Proof of Technique and Validation
Tarapore, Phiroz E.; Findlay, Anne M.; Honma, Susanne M.; Mizuiri, Danielle; Houde, John F.; Berger, Mitchel S.; Nagarajan, Srikantan S.
2013-01-01
Objective Lesion-based mapping of speech pathways has been possible only during invasive neurosurgical procedures using direct cortical stimulation (DCS). However, navigated transcranial magnetic stimulation (nTMS) may allow for lesion-based interrogation of language pathways noninvasively. Although not lesion-based, magnetoencephalographic imaging (MEGI) is another noninvasive modality for language mapping. In this study, we compare the accuracy of nTMS and MEGI with DCS. Methods Subjects with lesions around cortical language areas underwent preoperative nTMS and MEGI for language mapping. nTMS maps were generated using a repetitive TMS protocol to deliver trains of stimulations during a picture naming task. MEGI activation maps were derived from adaptive spatial filtering of beta-band power decreases prior to overt speech during picture naming and verb generation tasks. The subjects subsequently underwent awake language mapping via intraoperative DCS. The language maps obtained from each of the 3 modalities were recorded and compared. Results nTMS and MEGI were performed on 12 subjects. nTMS yielded 21 positive language disruption sites (11 speech arrest, 5 anomia, and 5 other) while DCS yielded 10 positive sites (2 speech arrest, 5 anomia, and 3 other). MEGI isolated 32 sites of peak activation with language tasks. Positive language sites were most commonly found in the pars opercularis for all three modalities. In 9 instances the positive DCS site corresponded to a positive nTMS site, while in 1 instance it did not. In 4 instances, a positive nTMS site corresponded to a negative DCS site, while 169 instances of negative nTMS and DCS were recorded. The sensitivity of nTMS was therefore 90%, specificity was 98%, the positive predictive value was 69% and the negative predictive value was 99% as compared with intraoperative DCS. MEGI language sites for verb generation and object naming correlated with nTMS sites in 5 subjects, and with DCS sites in 2 subjects. Conclusion Maps of language function generated with nTMS correlate well with those generated by DCS. Negative nTMS mapping also correlates with negative DCS mapping. In our study, MEGI lacks the same level of correlation with intraoperative mapping; nevertheless it provides useful adjunct information in some cases. nTMS may offer a lesion-based method for noninvasively interrogating language pathways and be valuable in managing patients with peri-eloquent lesions. PMID:23702420
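The reported accuracy figures follow directly from the 2x2 counts given above (9 true positives, 1 false negative, 4 false positives, 169 true negatives); a few lines of arithmetic reproduce them.

```python
# Counts from the comparison of nTMS against intraoperative DCS reported above.
tp, fn, fp, tn = 9, 1, 4, 169

sensitivity = tp / (tp + fn)          # 9/10    = 0.90
specificity = tn / (tn + fp)          # 169/173 ~ 0.977
ppv = tp / (tp + fp)                  # 9/13    ~ 0.692
npv = tn / (tn + fn)                  # 169/170 ~ 0.994

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")   # 90%, 98%, 69%, 99%
```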
Building Diversified Multiple Trees for classification in high dimensional noisy biomedical data.
Li, Jiuyong; Liu, Lin; Liu, Jixue; Green, Ryan
2017-12-01
It is common that a trained classification model is applied to the operating data that is deviated from the training data because of noise. This paper will test an ensemble method, Diversified Multiple Tree (DMT), on its capability for classifying instances in a new laboratory using the classifier built on the instances of another laboratory. DMT is tested on three real world biomedical data sets from different laboratories in comparison with four benchmark ensemble methods, AdaBoost, Bagging, Random Forests, and Random Trees. Experiments have also been conducted on studying the limitation of DMT and its possible variations. Experimental results show that DMT is significantly more accurate than other benchmark ensemble classifiers on classifying new instances of a different laboratory from the laboratory where instances are used to build the classifier. This paper demonstrates that an ensemble classifier, DMT, is more robust in classifying noisy data than other widely used ensemble methods. DMT works on the data set that supports multiple simple trees.
Bharti, Gaurav; Groves, Leslie; Sanger, Claire; Thompson, James; David, Lisa; Marks, Malcolm
2013-05-01
Transverse rectus abdominus muscle flaps (TRAM) can result in significant abdominal wall donor-site morbidity. We present our experience with bilateral pedicle TRAM breast reconstruction using a double-layered polypropylene mesh fold over technique to repair the rectus fascia. A retrospective study was performed that included patients with bilateral pedicle TRAM breast reconstruction and abdominal reconstruction using a double-layered polypropylene mesh fold over technique. Thirty-five patients met the study criteria with a mean age of 49 years old and mean follow-up of 7.4 years. There were no instances of abdominal hernia and only 2 cases (5.7%) of abdominal bulge. Other abdominal complications included partial umbilical necrosis (14.3%), seroma (11.4%), partial wound dehiscence (8.6%), abdominal weakness (5.7%), abdominal laxity (2.9%), and hematoma (2.9%). The TRAM flap is a reliable option for bilateral autologous breast reconstruction. Using the double mesh repair of the abdominal wall can reduce instances of an abdominal bulge and hernia.
Development of a multi-criteria evaluation system to assess growing pig welfare.
Martín, P; Traulsen, I; Buxadé, C; Krieter, J
2017-03-01
The aim of this paper was to present an alternative multi-criteria evaluation model to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. The WQ assessment protocol follows a three-step aggregation process. Measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first step of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion. The utility functions and the aggregation function were constructed in two separated steps. The Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) method was used for utility function determination and the Choquet Integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The methods were tested with generated data sets for farms of growing pigs. Using the MAUT, similar results were obtained to the ones obtained applying the WQ protocol aggregation methods. It can be concluded that due to the use of an interactive approach such as MACBETH, this alternative methodology is more transparent and more flexible than the methodology proposed by WQ, which allows the possibility to modify the model according, for instance, to new scientific knowledge.
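The aggregation operator at the heart of this first step can be illustrated compactly: a discrete Choquet integral sorts the partial utilities and weights each increment by the capacity of the coalition of criteria still at or above that level. The three measures and the capacity values below are invented for illustration and are not the fitted Welfare Quality parameters.

```python
def choquet(utilities, capacity):
    """Discrete Choquet integral: sort the partial utilities, then weight each
    increment by the capacity of the set of criteria whose utility is at least
    that level. `capacity` maps frozensets of criterion names to [0, 1]."""
    items = sorted(utilities.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(utilities)
    for name, value in items:
        total += (value - prev) * capacity[frozenset(remaining)]
        prev = value
        remaining.remove(name)
    return total

# Toy capacity over three welfare measures, allowing interactions between
# criteria (illustrative numbers only).
criteria = ["lameness", "wounds", "bursitis"]
capacity = {frozenset(): 0.0, frozenset(criteria): 1.0,
            frozenset(["lameness"]): 0.5, frozenset(["wounds"]): 0.3,
            frozenset(["bursitis"]): 0.3,
            frozenset(["lameness", "wounds"]): 0.7,
            frozenset(["lameness", "bursitis"]): 0.8,
            frozenset(["wounds", "bursitis"]): 0.5}

scores = {"lameness": 40.0, "wounds": 80.0, "bursitis": 65.0}
print("aggregated criterion score:", choquet(scores, capacity))   # 57.0
```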
NASA Astrophysics Data System (ADS)
Ghamarian, Iman
Nanocrystalline metallic materials have the potential to exhibit outstanding performance which leads to their usage in challenging applications such as coatings and biomedical implant devices. To optimize the performance of nanocrystalline metallic materials according to the desired applications, it is important to have a decent understanding of the structure, processing and properties of these materials. Various efforts have been made to correlate microstructure and properties of nanocrystalline metallic materials. Based on these research activities, it is noticed that microstructure and defects (e.g., dislocations and grain boundaries) play a key role in the behavior of these materials. Therefore, it is of great importance to establish methods to quantitatively study microstructures, defects and their interactions in nanocrystalline metallic materials. Since the mechanisms controlling the properties of nanocrystalline metallic materials occur at a very small length scale, it is fairly difficult to study them. Unfortunately, most of the characterization techniques used to explore these materials do not have the high enough spatial resolution required for the characterization of these materials. For instance, by applying complex profile-fitting algorithms to X-ray diffraction patterns, it is possible to get an estimation of the average grain size and the average dislocation density within a relatively large area. However, these average values are not enough for developing meticulous phenomenological models which are able to correlate microstructure and properties of nanocrystalline metallic materials. As another example, electron backscatter diffraction technique also cannot be used widely in the characterization of these materials due to problems such as relative poor spatial resolution (which is 90 nm) and the degradation of Kikuchi diffraction patterns in severely deformed nano-size grain metallic materials. In this study, ASTAR(TM)/precession electron diffraction is introduced as a relatively new orientation microscopy technique to characterize defects (e.g., geometrically necessary dislocations and grain boundaries) in challenging nanocrystalline metallic materials. The capability of this characterization technique to quantitatively determine the dislocation density distributions of geometrically necessary dislocations in severely deformed metallic materials is assessed. Based on the developed method, it is possible to determine the distributions and accumulations of dislocations with respect to the nearest grain boundaries and triple junctions. Also, the competency of this technique to study the grain boundary character distributions of nanocrystalline metallic materials is presented.
The applicability of dental wear in age estimation for a modern American population.
Faillace, Katie E; Bethard, Jonathan D; Marks, Murray K
2017-12-01
Though applied in bioarchaeology, dental wear is an underexplored age indicator in the biological anthropology of contemporary populations, although research has been conducted on dental attrition in forensic contexts (Kim et al., Journal of Forensic Sciences, 45, 303; Prince et al., Journal of Forensic Sciences, 53, 588; Yun et al., Journal of Forensic Sciences, 52, 678). The purpose of this study is to apply and adapt existing techniques for age estimation based on dental wear to a modern American population, with the aim of producing accurate age range estimates for individuals from an industrialized context. Methodologies following Yun and Prince were applied to a random sample from the University of New Mexico (n = 583) and Universidade de Coimbra (n = 50) cast and skeletal collections. Analysis of variance (ANOVA) and linear regression analyses were conducted to examine the relationship between tooth wear scores and age. Application of both Yun et al. () and Prince et al. () methodologies resulted in inaccurate age estimates. Recalibrated sectioning points correctly classified individuals as over or under 50 years for 88% of the sample. Linear regression demonstrated 60% of age estimates fell within ±10 years of the actual age, and accuracy improved for individuals under 45 years, with 74% of predictions within ±10 years. This study demonstrates age estimation from dental wear is possible for modern populations, with comparable age intervals to other established methods. It provides a quantifiable method of seriation into "older" and "younger" adult categories, and provides more reliable age interval estimates than cranial sutures in instances where only the skull is available. © 2017 Wiley Periodicals, Inc.
Raman-based imaging uncovers the effects of alginate hydrogel implants in spinal cord injury
NASA Astrophysics Data System (ADS)
Galli, Roberta; Tamosaityte, Sandra; Koch, Maria; Sitoci-Ficici, Kerim H.; Later, Robert; Uckermann, Ortrud; Beiermeister, Rudolf; Gelinsky, Michael; Schackert, Gabriele; Kirsch, Matthias; Koch, Edmund; Steiner, Gerald
2015-07-01
The treatment of spinal cord injury by using implants that provide a permissive environment for axonal growth is in the focus of the research for regenerative therapies. Here, Raman-based label-free techniques were applied for the characterization of morphochemical properties of surgically induced spinal cord injury in the rat that received an implant of soft unfunctionalized alginate hydrogel. Raman microspectroscopy followed by chemometrics allowed mapping the different degenerative areas, while multimodal multiphoton microscopy (e.g. the combination of coherent anti-Stokes Raman scattering (CARS), endogenous two-photon fluorescence and second harmonic generation on the same platform) enabled to address the morphochemistry of the tissue at cellular level. The regions of injury, characterized by demyelination and scarring, were retrieved and the distribution of key tissue components was evaluated by Raman mapping. The alginate hydrogel was detected in the lesion up to six months after implantation and had positive effects on the nervous tissue. For instance, multimodal multiphoton microscopy complemented the results of Raman mapping, providing the micromorphology of lipid-rich tissue structures by CARS and enabling to discern lipid-rich regions that contained myelinated axons from degenerative regions characterized by myelin fragmentation and presence of foam cells. These findings demonstrate that Raman-based imaging methods provide useful information for the evaluation of alginate implant effects and have therefore the potential to contribute to new strategies for monitoring degenerative and regenerative processes induced in SCI, thereby improving the effectiveness of therapies.
Automatic peak selection by a Benjamini-Hochberg-based algorithm.
Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin
2013-01-01
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx.
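The selection rule itself is the standard step-up Benjamini-Hochberg procedure: sort the candidate p-values, find the largest rank k whose p-value is at or below k·q/m, and keep everything ranked at or above it. In the sketch below the conversion of peak volumes or intensities to p-values is mocked with random numbers; only the B-H step is faithful.

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Step-up B-H procedure: return a boolean mask of the hypotheses kept at
    false-discovery rate q (here: which candidate peaks to accept)."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])       # largest rank meeting its threshold
        keep[order[: k + 1]] = True             # accept everything ranked above it
    return keep

rng = np.random.default_rng(8)
# Mock peak list: 40 true peaks (tiny p-values) buried in 400 noise candidates.
p_true = rng.uniform(0.0, 1e-4, 40)
p_noise = rng.uniform(0.0, 1.0, 400)
pvals = np.concatenate([p_true, p_noise])

keep = benjamini_hochberg(pvals, q=0.05)
print("peaks selected:", int(keep.sum()),
      "| true peaks kept:", int(keep[:40].sum()),
      "| noise peaks kept:", int(keep[40:].sum()))
```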
Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm
Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin
2013-01-01
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx. PMID:23308147
ERIC Educational Resources Information Center
Ward-Penny, Robert; Johnston-Wilder, Sue; Johnston-Wilder, Peter
2013-01-01
One-third of the current A-level mathematics curriculum is determined by choice, constructed out of "applied mathematics" modules in mechanics, statistics and decision mathematics. Although this choice arguably involves the most sizeable instance of choice in the current English school mathematics curriculum, and it has a significant…
Jacoby, Larry L; Wahlheim, Christopher N
2013-07-01
Suppose that you were asked which of two movies you had most recently seen. The results of the experiments reported here suggest that your answer would be more accurate if, when viewing the later movie, you were reminded of the earlier one. In the present experiments, we investigated the role of remindings in recency judgments and cued-recall performance. We did this by presenting a list composed of two instances from each of several different categories and later asking participants to select (Exp. 1) or to recall (Exp. 2) the more recently presented instance. Reminding was manipulated by varying instructions to look back over memory of earlier instances during the presentation of later instances. As compared to a control condition, cued-recall performance revealed facilitation effects when remindings occurred and were later recollected, but interference effects in their absence. The effects of reminding on recency judgments paralleled those on cued recall of more recently presented instances. We interpret these results as showing that reminding produces a recursive representation that embeds memory for an earlier-presented category instance into that of a later-presented one and, thereby, preserves their temporal order. Large individual differences in the probabilities of remindings and of their later recollection were observed. The widespread importance of recursive reminding for theory and for applied purposes is discussed.
Technologies for Assessment of Motor Disorders in Parkinson’s Disease: A Review
Oung, Qi Wei; Muthusamy, Hariharan; Lee, Hoi Leong; Basah, Shafriza Nisha; Yaacob, Sazali; Sarillee, Mohamed; Lee, Chia Hau
2015-01-01
Parkinson’s Disease (PD) is among the most common neurodegenerative illnesses and gradually degenerates the central nervous system. The goal of this review is to summarize the recent progress of the numerous forms of sensors and systems related to the diagnosis of PD over the past decades. The paper reviews the substantial research on the application of technological tools (objective techniques) in the PD field applying different types of sensors proposed by previous researchers. In addition, it also covers the use of clinical tools (subjective techniques) for PD assessment, for instance, patient self-reports, patient diaries and the international gold standard reference scale, the Unified Parkinson Disease Rating Scale (UPDRS). Comparative studies and critical descriptions of these approaches are highlighted in this paper, giving an insight into the current state of the art. This is followed by an explanation of the merits of a multiple-sensor fusion platform compared to a single-sensor platform for better monitoring of PD progression, and the review ends with thoughts about future directions, in particular the need for a multimodal sensor integration platform for the assessment of PD. PMID:26404288
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to the cluster under consideration, and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new measure for evaluating the performance of prediction interval estimation is proposed as well.
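A minimal sketch of the clustering-plus-empirical-quantiles idea (hard k-means from scikit-learn is used here as a simplification of the fuzzy c-means step, and the data are synthetic):

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_prediction_intervals(X_train, errors, X_new, n_clusters=3, alpha=0.1):
        """Assign each input to an error cluster and return empirical (lower, upper)
        prediction-interval bounds per new instance. Hard k-means stands in for the
        fuzzy c-means / membership-grade propagation described in the paper."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
        labels = km.labels_
        # empirical error quantiles within each cluster
        bounds = {c: (np.quantile(errors[labels == c], alpha / 2),
                      np.quantile(errors[labels == c], 1 - alpha / 2))
                  for c in range(n_clusters)}
        new_labels = km.predict(X_new)
        return np.array([bounds[c] for c in new_labels])

    # toy usage with synthetic, heteroscedastic model errors
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    err = rng.normal(scale=1 + np.abs(X[:, 0]))
    print(cluster_prediction_intervals(X, err, X[:5]))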
Peripheral intravenous nutrition without fat in neonatal surgery.
Coran, A G; Weintraub
1977-04-01
During a 1 yr period, 19 infants less than 2 mo of age were fed intravenously with an infusate composed of glucose, amino acids, electrolytes, and vitamins. The solution was infused at a rate of 200 ml/kg/day or more for periods ranging from 5 to 247 days. No central venous catheters were utilized; the solutions were always administered through a needle in a peripheral vein. Weight gains similar to those seen with other techniques of intravenous nutrition were observed in all of the patients studied. No instance of fluid overload in the form of pulmonary edema, peripheral edema, or congestive heart failure was seen, and osmotic diuresis was not observed because of the lower tonicity of the infusate. Phlebitis was seen in 1/5 of the infusions, but was reversed by stopping the infusion and applying warm soaks. Three cases of skin slough were observed and two of these healed spontaneously without the need for skin grafting. The advantages of this technique over central venous nutrition are the elimination of the complications related to the central venous catheter, namely, sepsis and superior vena cava thrombosis.
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze the effects of secondary functions on other secondary functions through the use of channelization.
KA-SB: from data integration to large scale reasoning
Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F
2009-01-01
Background The analysis of information in the biological domain is usually focused on the analysis of data from single on-line data sources. Unfortunately, studying a biological process requires having access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods KA-SB is a querying and analysis system for final users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high performance) reasoner (DBOWL). This information could be further analyzed later (by means of querying and reasoning). Results In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. This tool uses a graphical query interface to build user queries easily, which shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main memory-based reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPax Level 3 ontology as the integration schema, and integrates UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases. PMID:19796402
Social and content aware One-Class recommendation of papers in scientific social networks.
Wang, Gang; He, XiRan; Ishuga, Carolyne Isigi
2017-01-01
With the rapid development of information technology, scientific social networks (SSNs) have become the fastest and most convenient way for researchers to communicate with each other. Many published papers are shared via SSNs every day, resulting in the problem of information overload. How to appropriately recommend personalized and highly valuable papers for researchers is becoming more urgent. However, when recommending papers in SSNs, only a small number of positive instances are available, leaving a vast amount of unlabelled data in which negative instances and potential unseen positive instances are mixed together, which is naturally a One-Class Collaborative Filtering (OCCF) problem. Therefore, considering the extreme data imbalance and data sparsity of this OCCF problem, a hybrid approach of Social and Content aware One-class Recommendation of Papers in SSNs, termed SCORP, is proposed in this study. Unlike previous approaches proposed to address the OCCF problem, social information, which has been shown to play a significant role in recommendation in many domains, is applied in both the profiling of content-based filtering and the collaborative filtering to achieve superior recommendations. To verify the effectiveness of the proposed SCORP approach, a real-life dataset from CiteULike was employed. The experimental results demonstrate that the proposed approach is superior to all of the compared approaches, thus providing a more effective method for recommending papers in SSNs. PMID:28771495
Multi-user investigation organizer
NASA Technical Reports Server (NTRS)
Panontin, Tina L. (Inventor); Williams, James F. (Inventor); Carvalho, Robert E. (Inventor); Sturken, Ian (Inventor); Wolfe, Shawn R. (Inventor); Gawdiak, Yuri O. (Inventor); Keller, Richard M. (Inventor)
2009-01-01
A system that allows a team of geographically dispersed users to collaboratively analyze a mishap event. The system includes a reconfigurable ontology, including instances that are related to and characterize the mishap, a semantic network that receives, indexes and stores, for retrieval, viewing and editing, the instances and links between the instances, a network browser interface for retrieving and viewing screens that present the instances and links to other instances and that allow editing thereof, and a rule-based inference engine, including a collection of rules associated with establishment of links between the instances. A possible conclusion arising from analysis of the mishap event may be characterized as one or more of: not a credible conclusion; an unlikely conclusion; a credible conclusion; conclusion needs analysis; conclusion needs supporting data; conclusion proposed to be closed; and an un-reviewed conclusion.
Optical quantification of forces at play during stem cell differentiation
NASA Astrophysics Data System (ADS)
Ritter, Christine M.; Brickman, Joshua M.; Oddershede, Lene B.
2016-03-01
A cell is in constant interaction with its environment; it responds to external mechanical, chemical and biological signals. The response to these signals can take various forms, for instance intra-cellular mechanical re-arrangements, cell-cell interactions, or cellular reinforcements. Optical methods are quite attractive for investigating the mechanics inside living cells as, e.g., optical traps are amongst the only nanotools that can reach inside a living cell to manipulate it and measure forces. In recent years it has become increasingly evident that not only biochemical and biomolecular cues but also mechanical ones play an important role in stem cell differentiation. The first evidence for the importance of mechanical cues emerged from studies showing that substrate stiffness had an impact on stem cell differentiation. Recently, techniques such as optical tweezers and stretchers have been applied to stem cells, producing new insights into the role of mechanics in regulating renewal and differentiation. Here, we describe how optical tweezers and optical stretchers can be applied as tools to investigate stem cell mechanics, and some of the recent results to come out of this work.
Machine learning of frustrated classical spin models. I. Principal component analysis
NASA Astrophysics Data System (ADS)
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
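A hedged sketch of the general workflow (not the authors' exact setup): raw spin configurations sampled at different temperatures are fed to principal component analysis, and the magnitude of the projection onto the leading components separates the ordered from the disordered regime. The "Monte Carlo" samples below are a crude synthetic stand-in rather than a real simulation of the frustrated XY model:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_sites = 16 * 16

    def xy_configuration(temperature):
        """Crude stand-in for a Monte Carlo sample of XY spins: at low temperature
        the angles cluster around a common direction, at high temperature they are
        nearly random."""
        base = rng.uniform(0, 2 * np.pi)
        spread = min(temperature, 2 * np.pi)
        theta = base + rng.normal(scale=spread, size=n_sites)
        return np.concatenate([np.cos(theta), np.sin(theta)])   # (cos, sin) features

    temps = np.repeat(np.linspace(0.1, 3.0, 30), 20)             # 20 samples per T
    data = np.array([xy_configuration(t) for t in temps])

    pca = PCA(n_components=2).fit(data)
    proj = pca.transform(data)
    # the projection norm onto the leading components is large in the ordered
    # (low-T) regime and drops in the disordered one
    for t in (0.1, 1.0, 3.0):
        mask = np.isclose(temps, t)
        print(t, np.mean(np.linalg.norm(proj[mask], axis=1)))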
Atmospheric parameterization schemes for satellite cloud property retrieval during FIRE IFO 2
NASA Technical Reports Server (NTRS)
Titlow, James; Baum, Bryan A.
1993-01-01
Satellite cloud retrieval algorithms generally require atmospheric temperature and humidity profiles to determine such cloud properties as pressure and height. For instance, the CO2 slicing technique called the ratio method requires the calculation of theoretical upwelling radiances both at the surface and a prescribed number (40) of atmospheric levels. This technique has been applied to data from, for example, the High Resolution Infrared Radiometer Sounder (HIRS/2, henceforth HIRS) flown aboard the NOAA series of polar orbiting satellites and the High Resolution Interferometer Sounder (HIS). In this particular study, four NOAA-11 HIRS channels in the 15-micron region are used. The ratio method may be applied to various channel combinations to estimate cloud top heights using channels in the 15-μm region. Presently, the multispectral, multiresolution (MSMR) scheme uses 4 HIRS channel combination estimates for mid- to high-level cloud pressure retrieval and Advanced Very High Resolution Radiometer (AVHRR) data for low-level (pressure greater than 700 mb) cloud level retrieval. In order to determine theoretical upwelling radiances, atmospheric temperature and water vapor profiles must be provided as well as profiles of other radiatively important gas absorber constituents such as CO2, O3, and CH4. The assumed temperature and humidity profiles have a large effect on transmittance and radiance profiles, which in turn are used with HIRS data to calculate cloud pressure, and thus cloud height and temperature. For large spatial scale satellite data analysis, atmospheric parameterization schemes for cloud retrieval algorithms are usually based on a gridded product such as that provided by the European Center for Medium Range Weather Forecasting (ECMWF) or the National Meteorological Center (NMC). These global, gridded products prescribe temperature and humidity profiles for a limited number of pressure levels (up to 14) in a vertical atmospheric column. The FIRE IFO 2 experiment provides an opportunity to investigate current atmospheric profile parameterization schemes, compare satellite cloud height results using both gridded products (ECMWF) and high vertical resolution sonde data from the National Weather Service (NWS) and Cross Chain Loran Atmospheric Sounding System (CLASS), and suggest modifications in atmospheric parameterization schemes based on these results.
Magnetocaloric cycle with six stages: Possible application of graphene at low temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reis, M. S., E-mail: marior@if.uff.br
2015-09-07
The present work proposes a thermodynamic hexacycle based on the magnetocaloric oscillations of graphene, which has either a positive or negative adiabatic temperature change depending on the final value of the magnetic field change. For instance, for graphenes at 25 K, an applied field of 2.06 T/1.87 T promotes a temperature change of ca. −25 K/+3 K. The hexacycle is based on the Brayton cycle and instead of the usual four steps, it has six stages, taking advantage of the extra cooling provided by the inverse adiabatic temperature change. This proposal opens doors for magnetic cooling applications at low temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubko, I. Yu., E-mail: zoubko@list.ru; Kochurov, V. I.
2015-10-27
Aiming at the control of crystal temperature, a computational-statistical approach to studying the thermo-mechanical properties of finite-sized crystals is presented. The approach is based on the combination of high-performance computational techniques and statistical analysis of the crystal response to external thermo-mechanical actions for specimens with a statistically small number of atoms (for instance, nanoparticles). The heat motion of atoms is imitated in the statics approach by including independent degrees of freedom for the atoms associated with their oscillations. We find that under heating the graphene material response is nonsymmetric.
Genetic programming based ensemble system for microarray data classification.
Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To
2015-01-01
Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a new genetic programming (GP)-based ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved. PMID:25810748
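A minimal sketch of the combination operators only (the GP evolution of ensemble structures and the forward search are not reproduced; the data and feature subsets are synthetic): class-probability outputs of several decision trees are merged with Min, Max, and Average.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # base classifiers trained on different feature subsets to add diversity
    rng = np.random.default_rng(0)
    subsets = [rng.choice(X.shape[1], size=10, replace=False) for _ in range(5)]
    trees = [DecisionTreeClassifier(random_state=i).fit(X_tr[:, s], y_tr)
             for i, s in enumerate(subsets)]
    probas = np.stack([t.predict_proba(X_te[:, s])
                       for t, s in zip(trees, subsets)])   # (n_trees, n_samples, n_classes)

    combiners = {"min": probas.min(axis=0),
                 "max": probas.max(axis=0),
                 "average": probas.mean(axis=0)}
    for name, p in combiners.items():
        acc = np.mean(p.argmax(axis=1) == y_te)
        print(name, round(acc, 3))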
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared. In addition, the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed with the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not give the most significant noise reduction performance. Conversely, an image fusion method applying the SRAD-original conditions preserved the key information in the original image, and the speckle noise was removed. Based on these characteristics, the SRAD-original input conditions had the best denoising performance for the ultrasound images. From this study, the proposed denoising technique was confirmed to have a high potential for clinical application.
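A hedged sketch of a generic wavelet-domain fusion step (PyWavelets is assumed as the toolkit; the SRAD filter itself is not implemented, so a median- and a Gaussian-filtered version of a synthetic speckled image stand in for the two fusion inputs): approximation coefficients are averaged and the larger-magnitude detail coefficients are kept.

    import numpy as np
    import pywt
    from scipy import ndimage

    def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
        """Fuse two filtered versions of an image in the wavelet domain: average the
        approximation coefficients, keep the detail coefficient of larger magnitude."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [0.5 * (ca[0] + cb[0])]                      # approximation: average
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
            fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
        return pywt.waverec2(fused, wavelet)

    # toy usage: a speckled image fused from median- and Gaussian-filtered inputs
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64))
    clean[16:48, 16:48] = 1.0
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
    fused = dwt_fuse(ndimage.median_filter(speckled, size=3),
                     ndimage.gaussian_filter(speckled, sigma=1.0))
    print(fused.shape)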
Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.
Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan
2016-01-01
Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce and leads to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances. PMID:27627768
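A minimal sketch of the core selection rule (the model, threshold, and data are illustrative, not the paper's): low-confidence instances go to human annotators, high-confidence ones are pseudo-labeled by the current classifier.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    labeled = np.arange(30)                      # small initial labeled pool
    unlabeled = np.arange(30, 500)

    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    conf = clf.predict_proba(X[unlabeled]).max(axis=1)

    threshold = 0.90
    to_human = unlabeled[conf < threshold]       # low confidence -> ask annotators
    auto = unlabeled[conf >= threshold]          # high confidence -> self-training
    pseudo_labels = clf.predict(X[auto])

    print(len(to_human), "sent to annotators,", len(auto), "auto-labeled")
    # in a full loop, X[auto] with pseudo_labels and the newly annotated X[to_human]
    # would be added to the labeled pool and the classifier retrained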
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and from application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
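A hedged sketch of ordinary kriging at a single query point with a compactly supported taper multiplying an exponential covariance (the parameters and the Wendland-type taper are illustrative; the dense solve below would be replaced by a sparse iterative solver at scale, which is where tapering pays off):

    import numpy as np

    def exp_cov(d, sill=1.0, rng_=10.0):
        return sill * np.exp(-d / rng_)

    def wendland_taper(d, taper_range=15.0):
        """Compactly supported taper: zero beyond taper_range, so the tapered
        covariance matrix becomes sparse."""
        t = np.clip(1.0 - d / taper_range, 0.0, None)
        return t ** 4 * (1.0 + 4.0 * d / taper_range)

    def ordinary_kriging(coords, values, query):
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        d0 = np.linalg.norm(coords - query, axis=-1)
        C = exp_cov(d) * wendland_taper(d)          # tapered covariance matrix
        c0 = exp_cov(d0) * wendland_taper(d0)
        n = len(values)
        # ordinary-kriging system with the unbiasedness (Lagrange) constraint
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = C
        A[n, :n] = 1.0
        A[:n, n] = 1.0
        b = np.append(c0, 1.0)
        w = np.linalg.solve(A, b)[:n]
        return float(w @ values)

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(200, 2))
    z = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.normal(size=200)
    print(ordinary_kriging(pts, z, np.array([50.0, 50.0])))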
Phylo: A Citizen Science Approach for Improving Multiple Sequence Alignment
Kam, Alfred; Kwak, Daniel; Leung, Clarence; Wu, Chu; Zarour, Eleyine; Sarmenta, Luis; Blanchette, Mathieu; Waldispühl, Jérôme
2012-01-01
Background Comparative genomics, or the study of the relationships of genome structure and function across different species, offers a powerful tool for studying evolution, annotating genomes, and understanding the causes of various genetic disorders. However, aligning multiple sequences of DNA, an essential intermediate step for most types of analyses, is a difficult computational task. In parallel, citizen science, an approach that takes advantage of the fact that the human brain is exquisitely tuned to solving specific types of problems, is becoming increasingly popular. There, instances of hard computational problems are dispatched to a crowd of non-expert human game players and solutions are sent back to a central server. Methodology/Principal Findings We introduce Phylo, a human-based computing framework applying “crowd sourcing” techniques to solve the Multiple Sequence Alignment (MSA) problem. The key idea of Phylo is to convert the MSA problem into a casual game that can be played by ordinary web users with minimal prior knowledge of the biological context. We applied this strategy to improve the alignment of the promoters of disease-related genes from up to 44 vertebrate species. Since the launch in November 2010, we received more than 350,000 solutions submitted from more than 12,000 registered users. Our results show that solutions submitted contributed to improving the accuracy of up to 70% of the alignment blocks considered. Conclusions/Significance We demonstrate that, combined with classical algorithms, crowd computing techniques can be successfully used to help improve the accuracy of MSA. More importantly, we show that an NP-hard computational problem can be embedded in a casual game that can be easily played by people without significant scientific training. This suggests that citizen science approaches can be used to exploit the billions of “human-brain peta-flops” of computation that are spent every day playing games. Phylo is available at: http://phylo.cs.mcgill.ca. PMID:22412834
A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data.
Manzi, Alessandro; Dario, Paolo; Cavallo, Filippo
2017-05-11
Human activity recognition is an important area in computer vision, with its wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system makes use of machine learning techniques to classify the actions that are described with a set of a few basic postures. The training phase creates several models related to the number of clustered postures by means of a multiclass Support Vector Machine (SVM), trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; secondly, it aims to assess the minimum number of frames needed for an adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four elements. The proposed approach reaches excellent performance using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. The results are promising for testing in a real context.
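A hedged sketch of the posture-histogram idea (k-means from scikit-learn stands in for X-means, which chooses the number of clusters itself, and the "skeleton frames" below are synthetic): frames are clustered into a few basic postures, each activity instance is described by its posture histogram, and a multiclass SVM classifies the histograms.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    dim = 45                                                # e.g., 15 joints x 3 coords
    label_centers = {l: rng.normal(size=(3, dim)) + l for l in (0, 1, 2)}

    def synthetic_activity(label, n_frames=100):
        """Stand-in for the skeleton frames of one activity instance."""
        centers = label_centers[label]
        idx = rng.integers(0, 3, size=n_frames)
        return centers[idx] + 0.1 * rng.normal(size=(n_frames, dim))

    instances = [(synthetic_activity(l), l) for l in (0, 1, 2) for _ in range(20)]
    frames = np.vstack([f for f, _ in instances])

    k = 4                                                   # X-means would choose k itself
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)

    def posture_histogram(frames_):
        labels = km.predict(frames_)
        return np.bincount(labels, minlength=k) / len(labels)

    X = np.array([posture_histogram(f) for f, _ in instances])
    y = np.array([l for _, l in instances])
    clf = SVC(kernel="rbf").fit(X[::2], y[::2])             # even-indexed instances train
    print("accuracy:", clf.score(X[1::2], y[1::2]))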
Machine grading of lumber : practical concerns for lumber producers
William L. Galligan; Kent A. McDonald
2000-01-01
Machine lumber grading has been applied in commercial operations in North America since 1963, and research has shown that machine grading can improve the efficient use of wood. However, industry has been reluctant to apply research findings without clear evidence that the change from visual to machine grading will be a profitable one. For instance, mill managers need...
48 CFR 1652.232-72 - Non-commingling of FEHBP funds.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (b) In certain instances the physical separation of FEHBP funds may not be practical or desirable. In... waiver shall be requested in advance and the Carrier shall demonstrate that accounting techniques have...
48 CFR 1652.232-72 - Non-commingling of FEHBP funds.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (b) In certain instances the physical separation of FEHBP funds may not be practical or desirable. In... waiver shall be requested in advance and the Carrier shall demonstrate that accounting techniques have...
48 CFR 1652.232-72 - Non-commingling of FEHBP funds.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (b) In certain instances the physical separation of FEHBP funds may not be practical or desirable. In... waiver shall be requested in advance and the Carrier shall demonstrate that accounting techniques have...
48 CFR 1652.232-72 - Non-commingling of FEHBP funds.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (b) In certain instances the physical separation of FEHBP funds may not be practical or desirable. In... waiver shall be requested in advance and the Carrier shall demonstrate that accounting techniques have...
48 CFR 1652.232-72 - Non-commingling of FEHBP funds.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (b) In certain instances the physical separation of FEHBP funds may not be practical or desirable. In... waiver shall be requested in advance and the Carrier shall demonstrate that accounting techniques have...
Wax encapsulation of water-soluble compounds for application in foods.
Mellema, M; Van Benthum, W A J; Boer, B; Von Harras, J; Visser, A
2006-11-01
Water-soluble ingredients have been successfully encapsulated in wax using two preparation techniques. The first technique ('solid preparation') leads to relatively large wax particles. The second technique ('liquid preparation') leads to relatively small wax particles immersed in vegetable oil. On the first technique: stable encapsulation of water-soluble colourants (dissolved at low concentration in water) has been achieved making use of beeswax and PGPR. The leakage from the capsules, for instance of size 2 mm, is about 30% after 16 weeks of storage in water at room temperature. To form such capsules a minimum wax mass of 40% relative to the total mass is needed. High amounts of salt or acid in the inner water phase cause more leakage, probably because of the osmotic pressure difference. Osmotic matching of the inner and outer phase can lead to a dramatic reduction in leakage. Fat capsules are less suitable for incorporating water-soluble colourants. The reason for this could be a difference in crystal structure (fat is less ductile and more brittle). On the second technique: stable encapsulation of water-soluble colourants (encapsulated in solid wax particles) has been achieved making use of carnauba wax. The leakage from the capsules, for instance of size 250 μm, is about 40% after 1 week of storage in water at room temperature.
Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery
NASA Astrophysics Data System (ADS)
Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.
2017-05-01
In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced HI astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.
Monsoon Forecasting based on Imbalanced Classification Techniques
NASA Astrophysics Data System (ADS)
Ribera, Pedro; Troncoso, Alicia; Asencio-Cortes, Gualberto; Vega, Inmaculada; Gallego, David
2017-04-01
Monsoonal systems are quasiperiodic processes of the climatic system that control seasonal precipitation over different regions of the world. The Western North Pacific Summer Monsoon (WNPSM) is one of those monsoons and it is known to have a great impact both over the global climate and over the total precipitation of very densely populated areas. The interannual variability of the WNPSM over the last 50-60 years has been related to different climatic indices such as El Niño, El Niño Modoki, the Indian Ocean Dipole or the Pacific Decadal Oscillation. Recently, a new and longer series characterizing the monthly evolution of the WNPSM, the WNP Directional Index (WNPDI), has been developed, extending its previous length from about 50 years to more than 100 years (1900-2007). Imbalanced classification techniques have been applied to the WNPDI in order to check the capability of traditional climate indices to capture and forecast the evolution of the WNPSM. The problem of forecasting has been transformed into a binary classification problem, in which the positive class represents the occurrence of an extreme monsoon event. Given that the number of extreme monsoons is much lower than the number of non-extreme monsoons, the resultant classification problem is highly imbalanced. The complete dataset is composed of 1296 instances, where only 71 (5.47%) samples correspond to extreme monsoons. Twenty predictor variables based on the cited climatic indices have been proposed. Models based on trees, black-box models such as neural networks, support vector machines and nearest neighbors, and finally ensemble-based techniques such as random forests have been used in order to forecast the occurrence of extreme monsoons. It can be concluded that the methodology proposed here reports promising results according to the quality parameters evaluated and predicts extreme monsoons for a temporal horizon of a month with high accuracy. From a climatological point of view, models based on trees show that the index of El Niño Modoki in the months previous to an extreme monsoon acts as its best predictor. In most cases, the value of the Indian Ocean Dipole index acts as a second-order classifier. The El Niño index (more frequently) and the Pacific Decadal Oscillation index (in one case only) also modulate the intensity of the WNPSM in some cases.
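A hedged sketch of the imbalanced-classification setup (synthetic predictors; the class weighting and the cross-validated precision/recall report are illustrative choices, not necessarily the authors'): with roughly 5% positive instances, a class-weighted random forest is evaluated on recall and precision for the "extreme" class rather than on raw accuracy.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    n, n_pred = 1296, 20                 # 1296 monthly instances, 20 climate predictors
    X = rng.normal(size=(n, n_pred))
    # ~5% positive class (extreme monsoon), loosely driven by the first predictor
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 1.9).astype(int)

    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                 random_state=0)
    pred = cross_val_predict(clf, X, y, cv=5)
    print(classification_report(y, pred, target_names=["normal", "extreme"]))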
Artificial Intelligence Techniques: Applications for Courseware Development.
ERIC Educational Resources Information Center
Dear, Brian L.
1986-01-01
Introduces some general concepts and techniques of artificial intelligence (natural language interfaces, expert systems, knowledge bases and knowledge representation, heuristics, user-interface metaphors, and object-based environments) and investigates ways these techniques might be applied to analysis, design, development, implementation, and…
Inverse problems in 1D hemodynamics on systemic networks: a sequential approach.
Lombardi, D
2014-02-01
In this work, a sequential approach based on the unscented Kalman filter is applied to solve inverse problems in 1D hemodynamics on a systemic network. For instance, the arterial stiffness is estimated by exploiting cross-sectional area and mean speed observations at several locations of the arteries. The results are compared with those obtained by estimating the pulse wave velocity and using the Moens-Korteweg formula. In the last section, a perspective concerning the identification of the terminal model parameters and peripheral circulation (modeled by a Windkessel circuit) is presented. Copyright © 2013 John Wiley & Sons, Ltd.
Medical Application of the SARAF-Proton/Deuteron 40 MeV Superconducting Linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halfon, Shlomi
2007-11-26
The Soreq Applied Research Accelerator Facility (SARAF) is based on a superconducting linear accelerator currently being built at the Soreq research center (Israel). The SARAF is planned to generate a 2 mA, 4 MeV proton beam during its first year of operation and up to a 40 MeV proton or deuteron beam in 2012. The high intensity beam, together with the linac's ability to adjust the ion energy, provides opportunities for medical research, such as Boron Neutron Capture Therapy (BNCT) and the production of medical radioisotopes, for instance ¹⁰³Pd for prostate brachytherapy.
A quantum annealing approach for fault detection and diagnosis of graph-based systems
NASA Astrophysics Data System (ADS)
Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.
2015-02-01
Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.
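A toy illustration of the Problem → QUBO step (the fault names, observation structure, and penalty weight are all hypothetical, and a brute-force enumeration stands in for the quantum annealer): each observation is assumed to be explainable by either of two candidate faults, a quadratic penalty charges for unexplained observations, and a linear term favors minimal fault sets.

    import itertools
    import numpy as np

    # three candidate faults; each observation can be explained by two of them
    faults = ["valve_stuck", "sensor_bias", "pump_degraded"]     # hypothetical names
    observations = [(0, 1), (1, 2)]                              # (fault_a, fault_b) pairs
    lam = 10.0                                                   # penalty for an unexplained obs

    n = len(faults)
    Q = np.zeros((n, n))
    np.fill_diagonal(Q, 1.0)                 # cardinality term: prefer few faults
    offset = 0.0
    for a, b in observations:
        # unexplained-observation penalty: lam * (1 - x_a - x_b + x_a*x_b)
        offset += lam
        Q[a, a] -= lam
        Q[b, b] -= lam
        Q[a, b] += lam

    def energy(x):
        x = np.array(x)
        return float(x @ Q @ x) + offset     # x_i^2 == x_i for binary variables

    best = min(itertools.product([0, 1], repeat=n), key=energy)
    print([f for f, bit in zip(faults, best) if bit], energy(best))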
Optical biopsy of lymph node morphology using optical coherence tomography.
Luo, Wei; Nguyen, Freddy T; Zysk, Adam M; Ralston, Tyler S; Brockenbrough, John; Marks, Daniel L; Oldenburg, Amy L; Boppart, Stephen A
2005-10-01
Optical diagnostic imaging techniques are increasingly being used in the clinical environment, allowing for improved screening and diagnosis while minimizing the number of invasive procedures. Diffuse optical tomography, for example, is capable of whole-breast imaging and is being developed as an alternative to traditional X-ray mammography. While this may eventually be a very effective screening method, other optical techniques are better suited for imaging on the cellular and molecular scale. Optical Coherence Tomography (OCT), for instance, is capable of high-resolution cross-sectional imaging of tissue morphology. In a manner analogous to ultrasound imaging except using optics, pulses of near-infrared light are sent into the tissue while coherence-gated reflections are measured interferometrically to form a cross-sectional image of tissue. In this paper we apply OCT techniques for the high-resolution three-dimensional visualization of lymph node morphology. We present the first reported OCT images showing detailed morphological structure and corresponding histological features of lymph nodes from a carcinogen-induced rat mammary tumor model, as well as from a human lymph node containing late stage metastatic disease. The results illustrate the potential for OCT to visualize detailed lymph node structures on the scale of micrometastases and the potential for the detection of metastatic nodal disease intraoperatively.
Assessment of the integrity of concrete bridge structures by acoustic emission technique
NASA Astrophysics Data System (ADS)
Yoon, Dong-Jin; Park, Philip; Jung, Juong-Chae; Lee, Seung-Seok
2002-06-01
This study was aimed at developing a new method for assessing the integrity of concrete structures. In particular, the acoustic emission (AE) technique was used in both laboratory experiments and field applications. From the previous laboratory study, we confirmed that AE analysis provided a promising approach for estimating the level of damage and distress in concrete structures. The Felicity ratio, one of the key parameters for assessing damage, exhibits a favorable correlation with the overall damage level. The total number of AE events under stepwise cyclic loading also showed good agreement with the damage level. In this study, the newly suggested technique was applied to several concrete bridges in Korea in order to verify its applicability in the field. The AE response was analyzed to obtain key parameters such as the total number and rate of AE events, AE parameter analysis for each event, and the characteristic features of the waveform, as well as Felicity ratio analysis. A stepwise loading-unloading procedure for AE generation was introduced in the field test by using vehicles of different weights. Depending on the condition of the bridge, for instance whether new or old, the AE event rate and AE generation behavior showed many different aspects. The results showed that the suggested analysis method would be a promising approach for assessing the integrity of concrete structures.
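The Felicity ratio itself is a simple quantity: the load at which acoustic emission resumes on reloading, divided by the previous maximum load, with values well below 1.0 indicating accumulated damage. A small sketch with hypothetical load steps:

    def felicity_ratio(previous_max_load, ae_onset_load):
        """Felicity ratio = load at AE onset during reloading / previous maximum load.
        Values clearly below 1.0 indicate accumulated damage (Felicity effect)."""
        return ae_onset_load / previous_max_load

    # hypothetical stepwise loading of a bridge girder with vehicles of known weight
    cycles = [  # (previous max load in kN, load at which AE resumed in kN)
        (120.0, 118.0),
        (180.0, 160.0),
        (240.0, 190.0),
    ]
    for prev_max, onset in cycles:
        ratio = felicity_ratio(prev_max, onset)
        print(f"previous max {prev_max:.0f} kN -> Felicity ratio {ratio:.2f}")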
Spatial information semantic query based on SPARQL
NASA Astrophysics Data System (ADS)
Xiao, Zhifeng; Huang, Lei; Zhai, Xiaofang
2009-10-01
How can the efficiency of spatial information inquiries be enhanced in today's fast-growing information age? We are rich in geospatial data but poor in up-to-date geospatial information and knowledge that are ready to be accessed by public users. This paper adopts an approach for querying spatial semantics by building a Web Ontology Language (OWL) ontology and introducing the SPARQL Protocol and RDF Query Language (SPARQL) to search spatial semantic relations. It is important to establish spatial semantics that support effective spatial reasoning for performing semantic queries. Compared to earlier keyword-based and information retrieval techniques that rely on syntax, we use semantic approaches in our spatial query system. Semantic approaches need to be supported by an ontology, so we use OWL to describe spatial information extracted from the large-scale map of Wuhan. Spatial information expressed by an ontology with formal semantics is available to machines for processing and to people for understanding. The approach is illustrated by a case study using SPARQL to query geo-spatial ontology instances of Wuhan. The paper shows that making use of SPARQL to search OWL ontology instances can ensure the result's accuracy and applicability. The results also indicate that constructing a geo-spatial semantic query system has positive effects on spatial query and retrieval.
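A hedged sketch of what such a query looks like in practice, using the rdflib Python library as an assumed toolkit (the paper does not name one); the ontology file, namespace, class names and property names below are hypothetical:

    from rdflib import Graph

    g = Graph()
    g.parse("wuhan_spatial.owl", format="xml")   # hypothetical OWL ontology of Wuhan features

    # hypothetical namespace and properties: find rivers that cross a given district
    query = """
    PREFIX geo: <http://example.org/wuhan/spatial#>
    SELECT ?river ?district
    WHERE {
        ?river    a geo:River .
        ?district a geo:District .
        ?river    geo:crosses ?district .
        FILTER (?district = geo:Wuchang)
    }
    """
    for row in g.query(query):
        print(row.river, row.district)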
Reduced-order model for dynamic optimization of pressure swing adsorption processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2007-01-01
Over the past decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas and liquid separation techniques, especially for high purity hydrogen purification from refinery gases. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. The study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. Initially, a representative ensemble of solutions of the dynamic PDE system is constructed by solving a higher-order discretization of the model using the method of lines, a two-stage approach that discretizes the PDEs in space and then integrates the resulting DAEs over time. Next, the ROM method applies the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes) which are used as basis functions within a Galerkin's projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDE system. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization before and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the order of the number of states has been achieved. The gas-phase mole fraction, solid-state loading and temperature profiles from the low-order ROM and from the high-order simulations have been compared. Moreover, the profiles for a different set of inputs and parameter values fed to the same ROM were compared with the accurate profiles from the high-order simulations. Current results indicate the proposed ROM methodology as a promising surrogate modeling technique for cost-effective optimization purposes. Moreover, deviations from the ROM for different sets of inputs and parameters suggest that a recalibration of the model is required for the optimization studies. Results for these will also be presented with the aforementioned results.
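A minimal sketch of the POD step alone (the Galerkin projection that yields the reduced DAE system is not shown, and a travelling-front toy profile stands in for the PSA bed snapshots): snapshots of the high-order solution are collected, an SVD extracts the empirical eigenfunctions, and the handful of modes capturing nearly all of the energy defines the low-order state.

    import numpy as np

    # snapshot matrix: each column is a spatial profile at one time instant
    # (a travelling tanh front stands in for the concentration fronts in a PSA bed)
    x = np.linspace(0.0, 1.0, 200)
    times = np.linspace(0.0, 1.0, 80)
    snapshots = np.array([np.tanh(20.0 * (x - 0.2 - 0.6 * t)) for t in times]).T

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1      # modes capturing 99.99% of the energy
    print("retained POD modes:", r)

    basis = U[:, :r]                                   # empirical eigenfunctions
    coeffs = basis.T @ snapshots                       # reduced (r-dimensional) states
    reconstruction = basis @ coeffs
    print("max reconstruction error:", np.abs(reconstruction - snapshots).max())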
Systems engineering interfaces: A model based approach
NASA Astrophysics Data System (ADS)
Fosse, E.; Delp, C. L.
The engineering of interfaces is a critical function of the discipline of Systems Engineering. Included in interface engineering are instances of interaction. Interfaces provide the specifications of the relevant properties of a system or component that can be connected to other systems or components while instances of interaction are identified in order to specify the actual integration to other systems or components. Current Systems Engineering practices rely on a variety of documents and diagrams to describe interface specifications and instances of interaction. The SysML[1] specification provides a precise model based representation for interfaces and interface instance integration. This paper will describe interface engineering as implemented by the Operations Revitalization Task using SysML, starting with a generic case and culminating with a focus on a Flight System to Ground Interaction. The reusability of the interface engineering approach presented as well as its extensibility to more complex interfaces and interactions will be shown. Model-derived tables will support the case studies shown and are examples of model-based documentation products.
Arterial Mechanical Motion Estimation Based on a Semi-Rigid Body Deformation Approach
Guzman, Pablo; Hamarneh, Ghassan; Ros, Rafael; Ros, Eduardo
2014-01-01
Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique to diagnose various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak, or estimating the Augmentation Index (AIx), the arterial pressure or the arterial stiffness. One of the advantages of using US imaging is the non-invasive nature of the technique, unlike invasive techniques such as intravascular ultrasound (IVUS) or angiography, plus the relatively low cost of the US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach combining cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare different techniques (for instance optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to be used and the impact of the accuracy of these different stages in the proposed method. To this end, an exhaustive assessment has been conducted in order to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques. PMID:24871987
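A hedged sketch of the cross-correlation half of such a hybrid motion estimator (the optical-flow and soft-body stages are not shown, and the scan lines are synthetic): the axial shift of a wall echo between consecutive frames is taken as the lag that maximizes the normalized cross-correlation.

    import numpy as np

    def wall_shift(line_a, line_b, max_lag=20):
        """Integer-sample shift of line_b relative to line_a that maximizes the
        normalized cross-correlation (positive = wall moved to larger depth)."""
        a = (line_a - line_a.mean()) / line_a.std()
        b = (line_b - line_b.mean()) / line_b.std()
        lags = range(-max_lag, max_lag + 1)
        scores = [np.sum(a[max(0, -l):len(a) - max(0, l)] *
                         b[max(0, l):len(b) - max(0, -l)]) for l in lags]
        return lags[int(np.argmax(scores))]

    # synthetic scan lines: a Gaussian wall echo that moves 3 samples between frames
    depth = np.arange(400)
    echo = lambda c: np.exp(-0.5 * ((depth - c) / 4.0) ** 2)
    rng = np.random.default_rng(0)
    frame1 = echo(200) + 0.05 * rng.normal(size=400)
    frame2 = echo(203) + 0.05 * rng.normal(size=400)
    print("estimated shift:", wall_shift(frame1, frame2))   # expected ~ +3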
SIFT Meets CNN: A Decade Survey of Instance Retrieval.
Zheng, Liang; Yang, Yi; Tian, Qi
2018-05-01
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
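A minimal sketch of the family-building step described above, assuming the database is available as a table with one row per model instance and one column per parameter (column names here are hypothetical, not those of HCO-db): instances are grouped so that all members of a family share every parameter value except the one that defines the family.

```python
import pandas as pd

def families(df, params, family_param):
    """Group model instances into families that differ only in `family_param`.

    `df` holds one row per model instance with one column per conductance
    parameter; all other columns in `params` are held fixed within a family.
    """
    fixed = [p for p in params if p != family_param]
    return [g.sort_values(family_param) for _, g in df.groupby(fixed)]

# Toy usage: two leak conductances x three h-current conductances.
df = pd.DataFrame({"g_leak": [1, 1, 1, 2, 2, 2],
                   "g_h":    [3, 4, 5, 3, 4, 5],
                   "bursting": [True, True, False, True, False, False]})
for fam in families(df, ["g_leak", "g_h"], "g_h"):
    print(fam["bursting"].tolist())  # robustness of bursting along the g_h axis
```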
Evaluation of feature-based 3-d registration of probabilistic volumetric scenes
NASA Astrophysics Data System (ADS)
Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.
2014-12-01
Automatic estimation of world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using the Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high-quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration is significantly more accurate when using the PVM.
The effect of force on laser fiber burnback during lithotripsy
NASA Astrophysics Data System (ADS)
Aryaei, Ashkan; Chia, Ray; Peng, Steven
2018-02-01
Optical fibers for lithotripsy are designed to deliver the maximum energy precisely to the treatment site without a decrease in performance and without increasing the risks to patients and users. One of the obstacles to constant energy delivery is burnback of the optical fiber tip. So far, researchers have identified mechanical, thermal, and optical factors as mechanisms in the burnback phenomenon. Among the mechanical factors, the force applied by urologists against a stone is expected to play a dominant role in burnback. In this study, we introduce a novel technique to accurately measure stone ablation depth and volume under varying force. Our results show that burnback length on the optical fibers and the stone ablation depth and volume vary depending on the optical fiber core size. For instance, the slope of the burnback as a function of the applied force for 273 μm fibers was more than two times higher than for the 550 μm fibers. The slope of the total volume of stone ablated as a function of force for 550 μm fibers was almost twice as much as for the 273 μm fibers. The data suggest urologists can maximize the stone ablation rate and minimize fiber tip burnback by controlling the applied force on the optical fiber during a lithotripsy procedure.
NASA Astrophysics Data System (ADS)
Ajadi, Olaniyi A.
Radar remote sensing can play a critical role in operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities, and its high performance in mapping and monitoring of change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling that was provided by spaceborne radar-based satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows for SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract signatures from time series of SAR images, and provide the necessary flexibility for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inferencing; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every single SAR acquisition. The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to map wildfire progression. I applied Radiometric Terrain Correction (RTC) to the data to increase the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only little modification. While the core of the change detection technique remained unchanged, I made modifications to the pre-processing step to enable change detection from scenes with continuously varying backgrounds. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of environmental conditions during image acquisition. For instance, I showed that LR processing reduces the sensitivity of change detection performance to variations in surface winds, which is a known limitation in oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters.
In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information has been extracted from SAR imagery. A summary of the developed change detection techniques and suggestions for future work are presented in Chapter 6.
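For orientation, a classical amplitude-only baseline against which such multiscale, Bayesian methods are usually compared is the log-ratio detector sketched below; this is only a simplified illustration and not the dissertation's algorithm.

```python
import numpy as np

def log_ratio_change(amp_t1, amp_t2, k=3.0):
    """Classical amplitude-only change detector: log-ratio of two SAR
    amplitude images plus a global threshold at `k` standard deviations.
    A baseline illustration, not the multiscale, Bayesian approach developed
    in the dissertation.
    """
    lr = np.log((amp_t2 + 1e-6) / (amp_t1 + 1e-6))
    return np.abs(lr - lr.mean()) > k * lr.std()

# Toy usage: a bright patch appearing between two acquisitions.
rng = np.random.default_rng(1)
before = rng.gamma(4.0, 1.0, size=(64, 64))   # speckled background
after = before.copy()
after[20:30, 20:30] *= 5.0                    # simulated change
print(log_ratio_change(before, after).sum())  # -> 100 changed pixels
```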
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of the different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state and future damage accumulation. Remaining life is computed by subtracting the time instance when the prediction is made from the time instance when the extrapolated damage reaches the failure threshold.
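A minimal sketch of this decomposition, assuming simple linear regressors for both maps and synthetic data (the invention allows any regression algorithm; all names, coefficients, and the failure threshold below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical off-line training data: features -> damage, conditions -> damage rate.
rng = np.random.default_rng(2)
features = rng.normal(size=(200, 3))
damage = 0.5 * features[:, 0] + 0.1 * rng.normal(size=200) + 1.0
conditions = rng.normal(size=(200, 2))
damage_rate = 0.02 + 0.01 * conditions[:, 1] + 0.001 * rng.normal(size=200)

feat_to_damage = LinearRegression().fit(features, damage)
cond_to_rate = LinearRegression().fit(conditions, damage_rate)

# On-line use: current damage from run-time features, rate from current conditions,
# then extrapolate the damage forward to the failure threshold.
current_damage = feat_to_damage.predict(rng.normal(size=(1, 3)))[0]
rate = max(cond_to_rate.predict(rng.normal(size=(1, 2)))[0], 1e-6)
failure_threshold = 2.0
rul = max(failure_threshold - current_damage, 0.0) / rate  # time units until threshold
print(f"estimated RUL: {rul:.1f} time units")
```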
ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery
Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming
2018-01-01
The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, the DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of the ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively. PMID:29510547
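For reference, the classic noisy-or diverse density of Maron and Lozano-Pérez, which the ISBDD model builds on, can be evaluated at a candidate concept point as sketched below; this shows only the DD quantity itself, not the paper's instance-space training strategy.

```python
import numpy as np

def diverse_density(t, positive_bags, negative_bags, scale=1.0):
    """Classic noisy-or diverse density at a candidate concept point `t`.

    Each bag is a list of instance feature vectors. Positive bags should
    contain at least one instance near the concept; negative bags none.
    """
    def p_not_concept(bag):
        d2 = np.sum((np.asarray(bag) - t) ** 2, axis=1)
        return 1.0 - np.exp(-d2 / scale ** 2)   # prob. each instance is NOT the concept

    dd = 1.0
    for bag in positive_bags:
        dd *= 1.0 - np.prod(p_not_concept(bag))  # at least one instance matches
    for bag in negative_bags:
        dd *= np.prod(p_not_concept(bag))         # no instance matches
    return dd

# Toy usage: the concept lies near (0, 0); a point there scores higher.
pos = [[(0.1, 0.0), (5.0, 5.0)], [(0.0, -0.1), (4.0, 4.0)]]
neg = [[(5.0, 5.0), (6.0, 6.0)]]
print(diverse_density(np.zeros(2), pos, neg) > diverse_density(np.full(2, 5.0), pos, neg))
```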
40 CFR 63.7852 - What definitions apply to this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... hot metal, usually with dry air or nitrogen, to remove sulfur. Deviation means any instance in which... capture and collection of secondary emissions from a basic oxygen process furnace. Sinter cooler means the...
40 CFR 63.7852 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... hot metal, usually with dry air or nitrogen, to remove sulfur. Deviation means any instance in which... capture and collection of secondary emissions from a basic oxygen process furnace. Sinter cooler means the...
40 CFR 63.7852 - What definitions apply to this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... hot metal, usually with dry air or nitrogen, to remove sulfur. Deviation means any instance in which... capture and collection of secondary emissions from a basic oxygen process furnace. Sinter cooler means the...
40 CFR 63.7852 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... hot metal, usually with dry air or nitrogen, to remove sulfur. Deviation means any instance in which... capture and collection of secondary emissions from a basic oxygen process furnace. Sinter cooler means the...
40 CFR 63.7852 - What definitions apply to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... hot metal, usually with dry air or nitrogen, to remove sulfur. Deviation means any instance in which... capture and collection of secondary emissions from a basic oxygen process furnace. Sinter cooler means the...
Lead oxide-decorated graphene oxide/epoxy composite towards X-Ray radiation shielding
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Alireza; Mousavi, Seyyed Mojtaba; Faghihi, Reza; Arjmand, Mohammad; Sina, Sedigheh; Amani, Ali Mohammad
2018-05-01
In this study, employing the modified Hummers method coupled with a multi-stage manufacturing procedure, graphene oxide (GO) decorated with Pb3O4 (GO-Pb3O4) at different weight ratios was synthesized. Thereupon, via the vacuum shock technique, composites holding GO-Pb3O4 at different filler loadings (5 and 10 wt%) and thicknesses (4 and 6 mm) were fabricated. Successful decoration of GO with Pb3O4 was confirmed via FTIR analysis. Moreover, the particle size distribution of the produced fillers was examined using a particle size analyzer. X-ray attenuation examination revealed that reinforcement of epoxy-based composites with GO-Pb3O4 led to a significant improvement in the overall attenuation rate of the X-ray beam. For instance, composites containing 10 wt% GO-Pb3O4 with 6 mm thickness showed 4.06, 4.83 and 3.91 mm equivalent aluminum thickness at 40, 60 and 80 kVp energies, denoting 124.3, 124.6 and 103.6% improvement in the X-ray attenuation rate compared to a sample holding neat epoxy resin, respectively. Simulation results revealed that the effect of GO-Pb3O4 loading on the X-ray shielding performance diminished as the voltage of the applied X-ray beam increased.
Ion Beam Facility at the University of Chile; Applications and Basic Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miranda, P. A.; Morales, J. R.; Cancino, S.
2010-08-04
The main characteristics of the ion beam facility based on a 3.75 MeV Van de Graaff accelerator at the University of Chile are described in this work. Current activities are mainly focused on the application of Ion Beam Analysis techniques for environmental, archaeological, and materials science analysis. For instance, Rutherford Backscattering Spectrometry (RBS) is applied to measure thin gold film thicknesses, which are used to determine their resistivity and other electrical properties. At this laboratory, the Proton Induced X-Ray Emission (PIXE) and Proton Elastic Scattering Analysis (PESA) methodologies are extensively used for trace element analysis of urban aerosols (Santiago, Ciudad de Mexico). A similar study is being carried out at the Antarctic Peninsula. Characterization studies on obsidian and vitreous dacite samples using PIXE have also been performed, allowing some of these artifacts to be matched with geological source sites in Chile. Basic physics research is being carried out by measuring low-energy cross section values for the reactions 63Cu(d,p)64Cu and NatZn(p,x)67Ga. Both radionuclides 64Cu and 67Ga are required for applications in medicine. Ongoing stopping power cross section measurements of protons and alphas on Pd, Cu, Bi and Mylar are briefly discussed.
Ion Beam Facility at the University of Chile; Applications and Basic Research
NASA Astrophysics Data System (ADS)
Miranda, P. A.; Morales, J. R.; Cancino, S.; Dinator, M. I.; Donoso, N.; Sepúlveda, A.; Ortiz, P.; Rojas, S.
2010-08-01
The main characteristics of the ion beam facility based on a 3.75 MeV Van de Graaff accelerator at the University of Chile are described in this work. Current activities are mainly focused on the application of Ion Beam Analysis techniques for environmental, archaeological, and materials science analysis. For instance, Rutherford Backscattering Spectrometry (RBS) is applied to measure thin gold film thicknesses, which are used to determine their resistivity and other electrical properties. At this laboratory, the Proton Induced X-Ray Emission (PIXE) and Proton Elastic Scattering Analysis (PESA) methodologies are extensively used for trace element analysis of urban aerosols (Santiago, Ciudad de Mexico). A similar study is being carried out at the Antarctic Peninsula. Characterization studies on obsidian and vitreous dacite samples using PIXE have also been performed, allowing some of these artifacts to be matched with geological source sites in Chile. Basic physics research is being carried out by measuring low-energy cross section values for the reactions 63Cu(d,p)64Cu and NatZn(p,x)67Ga. Both radionuclides 64Cu and 67Ga are required for applications in medicine. Ongoing stopping power cross section measurements of protons and alphas on Pd, Cu, Bi and Mylar are briefly discussed.
NASA Astrophysics Data System (ADS)
Pettit, J. R.; Walker, A. E.; Lowe, M. J. S.
2015-03-01
Pulse-echo ultrasonic NDE examination of large pressure vessel forgings is a design and construction code requirement in the power generation industry. Such inspections aim to size and characterise potential defects that may have formed during the forging process. Typically, these defects have a range of orientations and surface roughnesses which can greatly affect ultrasonic wave scattering behaviour. Ultrasonic modelling techniques can provide insight into defect response and therefore aid in characterisation. However, analytical approaches to solving these scattering problems can become inaccurate, especially when applied to increasingly complex defect geometries. To overcome these limitations, an elastic Finite Element (FE) method has been developed to simulate pulse-echo inspections of embedded planar defects. The FE model comprises a significantly reduced spatial domain allowing for a Monte-Carlo based approach to consider multiple realisations of defect orientation and surface roughness. The results confirm that defects aligned perpendicular to the path of beam propagation attenuate ultrasonic signals according to the level of surface roughness. However, for defects orientated away from this plane, surface roughness can increase the magnitude of the scattered component propagating back along the path of the incident beam. This study therefore highlights instances where defect roughness increases the magnitude of ultrasonic scattered signals, as opposed to the attenuation which is more often assumed.
NASA Astrophysics Data System (ADS)
de Boer, Maaike H. T.; Bouma, Henri; Kruithof, Maarten C.; ter Haar, Frank B.; Fischer, Noëlle M.; Hagendoorn, Laurens K.; Joosten, Bart; Raaijmakers, Stephan
2017-10-01
The information available on-line and off-line, from open as well as from private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capabilities to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitutes a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows querying of the database by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system and our tools exploit possibilities provided by open source toolboxes, contributing to the technical autonomy of LEAs.
Layout Slam with Model Based Loop Closure for 3d Indoor Corridor Reconstruction
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Jung, J.; Shahbazi, M.; Kang, J.
2018-05-01
In this paper, we extend a recently proposed visual Simultaneous Localization and Mapping (SLAM) technique, known as Layout SLAM, to make it robust against error accumulation, abrupt changes of camera orientation and mis-association of newly visited parts of the scene with previously visited landmarks. To do so, we present a novel technique of loop closing based on layout model matching; i.e., both model information (topology and geometry of reconstructed models) and image information (photometric features) are used to address loop-closure detection. The advantages of using the layout-related information in the proposed loop-closing technique are twofold. First, it imposes a metric constraint on global map consistency and, thus, adjusts the mapping scale drifts. Second, it can reduce matching ambiguity in the context of indoor corridors, where the scene is homogeneously textured and extracting a sufficient amount of distinguishable point features is a challenging task. To test the impact of the proposed technique on the performance of Layout SLAM, we performed experiments on wide-angle videos captured by a handheld camera. This dataset was collected from the indoor corridors of a building at York University. The obtained results demonstrate that the proposed method successfully detects the instances of loops while producing very limited trajectory errors.
Automated Inference of Chemical Discriminants of Biological Activity.
Raschka, Sebastian; Scott, Anne M; Huertas, Mar; Li, Weiming; Kuhn, Leslie A
2018-01-01
Ligand-based virtual screening has become a standard technique for the efficient discovery of bioactive small molecules. Following assays to determine the activity of compounds selected by virtual screening, or other approaches in which dozens to thousands of molecules have been tested, machine learning techniques make it straightforward to discover the patterns of chemical groups that correlate with the desired biological activity. Defining the chemical features that generate activity can be used to guide the selection of molecules for subsequent rounds of screening and assaying, as well as help design new, more active molecules for organic synthesis. The quantitative structure-activity relationship machine learning protocols we describe here, using decision trees, random forests, and sequential feature selection, take as input the chemical structure of a single, known active small molecule (e.g., an inhibitor, agonist, or substrate) for comparison with the structure of each tested molecule. Knowledge of the atomic structure of the protein target and its interactions with the active compound are not required. These protocols can be modified and applied to any data set that consists of a series of measured structural, chemical, or other features for each tested molecule, along with the experimentally measured value of the response variable you would like to predict or optimize for your project, for instance, inhibitory activity in a biological assay or ΔG of binding. To illustrate the use of different machine learning algorithms, we step through the analysis of a dataset of inhibitor candidates from virtual screening that were tested recently for their ability to inhibit GPCR-mediated signaling in a vertebrate.
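A minimal sketch of such a protocol using scikit-learn's random forest and sequential (forward) feature selection on a synthetic feature table; the data, feature count, and parameter choices are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Hypothetical dataset: one row per tested molecule, columns are chemical
# features (e.g., similarity to the known active, group counts), and the
# response is active (1) / inactive (0) in the assay.
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=120) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
selector = SequentialFeatureSelector(forest, n_features_to_select=3,
                                     direction="forward", cv=5)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))

forest.fit(selector.transform(X), y)          # refit on the selected features
print("feature importances:", forest.feature_importances_.round(2))
```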
Topology-independent shape modeling scheme
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.; Vemuri, Baba C.
1993-06-01
Developing shape models is an important aspect of computer vision research. Geometric and differential properties of the surface can be computed from shape models. They also aid the tasks of object representation and recognition. In this paper we present an innovative new approach for shape modeling which, while retaining important features of the existing methods, overcomes most of their limitations. Our technique can be applied to model arbitrarily complex shapes, shapes with protrusions, and to situations where no a priori assumption about the object's topology can be made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. Our method is based on the level set ideas developed by Osher & Sethian to follow propagating solid/liquid interfaces with curvature-dependent speeds. The interface is a closed, nonintersecting, hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. We move the interface by solving a `Hamilton-Jacobi' type equation written for a function in which the interface is a particular level set. A speed function synthesized from the image is used to stop the interface in the vicinity of the object boundaries. The resulting equations of motion are solved by numerical techniques borrowed from the technology of hyperbolic conservation laws. An added advantage of this scheme is that it can easily be extended to any number of space dimensions. The efficacy of the scheme is demonstrated with numerical experiments on synthesized images and noisy medical images.
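The governing equation referred to above is the standard Osher-Sethian level-set evolution; one common way to write it, with an image-derived stopping term modulating a constant-plus-curvature speed (the paper's exact synthesis of the speed function from the image may differ), is:

```latex
% Interface embedded as the zero level set of \psi, moving with speed F along its normal:
\[
  \psi_t + F \,\lvert \nabla \psi \rvert = 0,
  \qquad
  F = g_I(x,y)\,\bigl(F_0 - \epsilon\,\kappa\bigr),
  \qquad
  g_I(x,y) = \frac{1}{1 + \lvert \nabla \bigl(G_\sigma * I\bigr)(x,y) \rvert},
\]
% where \kappa is the curvature of the level set, F_0 a constant advection speed,
% and g_I a stopping term synthesized from the smoothed image I that slows the
% front in the vicinity of object boundaries.
```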
AI Techniques in a Context-Aware Ubiquitous Environment
NASA Astrophysics Data System (ADS)
Coppola, Paolo; Mea, Vincenzo Della; di Gaspero, Luca; Lomuscio, Raffaella; Mischis, Danny; Mizzaro, Stefano; Nazzi, Elena; Scagnetto, Ivan; Vassena, Luca
Nowadays, the mobile computing paradigm and the widespread diffusion of mobile devices are quickly changing and replacing many common assumptions about software architectures and interaction/communication models. The environment, in particular, or more generally the so-called user context, is claiming a central role in the everyday use of cellular phones, PDAs, etc. This is due to the huge amount of data “suggested” by the surrounding environment that can be helpful in many common tasks. For instance, the current context can help a search engine to refine the set of results in a useful way, providing the user with more suitable and exploitable information. Moreover, we can take full advantage of this new data source by “pushing” active contents towards mobile devices, empowering the latter with new features (e.g., applications) that allow the user to fruitfully interact with the current context. Following this vision, mobile devices become dynamic self-adapting tools, according to the user's needs and the possibilities offered by the environment. The present work proposes MoBe: an approach for providing a basic infrastructure for pervasive context-aware applications on mobile devices, in which AI techniques (namely a principled combination of rule-based systems, Bayesian networks and ontologies) are applied to context inference. The aim is to devise a general inferential framework that makes the development of context-aware applications easier by integrating the information coming from physical and logical sensors (e.g., position, agenda) and reasoning about this information in order to infer new and more abstract contexts.
Nohria, Nitin; Joyce, William; Roberson, Bruce
2003-07-01
When it comes to improving business performance, managers have no shortage of tools and techniques to choose from. But what really works? What's critical, and what's optional? Two business professors and a former McKinsey consultant set out to answer those questions. In a ground-breaking, five-year study that involved more than 50 academics and consultants, the authors analyzed 200 management techniques as they were employed by 160 companies over ten years. Their findings at a high level? Business basics really matter. In this article, the authors outline the management practices that are imperative for sustained superior financial performance--their "4+2 formula" for business success. They provide examples of companies that achieved varying degrees of success depending on whether they applied the formula, and they suggest ways that other companies can achieve excellence. The 160 companies in their study--called the Evergreen Project--were divided into 40 quads, each comprising four companies in a narrowly defined industry. Based on its performance between 1986 and 1996, each company in each quad was classified as either a winner (for instance, Dollar General), a loser (Kmart), a climber (Target), or a tumbler (the Limited). Without exception, the companies that outperformed their industry peers excelled in what the authors call the four primary management practices: strategy, execution, culture, and structure. And they supplemented their great skill in those areas with a mastery of any two of four secondary management practices: talent, leadership, innovation, and mergers and partnerships. A company that consistently follows this 4+2 formula has a better than 90% chance of sustaining superior performance, according to the authors.
Knowledge Discovery in Spectral Data by Means of Complex Networks
Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro
2013-01-01
In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease. PMID:24957895
Knowledge discovery in spectral data by means of complex networks.
Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Jaimes-Reategui, Rider; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro
2013-03-11
In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease.
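A minimal sketch of the network-reconstruction step, using the difference in bin-to-bin correlation between diseased and control spectra as a stand-in for the paper's disease-associated pattern criterion (the threshold, data, and criterion are illustrative only):

```python
import numpy as np
import networkx as nx

def spectral_network(spectra, labels, threshold=0.4):
    """Build a network whose nodes are spectral bins; connect two bins when
    the correlation of their intensities differs strongly between diseased
    (label 1) and control (label 0) subjects. A simplified stand-in for the
    paper's pattern criterion, used only to illustrate the pipeline.
    """
    spectra, labels = np.asarray(spectra), np.asarray(labels)
    corr_d = np.corrcoef(spectra[labels == 1].T)
    corr_c = np.corrcoef(spectra[labels == 0].T)
    diff = np.abs(corr_d - corr_c)
    g = nx.Graph()
    g.add_nodes_from(range(spectra.shape[1]))
    for i in range(spectra.shape[1]):
        for j in range(i + 1, spectra.shape[1]):
            if diff[i, j] > threshold:
                g.add_edge(i, j, weight=float(diff[i, j]))
    return g

# Toy usage: 40 subjects x 12 bins; the resulting structural measures would
# then feed a standard data-mining algorithm for new (unlabeled) subjects.
rng = np.random.default_rng(4)
spectra = rng.normal(size=(40, 12))
labels = np.array([1] * 20 + [0] * 20)
print(spectral_network(spectra, labels).number_of_edges())
```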
Stimuli-responsive cross-linked micelles for on-demand drug delivery against cancers
Li, Yuanpei; Xiao, Kai; Zhu, Wei; Deng, Wenbin; Lam, Kit S.
2013-01-01
Stimuli-responsive cross-linked micelles (SCMs) represent an ideal nanocarrier system for drug delivery against cancers. SCMs exhibit superior structural stability compared to their non-crosslinked counterparts. Therefore, these nanocarriers are able to minimize premature drug release during blood circulation. The introduction of environmentally sensitive crosslinkers or assembly units makes SCMs responsive to single or multiple stimuli present in the local tumor microenvironment or to exogenously applied stimuli. In these instances, the payload drug is released almost exclusively in cancerous tissue or cancer cells upon accumulation via the enhanced permeability and retention effect or receptor-mediated endocytosis. In this review, we highlight recent advances in the development of SCMs for cancer therapy. We also introduce the latest biophysical techniques, such as electron paramagnetic resonance (EPR) spectroscopy and fluorescence resonance energy transfer (FRET), for the characterization of the interactions between SCMs and blood proteins. PMID:24060922
NASA Technical Reports Server (NTRS)
Peterson, David L.; Condon, Estelle (Technical Monitor)
2000-01-01
Proponents of near infrared reflectance spectroscopy (NIRS) have been exceptionally successful in applying NIRS techniques to many instances of organic material analysis. While this research and development began in the 1950s, in recent years advances in instrumentation have allowed NIRS to begin to find its way into food processing systems, food quality and safety, textiles and much more. Imaging high-spectral-resolution spectrometers are now being evaluated for the rapid scanning of foodstuffs, such as the inspection of whole chicken carcasses for fecal contamination. The imaging methods are also finding their way into medical applications, such as the non-intrusive monitoring of blood oxygenation in newborns. Can these scientific insights also be taken into space and successfully used to measure the Earth's condition? Is there an analog between the organic analyses in the laboratory and clinical settings and the study of Earth's living biosphere? How are the methods comparable and how do they differ?
NASA Astrophysics Data System (ADS)
Gu, Zhi-Gang; Heinke, Lars; Wöll, Christof; Neumann, Tobias; Wenzel, Wolfgang; Li, Qiang; Fink, Karin; Gordan, Ovidiu D.; Zahn, Dietrich R. T.
2015-11-01
The electronic properties of metal-organic frameworks (MOFs) are increasingly attracting attention due to potential applications in sensor techniques and (micro-)electronic engineering, for instance as a low-k dielectric in semiconductor technology. Here, the band gap and the band structure of MOFs of type HKUST-1 are studied in detail by means of spectroscopic ellipsometry applied to thin surface-mounted MOF films and by means of quantum chemical calculations. The analysis of the density of states, the band structure, and the excitation spectrum reveals the importance of the empty Cu-3d orbitals for the electronic properties of HKUST-1. This study shows that, in contrast to common belief, even in the case of this fairly "simple" MOF, the excitation spectra cannot be explained by a superposition of "intra-unit" excitations within the individual building blocks. Instead, "inter-unit" excitations also have to be considered.
Rios, Anthony; Kavuluru, Ramakanth
2017-11-01
The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) provided a set of 1000 neuropsychiatric notes to participants as part of a competition to predict psychiatric symptom severity scores. This paper summarizes our methods, results, and experiences based on our participation in the second track of the shared task. Classical methods of text classification usually fall into one of three problem types: binary, multi-class, and multi-label classification. In this effort, we study ordinal regression problems with text data where misclassifications are penalized differently based on how far apart the ground truth and model predictions are on the ordinal scale. Specifically, we present our entries (methods and results) in the N-GRID shared task in predicting research domain criteria (RDoC) positive valence ordinal symptom severity scores (absent, mild, moderate, and severe) from psychiatric notes. We propose a novel convolutional neural network (CNN) model designed to handle ordinal regression tasks on psychiatric notes. Broadly speaking, our model combines an ordinal loss function, a CNN, and conventional feature engineering (wide features) into a single model which is learned end-to-end. Given interpretability is an important concern with nonlinear models, we apply a recent approach called locally interpretable model-agnostic explanation (LIME) to identify important words that lead to instance specific predictions. Our best model entered into the shared task placed third among 24 teams and scored a macro mean absolute error (MMAE) based normalized score (100·(1-MMAE)) of 83.86. Since the competition, we improved our score (using basic ensembling) to 85.55, comparable with the winning shared task entry. Applying LIME to model predictions, we demonstrate the feasibility of instance specific prediction interpretation by identifying words that led to a particular decision. In this paper, we present a method that successfully uses wide features and an ordinal loss function applied to convolutional neural networks for ordinal text classification specifically in predicting psychiatric symptom severity scores. Our approach leads to excellent performance on the N-GRID shared task and is also amenable to interpretability using existing model-agnostic approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
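For concreteness, the evaluation metric quoted above can be computed as sketched below, following the formula stated in the abstract (100·(1−MMAE), with MMAE averaged over the four ordinal classes); the toy labels are illustrative.

```python
import numpy as np

def mmae_score(y_true, y_pred, classes=(0, 1, 2, 3)):
    """Macro mean absolute error over the ordinal classes (absent=0 ... severe=3)
    and the normalized score 100*(1 - MMAE) quoted in the abstract.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.abs(y_pred[y_true == c] - c).mean()
                 for c in classes if np.any(y_true == c)]
    mmae = float(np.mean(per_class))
    return mmae, 100.0 * (1.0 - mmae)

# Toy usage: off-by-one errors on the ordinal scale are penalized mildly,
# distant misclassifications much more.
print(mmae_score([0, 1, 2, 3, 3], [0, 2, 2, 3, 1]))  # -> (0.5, 50.0)
```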
NASA Astrophysics Data System (ADS)
Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe
2018-03-01
Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. However, for any future use as in vivo probes, for instance, a major problem may arise from protein structure changes upon the nucleation of an AuNC within the protein. In this work, we propose a simple and reliable fluorescence-based technique for measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time-resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs with protein templates of increasing size, e.g. insulin, lysozyme, and bovine serum albumin (BSA). The comparison of the sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) between bare proteins and proteins containing gold clusters allows us to address the volume changes induced either by conformational changes (for BSA) or by the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation.
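The link between the measured rotational correlation time and the hydrodynamic volume is the Stokes-Einstein-Debye relation, V_h = k_B·T·θ/η; a small worked example with illustrative numbers (not values from the paper):

```python
# Stokes-Einstein-Debye estimate of hydrodynamic volume from the rotational
# correlation time extracted from the anisotropy decay: V_h = k_B * T * theta / eta.
k_B = 1.380649e-23          # J/K
T = 298.15                  # K
eta = 0.89e-3               # Pa*s, water near 25 C
theta = 40e-9               # s, example rotational correlation time

V_h = k_B * T * theta / eta                       # hydrodynamic volume, m^3
r_h = (3 * V_h / (4 * 3.141592653589793)) ** (1 / 3)  # equivalent sphere radius
print(f"V_h = {V_h * 1e27:.0f} nm^3, r_h = {r_h * 1e9:.2f} nm")  # ~185 nm^3, ~3.5 nm
```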
All-gas-phase synthesis of UiO-66 through modulated atomic layer deposition
NASA Astrophysics Data System (ADS)
Lausund, Kristian Blindheim; Nilsen, Ola
2016-11-01
Thin films of stable metal-organic frameworks (MOFs) such as UiO-66 have enormous application potential, for instance in microelectronics. However, all-gas-phase deposition techniques are currently not available for such MOFs. We here report on thin-film deposition of the thermally and chemically stable UiO-66 in an all-gas-phase process by the aid of atomic layer deposition (ALD). Sequential reactions of ZrCl4 and 1,4-benzenedicarboxylic acid produce amorphous organic-inorganic hybrid films that are subsequently crystallized to the UiO-66 structure by treatment in acetic acid vapour. We also introduce a new approach to control the stoichiometry between metal clusters and organic linkers by modulation of the ALD growth with additional acetic acid pulses. An all-gas-phase synthesis technique for UiO-66 could enable implementations in microelectronics that are not compatible with solvothermal synthesis. Since this technique is ALD-based, it could also give enhanced thickness control and the possibility to coat irregular substrates with high aspect ratios.
Electronic Detection of Delayed Test Result Follow-Up in Patients with Hypothyroidism.
Meyer, Ashley N D; Murphy, Daniel R; Al-Mutairi, Aymer; Sittig, Dean F; Wei, Li; Russo, Elise; Singh, Hardeep
2017-07-01
Delays in following up abnormal test results are a common problem in outpatient settings. Surveillance systems that use trigger tools to identify delayed follow-up can help reduce missed opportunities in care. To develop and test an electronic health record (EHR)-based trigger algorithm to identify instances of delayed follow-up of abnormal thyroid-stimulating hormone (TSH) results in patients being treated for hypothyroidism. We developed an algorithm using structured EHR data to identify patients with hypothyroidism who had delayed follow-up (>60 days) after an abnormal TSH. We then retrospectively applied the algorithm to a large EHR data warehouse within the Department of Veterans Affairs (VA), on patient records from two large VA networks for the period from January 1, 2011, to December 31, 2011. Identified records were reviewed to confirm the presence of delays in follow-up. During the study period, 645,555 patients were seen in the outpatient setting within the two networks. Of 293,554 patients with at least one TSH test result, the trigger identified 1250 patients on treatment for hypothyroidism with elevated TSH. Of these patients, 271 were flagged as potentially having delayed follow-up of their test result. Chart reviews confirmed delays in 163 of the 271 flagged patients (PPV = 60.1%). An automated trigger algorithm applied to records in a large EHR data warehouse identified patients with hypothyroidism with potential delays in thyroid function test results follow-up. Future prospective application of the TSH trigger algorithm can be used by clinical teams as a surveillance and quality improvement technique to monitor and improve follow-up.
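A simplified sketch of such a trigger, checking only for a repeat TSH within the follow-up window (the published algorithm also uses medication data and other structured criteria; column names and the TSH threshold below are illustrative):

```python
import pandas as pd

def flag_delayed_followup(tsh, threshold=10.0, window_days=60):
    """Flag patients with an elevated TSH result and no repeat TSH within
    `window_days`. A simplified stand-in for the published trigger logic.
    """
    flagged = []
    for _, row in tsh[tsh["value"] > threshold].iterrows():
        follow = tsh[(tsh["patient_id"] == row["patient_id"]) &
                     (tsh["date"] > row["date"]) &
                     (tsh["date"] <= row["date"] + pd.Timedelta(days=window_days))]
        if follow.empty:
            flagged.append(row["patient_id"])
    return flagged

tsh = pd.DataFrame({"patient_id": [1, 1, 2],
                    "value": [12.0, 3.0, 15.0],
                    "date": pd.to_datetime(["2011-02-01", "2011-03-01", "2011-05-01"])})
print(flag_delayed_followup(tsh))  # -> [2]; patient 1 had a repeat TSH within 28 days
```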
Weighted networks as randomly reinforced urn processes
NASA Astrophysics Data System (ADS)
Caldarelli, Guido; Chessa, Alessandro; Crimaldi, Irene; Pammolli, Fabio
2013-02-01
We analyze weighted networks as randomly reinforced urn processes, in which the edge-total weights are determined by a reinforcement mechanism. We develop a statistical test and a procedure based on it to study the evolution of networks over time, detecting the “dominance” of some edges with respect to the others and then assessing if a given instance of the network is taken at its steady state or not. Distance from the steady state can be considered as a measure of the relevance of the observed properties of the network. Our results are quite general, in the sense that they are not based on a particular probability distribution or functional form of the random weights. Moreover, the proposed tool can be applied also to dense networks, which have received little attention by the network community so far, since they are often problematic. We apply our procedure in the context of the International Trade Network, determining a core of “dominant edges.”
Text-Based Negotiated Interaction of NNS-NNS and NNS-NS Dyads on Facebook
ERIC Educational Resources Information Center
Liu, Sarah Hsueh-Jui
2017-01-01
This study sought to determine the difference in text-based negotiated interaction between non-native speakers of English (NNS-NNS) and between non-native and natives (NNS-NS) in terms of the frequency of negotiated instances, successfully resolved instances, and interactional strategy use when the dyads collaborated on Facebook. It involved 10…
NASA Astrophysics Data System (ADS)
Olkhov, A.; Lobanov, A.; Staroverova, O.; Tyubaeva, P.; Zykova, A.; Pantyukhov, P.; Popov, A.; Iordanskii, A.
2017-02-01
Ferric iron (III)-based complexes with porphyrins are homogeneous catalysts of the auto-oxidation of several biogenic substances. The most promising carriers for functional low-molecular substances are polymer fibers with nanoscale dimensions. The application of natural polymers, for instance poly-(3-hydroxybutyrate) or polylactic acid, makes it possible to develop fiber and matrix systems that address ecological problems in biomedicine. The aim of this article is to obtain a fibrous material based on poly-(3-hydroxybutyrate) and ferric iron (III)-based porphyrins and to examine its physical-chemical and antibacterial properties. The work is focused on the possibility of applying such a material for biomedical purposes. Microphotographs of the obtained material showed that the addition of 1 wt.% ferric iron (III)-based porphyrins to PHB led to an increased average fiber diameter and the disappearance of spindle-like structures in comparison with the initial PHB. Biological tests of the nonwoven fabrics showed that fibers containing ferric iron (III)-based tetraphenylporphyrins were active against bacterial test cultures. It was found that materials based on the polymer and the metal-porphyrin complexes can be used to produce decontamination equipment against pathogenic and opportunistic microorganisms.
Ammonia-based feedforward and feedback aeration control in activated sludge processes.
Rieger, Leiv; Jones, Richard M; Dold, Peter L; Bott, Charles B
2014-01-01
Aeration control at wastewater treatment plants based on ammonia as the controlled variable is applied for one of two reasons: (1) to reduce aeration costs, or (2) to reduce peaks in effluent ammonia. Aeration limitation has proven to result in significant energy savings, may reduce external carbon addition, and can improve denitrification and biological phosphorus (bio-P) performance. Ammonia control for limiting aeration has been based mainly on feedback control to constrain complete nitrification by maintaining approximately one to two milligrams of nitrogen per liter of ammonia in the effluent. Increased attention has been given to feedforward ammonia control, where aeration control is based on monitoring influent ammonia load. Typically, the intent is to anticipate the impact of sudden load changes, and thereby reduce effluent ammonia peaks. This paper evaluates the fundamentals of ammonia control with a primary focus on feedforward control concepts. A case study discussion is presented that reviews different ammonia-based control approaches. In most instances, feedback control meets the objectives for both aeration limitation and containment of effluent ammonia peaks. Feedforward control, applied specifically for switching aeration on or off in swing zones, can be beneficial when the plant encounters particularly unusual influent disturbances.
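A minimal sketch of the feedback variant, in which effluent ammonia trims the dissolved-oxygen set-point passed to the aeration controller; the proportional form, gains, and limits below are illustrative, not a full-scale design.

```python
def ammonia_feedback_do_setpoint(nh4_meas, nh4_setpoint=1.5, do_setpoint=1.5,
                                 gain=0.5, do_min=0.3, do_max=2.5):
    """One step of a simple proportional ammonia feedback controller that trims
    the dissolved-oxygen (DO) set-point: aerate more when effluent ammonia is
    above target, less when below. Full-scale controllers are typically
    cascaded PI loops with anti-windup; this is only a sketch.
    """
    new_do = do_setpoint + gain * (nh4_meas - nh4_setpoint)
    return min(max(new_do, do_min), do_max)

# Toy usage: rising effluent ammonia pushes the DO set-point up, while low
# ammonia allows aeration limitation (energy savings).
for nh4 in (0.5, 1.5, 3.0):
    print(nh4, "->", ammonia_feedback_do_setpoint(nh4))
```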
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
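As a loose, simplified stand-in for per-symbol scoring (the paper's consensus measure is not reproduced here), the sketch below gives each text symbol the best match count of any pattern-length window covering it, so that symbols inside approximate occurrences of the pattern receive high scores:

```python
import numpy as np

def symbol_scores(text, pattern):
    """Assign each text symbol a score equal to the best match count of any
    pattern-length window that covers it. Illustrates scoring symbols instead
    of computing an edit distance; not the paper's consensus measure.
    """
    n, m = len(text), len(pattern)
    window = np.array([sum(a == b for a, b in zip(text[s:s + m], pattern))
                       for s in range(n - m + 1)])
    scores = np.zeros(n)
    for s, w in enumerate(window):
        scores[s:s + m] = np.maximum(scores[s:s + m], w)
    return scores

text = "xxabcdxxabXdxx"
print(symbol_scores(text, "abcd"))  # peaks over both the exact and the approximate instance
```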
Diagnostics and Active Control of Aircraft Interior Noise
NASA Technical Reports Server (NTRS)
Fuller, C. R.
1998-01-01
This project deals with developing advanced methods for investigating and controlling interior noise in aircraft. The work concentrates on developing and applying the techniques of Near Field Acoustic Holography (NAH) and Principal Component Analysis (PCA) to the aircraft interior noise dynamic problem. This involves investigating the current state of the art, developing new techniques and then applying them to the particular problem being studied. The knowledge gained under the first part of the project was then used to develop and apply new, advanced noise control techniques for reducing interior noise. A new fully active control approach based on the PCA was developed and implemented on a test cylinder. Finally an active-passive approach based on tunable vibration absorbers was to be developed and analytically applied to a range of test structures from simple plates to aircraft fuselages.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
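A minimal sketch of the local-entropy step that drives such segmentation, computing the first-order Shannon entropy of each block of an 8-bit image (block size and test image are illustrative):

```python
import numpy as np

def block_entropy(image, block=16):
    """First-order Shannon entropy (bits/pixel) of each block of an 8-bit
    image. A map like this can drive segmentation into regions of similar
    entropy before choosing a coder for each region.
    """
    h, w = image.shape
    out = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            tile = image[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            counts = np.bincount(tile.ravel(), minlength=256).astype(float)
            p = counts[counts > 0] / counts.sum()
            out[bi, bj] = -np.sum(p * np.log2(p))
    return out

rng = np.random.default_rng(5)
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = rng.integers(0, 256, size=(64, 32), dtype=np.uint8)  # noisy half
print(block_entropy(img).round(1))  # 0 bits in the flat half, ~7 in the noisy half
```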
Recognition Using Hybrid Classifiers.
Osadchy, Margarita; Keren, Daniel; Raviv, Dolev
2016-04-01
A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
How Many Is Enough?—Statistical Principles for Lexicostatistics
Zhang, Menghan; Gong, Tao
2016-01-01
Lexicostatistics has been applied in linguistics to inform phylogenetic relations among languages. There are two important yet not well-studied parameters in this approach: the conventional size of vocabulary list to collect potentially true cognates and the minimum matching instances required to confirm a recurrent sound correspondence. Here, we derive two statistical principles from stochastic theorems to quantify these parameters. These principles validate the practice of using the Swadesh 100- and 200-word lists to indicate degree of relatedness between languages, and enable a frequency-based, dynamic threshold to detect recurrent sound correspondences. Using statistical tests, we further evaluate the generality of the Swadesh 100-word list compared to the Swadesh 200-word list and other 100-word lists sampled randomly from the Swadesh 200-word list. All these provide mathematical support for applying lexicostatistics in historical and comparative linguistics. PMID:28018261
Multiple-instance ensemble learning for hyperspectral images
NASA Astrophysics Data System (ADS)
Ergul, Ugur; Bilgin, Gokhan
2017-10-01
An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use in hyperspectral images (HSIs), inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on the selected instances. In addition to bootstrap aggregation, random subspace is another method used to diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are demonstrated with state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equipollent non-MIL algorithms.
Havrila, Marek; Réblová, Kamila; Zirbel, Craig L.; Leontis, Neocles B.; Šponer, Jiří
2013-01-01
The Sarcin-Ricin RNA motif (SR motif) is one of the most prominent recurrent RNA building blocks that occurs in many different RNA contexts and folds autonomously, i.e., in a context-independent manner. In this study, we combined bioinformatics analysis with explicit-solvent molecular dynamics (MD) simulations to better understand the relation between the RNA sequence and the evolutionary patterns of the SR motif. A SHAPE probing experiment was also performed to confirm the fidelity of the MD simulations. We identified 57 instances of the SR motif in a non-redundant subset of the RNA X-ray structure database and analyzed their basepairing, base-phosphate, and backbone-backbone interactions. We extracted sequences aligned to these instances from large ribosomal RNA alignments to determine the frequency of occurrence of different sequence variants. We then used a simple scoring scheme based on isostericity to suggest 10 sequence variants with a highly variable expected degree of compatibility with the SR motif 3D structure. We carried out MD simulations of SR motifs with these base substitutions. Non-isosteric base substitutions led to unstable structures, but so did isosteric substitutions which were unable to make key base-phosphate interactions. The MD technique explains why some potentially isosteric SR motifs are not realized during evolution. We also found that the inability to form a stable cWW geometry is an important factor in the case of the first base pair of the flexible region of the SR motif. Comparison of structural, bioinformatics, SHAPE probing and MD simulation data reveals that explicit-solvent MD simulations neatly reflect the viability of different sequence variants of the SR motif. Thus, MD simulations can efficiently complement bioinformatics tools in studies of conservation patterns of RNA motifs and provide atomistic insight into the role of their different signature interactions. PMID:24144333
Geometrically derived difference formulae for the numerical integration of trajectory problems
NASA Technical Reports Server (NTRS)
Mcleod, R. J. Y.; Sanz-Serna, J. M.
1981-01-01
The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.
Techniques for cash management in scheduling manufacturing operations
NASA Astrophysics Data System (ADS)
Morady Gohareh, Mehdy; Shams Gharneh, Naser; Ghasemy Yaghin, Reza
2017-06-01
The objective in traditional scheduling is usually time based. Minimizing the makespan, total flow times, total tardiness costs, etc. are instances of these objectives. In manufacturing, processing each job entails paying a cost and receiving a price. Thus, the objective should include some notion of managing the flow of cash. We have defined two new objectives: maximization of average and minimum available cash. For single machine scheduling, it is demonstrated that scheduling jobs in decreasing order of profit ratios maximizes the former and improves productivity. Moreover, scheduling jobs in increasing order of costs and breaking ties in decreasing order of prices maximizes the latter and creates protection against financial instability.
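The ordering rules above are concrete enough to sketch. The following Python fragment is a minimal illustration, not the paper's model: the job fields, the definition of the profit ratio (here price/cost), and the convention of sampling cash at job completions are all assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    proc_time: float   # processing time on the single machine
    cost: float        # cash paid out when the job starts
    price: float       # cash received when the job completes

def average_available_cash(sequence: List[Job], initial_cash: float = 0.0) -> float:
    """Average cash on hand sampled at each job completion, used here as a
    simple proxy for the time-averaged cash position."""
    cash, samples, t = initial_cash, [], 0.0
    for job in sequence:
        cash -= job.cost          # pay the processing cost when the job starts
        t += job.proc_time
        cash += job.price         # receive the price at completion
        samples.append(cash)
    return sum(samples) / len(samples)

# Heuristic from the abstract: order jobs by decreasing profit ratio
# (assumed here to be price/cost; the abstract does not define it).
jobs = [Job("A", 2.0, 5.0, 9.0), Job("B", 1.0, 2.0, 8.0), Job("C", 3.0, 4.0, 5.0)]
by_profit_ratio = sorted(jobs, key=lambda j: j.price / j.cost, reverse=True)
print(average_available_cash(by_profit_ratio))
```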
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Fluorescence-Based Sensor for Monitoring Activation of Lunar Dust
NASA Technical Reports Server (NTRS)
Wallace, William T.; Jeevarajan, Antony S.
2012-01-01
This sensor unit is designed to determine the level of activation of lunar dust or simulant particles using a fluorescent technique. Activation of the surface of a lunar soil sample (for instance, through grinding) should produce a freshly fractured surface. When these reactive surfaces interact with oxygen and water, they produce hydroxyl radicals. These radicals will react with a terephthalate diluted in the aqueous medium to form 2-hydroxyterephthalate. The fluorescence produced by 2-hydroxyterephthalate provides qualitative proof of the activation of the sample. Using a calibration curve produced by synthesized 2-hydroxyterephthalate, the amount of hydroxyl radicals produced as a function of sample concentration can also be determined.
A methodological approach for designing a usable ontology-based GUI in healthcare.
Lasierra, N; Kushniruk, A; Alesanco, A; Borycki, E; García, J
2013-01-01
This paper presents a methodological approach to the design and evaluation of an interface for an ontology-based system used for designing care plans for monitoring patients at home. In order to define the care plans, physicians need a tool for creating instances of the ontology and configuring some rules. Our purpose is to develop an interface to allow clinicians to interact with the ontology. Although ontology-driven applications do not necessarily present the ontology in the user interface, it is our hypothesis that showing selected parts of the ontology in a "usable" way could enhance clinicians' understanding and make the definition of the care plans easier. Based on prototyping and iterative testing, this methodology combines visualization techniques and usability methods. Preliminary results obtained after a formative evaluation indicate the effectiveness of the suggested combination.
Detection of Chamber Conditioning Through Optical Emission and Impedance Measurements
NASA Technical Reports Server (NTRS)
Cruden, Brett A.; Rao, M. V. V. S.; Sharma, Surendra P.; Meyyappan, Meyya
2001-01-01
During oxide etch processes, buildup of fluorocarbon residues on reactor sidewalls can cause run-to-run drift and will necessitate some time for conditioning and seasoning of the reactor. Though diagnostics can be applied to study and understand these phenomena, many of them are not practical for use in an industrial reactor. For instance, measurements of ion fluxes and energy by mass spectrometry show that the buildup of insulating fluorocarbon films on the reactor surface will cause a shift in both ion energy and current in an argon plasma. However, such a device cannot be easily integrated into a processing system. The shift in ion energy and flux will be accompanied by an increase in the capacitance of the plasma sheath. The shift in sheath capacitance can be easily measured by a common commercially available impedance probe placed on the inductive coil. A buildup of film on the chamber wall is expected to affect the production of fluorocarbon radicals, and thus the presence of such species in the optical emission spectrum of the plasma can be monitored as well. These two techniques are employed on a GEC (Gaseous Electronics Conference) Reference Cell to assess the validity of optical emission and impedance monitoring as a metric of chamber conditioning. These techniques are applied to experimental runs with CHF3 and CHF3/O2/Ar plasmas, with intermediate monitoring of pure argon plasmas as a reference case for chamber conditions.
Qualitative Methods in Mental Health Services Research
Palinkas, Lawrence A.
2014-01-01
Qualitative and mixed methods play a prominent role in mental health services research. However, the standards for their use are not always evident, especially for those not trained in such methods. This paper reviews the rationale and common approaches to using qualitative and mixed methods in mental health services and implementation research based on a review of the papers included in this special series along with representative examples from the literature. Qualitative methods are used to provide a “thick description” or depth of understanding to complement breadth of understanding afforded by quantitative methods, elicit the perspective of those being studied, explore issues that have not been well studied, develop conceptual theories or test hypotheses, or evaluate the process of a phenomenon or intervention. Qualitative methods adhere to many of the same principles of scientific rigor as quantitative methods, but often differ with respect to study design, data collection and data analysis strategies. For instance, participants for qualitative studies are usually sampled purposefully rather than at random and the design usually reflects an iterative process alternating between data collection and analysis. The most common techniques for data collection are individual semi-structured interviews, focus groups, document reviews, and participant observation. Strategies for analysis are usually inductive, based on principles of grounded theory or phenomenology. Qualitative methods are also used in combination with quantitative methods in mixed method designs for convergence, complementarity, expansion, development, and sampling. Rigorously applied qualitative methods offer great potential in contributing to the scientific foundation of mental health services research. PMID:25350675
Reachability analysis of real-time systems using time Petri nets.
Wang, J; Deng, Y; Xu, G
2000-01-01
Time Petri nets (TPNs) are a popular Petri net model for specification and verification of real-time systems. A fundamental and most widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using the technique. In this paper, we present a new reachability based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.
NASA Technical Reports Server (NTRS)
Thomas, F. P.
2006-01-01
Aerospace structures utilize innovative, lightweight composite materials for exploration activities. These structural components, due to various reasons including size limitations, manufacturing facilities, contractual obligations, or particular design requirements, will have to be joined. The common methodologies for joining composite components are the adhesively bonded and mechanically fastened joints and, in certain instances, both methods are simultaneously incorporated into the design. Guidelines and recommendations exist for engineers to develop design criteria and analyze and test composites. However, there are no guidelines or recommendations based on analysis or test data to specify a torque or torque range to apply to metallic mechanical fasteners used to join composite components. Utilizing the torque tension machine at NASA's Marshall Space Flight Center, an initial series of tests was conducted to determine the maximum torque that could be applied to a composite specimen. Acoustic emissions were used to nondestructively assess the specimens during the tests and thermographic imaging after the tests.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failure. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
Extracting Dynamic Evidence Networks
2004-12-01
on the performance of the three models as a function of training set size, and on experiments showing the viability of using active learning techniques...potential relation instances, which include 28K actual relations. 2.3.2 Active Learning We also ran a set of experiments designed to explore the...viability of using active learning techniques to maximize the usefulness of the training data annotated for use by the system. The idea is to
Craciun, Ana Maria; Focsan, Monica; Vulpoi, Adriana
2017-01-01
Metal and in particular noble metal nanoparticles represent a very special class of materials which can be applied as prepared or as composite materials. In most cases, two main properties are exploited in a vast number of publications: biocompatibility and surface plasmon resonance (SPR). For instance, these two important properties are exploitable in plasmonic diagnostics, bioactive glasses/glass ceramics and catalysis. The most frequently applied noble metal nanoparticle that is universally applicable in all the previously mentioned research areas is gold, although in the case of bioactive glasses/glass ceramics, silver and copper nanoparticles are more frequently applied. The composite partners/supports/matrix/scaffolds for these nanoparticles can vary depending on the chosen application (biopolymers, semiconductor-based composites: TiO2, WO3, Bi2WO6, biomaterials: SiO2 or P2O5-based glasses and glass ceramics, polymers: polyvinyl alcohol (PVA), gelatin, polyethylene glycol (PEG), polylactic acid (PLA), etc.). The present review targets scientific work on the applicability of these materials and the development of new approaches, focusing in several cases on the functioning mechanism and on the role of the noble metal. PMID:28773196
[A comprehensive approach to designing of magnetotherapy techniques based on the Atos device].
Raĭgorodskiĭ, Iu M; Semiachkin, G P; Tatarenko, D A
1995-01-01
The paper shows how to apply a comprehensive approach to the design of magnetic therapeutic techniques based on concomitant exposure to two or more physical factors. It shows the advantages of the running pattern of a magnetic field and photostimuli in terms of optimization of physiotherapeutic exposures. An Atos apparatus with an Amblio-1 attachment is used as an example to demonstrate how to apply the comprehensive approach in ophthalmology.
Implementation of a thesaurus in an electronic photograph imaging system
NASA Astrophysics Data System (ADS)
Partlow, Denise
1995-11-01
A photograph imaging system presents a unique set of requirements for indexing and retrieving images, unlike a standard imaging system for written documents. This paper presents the requirements, technical design, and development results for a hierarchical ANSI standard thesaurus embedded into a photograph archival system. The thesaurus design incorporates storage reduction techniques, permits fast searches, and contains flexible indexing methods. It can be extended to many applications other than the retrieval of photographs. When photographic images are indexed into an electronic system, they are subject to a variety of indexing problems based on what the indexer "sees." For instance, the indexer may categorize an image as a boat when others might refer to it as a ship, sailboat, or raft. The thesaurus will allow a user to locate images containing any synonym for boat, regardless of how the image was actually indexed. In addition to indexing problems, photos may need to be retrieved based on a broad category, for instance, flowers. The thesaurus allows a search for "flowers" to locate all images containing a rose, hibiscus, or daisy, yet still allow a specific search for an image containing only a rose. The technical design and method of implementation for such a thesaurus is presented. The thesaurus is implemented using an SQL relational database management system that supports BLOBs (binary large objects). The design incorporates unique compression methods for storing the thesaurus words. Words are indexed to photographs using the compressed word and allow for very rapid searches, eliminating lengthy string matches.
Non-destructive scanning for applied stress by the continuous magnetic Barkhausen noise method
NASA Astrophysics Data System (ADS)
Franco Grijalba, Freddy A.; Padovese, L. R.
2018-01-01
This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect applied stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of different parameters such as pickup coil type, scanner speed, applied magnetic field and frequency band analyzed on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect applied stress profiles.
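As a rough illustration of the "moving smoothing window based on a second-order statistical moment," the sketch below computes a sliding-window mean-square envelope of a synthetic noise record; the window length and the synthetic signal are placeholders, not the authors' settings.

```python
import numpy as np

def moving_second_moment(signal: np.ndarray, window: int) -> np.ndarray:
    """Sliding-window second-order moment (mean of squared samples), one
    common way to turn a raw Barkhausen noise record into a smooth envelope."""
    kernel = np.ones(window) / window
    return np.convolve(signal ** 2, kernel, mode="same")

# Synthetic stand-in for a Barkhausen noise record: noise whose amplitude
# varies along the scan (as it would with varying applied stress).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
true_envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
signal = true_envelope * rng.standard_normal(t.size)

estimated_envelope = moving_second_moment(signal, window=200)
```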
Top down, bottom up structured programming and program structuring
NASA Technical Reports Server (NTRS)
Hamilton, M.; Zeldin, S.
1972-01-01
New design and programming techniques for shuttle software. Based on previous Apollo experience, recommendations are made to apply top-down structured programming techniques to shuttle software. New software verification techniques for large software systems are recommended. HAL, the higher order language selected for the shuttle flight code, is discussed and found to be adequate for implementing these techniques. Recommendations are made to apply the workable combination of top-down, bottom-up methods in the management of shuttle software. Program structuring is discussed relevant to both programming and management techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarapata, A.; Chabior, M.; Zanette, I.
2014-10-15
Many scientific research areas rely on accurate electron density characterization of various materials. For instance, in X-ray optics and radiation therapy, there is a need for a fast and reliable technique to quantitatively characterize samples for electron density. We present how a precise measurement of electron density can be performed using an X-ray phase-contrast grating interferometer in a radiographic mode of a homogeneous sample in a controlled geometry. A batch of various plastic materials was characterized quantitatively and compared with calculated results. We found that the measured electron densities closely match theoretical values. The technique yields comparable results between a monochromatic and a polychromatic X-ray source. Measured electron densities can be further used to design dedicated X-ray phase-contrast phantoms, and the additional information on small-angle scattering should be taken into account in order to exclude unsuitable materials.
Heuristic Enhancement of Magneto-Optical Images for NDE
NASA Astrophysics Data System (ADS)
Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, FrancescoCarlo
2010-12-01
The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each technique, like the magneto-optic imaging treated here, is affected by some special types of noise which are related to the specific device used for acquisition. Therefore, the design of ever more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with some other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show the advantages and weaknesses of ICA in improving the accuracy and performance of rivet/flaw detection.
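A hedged sketch of ICA-based preprocessing is given below. It is not the paper's procedure: it assumes several registered magneto-optic acquisitions of the same area are available, uses scikit-learn's FastICA, and keeps the most non-Gaussian component as the structure of interest.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def ica_preprocess(moi_stack: np.ndarray, n_components: int = 3) -> np.ndarray:
    """moi_stack: (n_images, height, width), several registered magneto-optic
    acquisitions of the same area (at least n_components of them).  Returns an
    image built from the most non-Gaussian independent component, assuming
    rivet/flaw structure is sparser (more non-Gaussian) than sensor noise."""
    n_images, h, w = moi_stack.shape
    X = moi_stack.reshape(n_images, -1).T          # (n_pixels, n_images) mixed signals
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X)                       # (n_pixels, n_components) sources
    best = np.argmax(np.abs(kurtosis(S, axis=0)))  # pick the sparsest source
    return S[:, best].reshape(h, w)
```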
Understanding GINA and How GINA Affects Nurses.
Delk, Kayla L
2015-11-01
The Genetic Information Nondiscrimination Act (GINA) is a federal law that became fully effective in 2009 and is intended to prevent employers and health insurers from discriminating against individuals based on their genetic or family history. The article discusses the sections of GINA, what information constitutes genetic information, who enforces GINA, and scenarios in which GINA does not apply. Also discussed are the instances in which an employer may request genetic information from employees, including wellness or genetic monitoring programs. Finally, the article offers a look at how GINA affects nurses who are administering wellness or genetic monitoring programs on behalf of employers. © 2015 The Author(s).
Performance comparison of some evolutionary algorithms on job shop scheduling problems
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job Shop Scheduling is a state-space search problem belonging to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to compare and evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
Bioinspired Composite Materials: Applications in Diagnostics and Therapeutics
NASA Astrophysics Data System (ADS)
Prasad, Alisha; Mahato, Kuldeep; Chandra, Pranjal; Srivastava, Ananya; Joshi, Shrikrishna N.; Maurya, Pawan Kumar
2016-08-01
Evolution-optimized specimens from nature with inimitable properties and unique structure-function relationships have long served as a source of inspiration for researchers all over the world. For instance, the micro/nanostructured patterns of lotus leaves and gecko feet help in self-cleaning and adhesion, respectively. Such unique properties shown by creatures are the result of billions of years of adaptive transformation and have been mimicked by applying both science and engineering concepts to design bioinspired materials. Various bioinspired composite materials have been developed based on biomimetic principles. This review presents the latest developments in bioinspired materials under various categories with emphasis on diagnostic and therapeutic applications.
Evidence-Based Reptile Housing and Nutrition.
Oonincx, Dennis; van Leeuwen, Jeroen
2017-09-01
The provision of a good light source is important for reptiles. For instance, ultraviolet light is used in social interactions and used for vitamin D synthesis. With respect to housing, most reptilians are best kept pairwise or individually. Environmental enrichment can be effective but depends on the form and the species to which it is applied. Temperature gradients around preferred body temperatures allow accurate thermoregulation, which is essential for reptiles. Natural distributions indicate suitable ambient temperatures, but microclimatic conditions are at least as important. Because the nutrient requirements of reptiles are largely unknown, facilitating self-selection from various dietary items is preferable. Copyright © 2017 Elsevier Inc. All rights reserved.
Chakravarty, Rubel; Dash, Ashutosh; Pillai, M R A
2012-07-01
Electrochemical separation techniques are not widely used in radionuclide generator technology and only a few studies have been reported [1-4]. Nevertheless, this strategy is useful when other parent-daughter separation techniques are not effective or not possible. Such situations are frequent when low specific activity (LSA) parent radionuclides are used, for instance with adsorption chromatographic separations, which can result in a lower concentration of the daughter radionuclide in the eluent. In addition, radiation instability of the column matrix can in many cases affect the performance of the generator when long-lived parent radionuclides are used. Intricate knowledge of the chemistry involved in the electrochemical separation is crucial to develop a reproducible technology that ensures that the pure daughter radionuclide can be obtained in a reasonable time of operation. Crucial parameters to be optimized include the applied potential, the choice of electrolyte, the selection of electrodes, the temperature of the electrolyte bath and the time of electrolysis, in order to ensure that the daughter radionuclide can be reproducibly recovered in high yield and high purity. The successful electrochemical generator technologies which have been developed and are discussed in this paper include the (90)Sr/(90)Y, (188)W/(188)Re and (99)Mo/(99m)Tc generators. Electrochemical separation not only acts as a separation technique but is also an effective concentration methodology which yields high radioactive concentrations of the daughter products. The lower consumption of reagents and minimal generation of radioactive wastes using such electrochemical techniques are compatible with 'green chemistry' principles.
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and the estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
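The abstract does not give the PSO-EDA details, but the objective and the kind of "fast local search" it mentions are standard. The sketch below shows only a cut-value evaluation and a simple 1-flip local search on a random weighted graph; it is an illustration of those building blocks, not the proposed algorithm.

```python
import numpy as np

def cut_value(weights: np.ndarray, side: np.ndarray) -> float:
    """Total weight of edges crossing the cut; side[i] is +1 or -1."""
    cross = side[:, None] != side[None, :]
    return 0.5 * float(np.sum(weights * cross))   # each edge counted twice -> * 0.5

def one_flip_local_search(weights: np.ndarray, side: np.ndarray) -> np.ndarray:
    """Repeatedly flip any vertex whose move increases the cut value."""
    improved = True
    while improved:
        improved = False
        for v in range(len(side)):
            # gain of flipping v = (weight to same-side vertices) - (crossing weight at v)
            same = np.sum(weights[v] * (side == side[v])) - weights[v, v]
            cross = np.sum(weights[v] * (side != side[v]))
            if same - cross > 0:
                side[v] = -side[v]
                improved = True
    return side

rng = np.random.default_rng(1)
n = 30
W = np.triu(rng.random((n, n)), 1)
W = W + W.T                                   # symmetric weight matrix, zero diagonal
s = rng.choice([-1, 1], size=n)
s = one_flip_local_search(W, s)
print(cut_value(W, s))
```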
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
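The largest-order-value decoding and the permutation flow-shop makespan are standard components and can be sketched as follows; this is an illustration of those two pieces only, not of ECEDA itself, and the instance data are made up.

```python
import numpy as np

def largest_order_value(x: np.ndarray) -> np.ndarray:
    """Largest-order-value rule: jobs are sequenced in decreasing order of
    their continuous values, turning a real vector into a permutation."""
    return np.argsort(-x)

def pfsp_makespan(perm: np.ndarray, p: np.ndarray) -> float:
    """p[j, m] = processing time of job j on machine m; all jobs visit the
    machines in the same order (permutation flow shop)."""
    n_machines = p.shape[1]
    completion = np.zeros(n_machines)   # completion time of the last job on each machine
    for j in perm:
        for m in range(n_machines):
            ready = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], ready) + p[j, m]
    return float(completion[-1])

rng = np.random.default_rng(0)
proc_times = rng.integers(1, 20, size=(10, 4)).astype(float)  # 10 jobs, 4 machines
x = rng.random(10)                       # a continuous vector, e.g. sampled by an EDA
perm = largest_order_value(x)
print(perm, pfsp_makespan(perm, proc_times))
```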
Community structure in networks
NASA Astrophysics Data System (ADS)
Newman, Mark
2004-03-01
Many networked systems, including physical, biological, social, and technological networks, appear to contain "communities" -- groups of nodes within which connections are dense, but between which they are sparser. The ability to find such communities in an automated fashion could be of considerable use. Communities in a web graph for instance might correspond to sets of web sites dealing with related topics, while communities in a biochemical network or an electronic circuit might correspond to functional units of some kind. We present a number of new methods for community discovery, including methods based on "betweenness" measures and methods based on modularity optimization. We also give examples of applications of these methods to both computer-generated and real-world network data, and show how our techniques can be used to shed light on the sometimes dauntingly complex structure of networked systems.
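Modularity optimization scores a candidate partition with Newman's Q. A minimal sketch of the score itself (not of any particular optimization method) follows; the toy graph is illustrative.

```python
import numpy as np

def modularity(A: np.ndarray, communities: np.ndarray) -> float:
    """Newman-Girvan modularity Q for an undirected graph with adjacency
    matrix A and one community label per node."""
    k = A.sum(axis=1)                 # node degrees (or strengths if weighted)
    two_m = A.sum()                   # twice the total edge weight
    same = communities[:, None] == communities[None, :]
    return float(np.sum((A - np.outer(k, k) / two_m) * same) / two_m)

# Two obvious communities: a pair of triangles joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # ~0.357, clear community structure
```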
Rural Renaissance. Revitalizing Small High Schools.
ERIC Educational Resources Information Center
Ford, Edmund A.
Written in 1961, this document presents the rationales and applications of what were and still are, in most instances, considered innovative practices. Subjects discussed are building designs, teaching machines, educational television, flexible scheduling, multiple classes and small-group techniques, teacher assistants, shared services, and…
Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets.
Huang, Min-Wei; Lin, Wei-Chao; Tsai, Chih-Fong
2018-01-01
Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem, which is to provide estimations for the missing values by a reasoning process based on the (complete) observed data. However, if the observed data contain some noisy information or outliers, the estimations of the missing values may not be reliable or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than performing missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find out the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation over the numerical data type of medical datasets, and specific combinations of instance selection and imputation methods can improve the imputation results over the mixed data type of medical datasets. However, instance selection does not have a definitely positive impact on the imputation result for categorical medical datasets.
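A hedged sketch of the "instance selection before imputation" idea follows. It substitutes a much simpler selector (scikit-learn's IsolationForest applied to the complete cases) for DROP3/GA/IB3 and uses KNNImputer in place of the paper's KNNI/MLP/SVM imputers; the contamination rate and neighbour count are placeholders.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.impute import KNNImputer

def select_then_impute(X: np.ndarray, contamination: float = 0.1) -> np.ndarray:
    """Drop suspicious complete instances before fitting the imputer, then
    impute only the rows that contain missing values (NaN)."""
    complete = ~np.isnan(X).any(axis=1)
    keep = np.ones(len(X), dtype=bool)
    iso = IsolationForest(contamination=contamination, random_state=0)
    flags = iso.fit_predict(X[complete])               # -1 marks likely outliers
    keep[np.flatnonzero(complete)[flags == -1]] = False

    imputer = KNNImputer(n_neighbors=5)
    imputer.fit(X[keep & complete])                    # learn from selected complete rows
    X_out = X.copy()
    X_out[~complete] = imputer.transform(X[~complete])
    return X_out
```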
ERIC Educational Resources Information Center
Gonzalez, Cleotilde; Dutt, Varun
2012-01-01
Hills and Hertwig (2012) challenge the proposed similarity of the exploration-exploitation transitions found in Gonzalez and Dutt (2011) between the 2 experimental paradigms of decisions from experience (sampling and repeated-choice), which was predicted by an instance-based learning (IBL) model. The heart of their argument is that in the sampling…
Noise suppression in surface microseismic data
Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael
2012-01-01
We introduce a passive noise suppression technique, based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.
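A minimal sketch of the slant-stack (τ − p) machinery follows, assuming non-negative offsets and slownesses and nearest-sample interpolation; the adjoint is used as an approximate inverse, and the muting band (p_signal_max) in the commented usage is a placeholder, not the authors' parameterization.

```python
import numpy as np

def tau_p_forward(d: np.ndarray, dt: float, offsets: np.ndarray,
                  slownesses: np.ndarray) -> np.ndarray:
    """Slant stack: u[tau, p] = sum over offsets x of d[tau + p*x, x]."""
    nt, _ = d.shape
    u = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))
            if 0 <= shift < nt:
                u[:nt - shift, ip] += d[shift:, ix]
    return u

def tau_p_adjoint(u: np.ndarray, dt: float, offsets: np.ndarray,
                  slownesses: np.ndarray, nt: int) -> np.ndarray:
    """Adjoint slant stack, used here as an approximate inverse transform."""
    d = np.zeros((nt, len(offsets)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))
            if 0 <= shift < nt:
                d[shift:, ix] += u[:nt - shift, ip]
    return d

# Noise-suppression sketch: forward transform, zero the slowness band where
# surface waves map (an assumed band), then transform back.
# u = tau_p_forward(data, dt, offsets, slownesses)
# u[:, slownesses > p_signal_max] = 0.0
# cleaned = tau_p_adjoint(u, dt, offsets, slownesses, nt=data.shape[0])
```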
Virginia Henderson's principles and practice of nursing applied to organ donation after brain death.
Nicely, Bruce; DeLario, Ginger T
2011-03-01
Registered nurses were some of the first nonphysician organ transplant and donation specialists in the field, both in procurement and clinical arenas. Nursing theories are abundant in the literature and in nursing curricula, but none have been applied to the donation process. Noted nursing theorist Virginia Henderson (1897-1996), often referred to as the "first lady of nursing," developed a nursing model based on activities of living. Henderson had the pioneering view that nursing stands separately from medicine and that nursing consists of more than simply following physicians' orders. Henderson's Principles and Practice of Nursing is a grand theory that can be applied to many types of nursing. In this article, Henderson's theory is applied to the intensely focused and specialized area of organ donation for transplantation. Although organ donation coordinators may have backgrounds as physicians' assistants, paramedics, or other allied health professions, most are registered nurses. By virtue of the inherent necessity for involvement of the family and friends of the potential donor, Henderson's concepts are applied to the care and management of the organ donor, to the donor's family and friends, and in some instances, to the caregivers themselves.
Automated Video-Based Traffic Count Analysis.
DOT National Transportation Integrated Search
2016-01-01
The goal of this effort has been to develop techniques that could be applied to the detection and tracking of vehicles in overhead footage of intersections. To that end we have developed and published techniques for vehicle tracking based on dete...
Semantics of User Interface for Image Retrieval: Possibility Theory and Learning Techniques.
ERIC Educational Resources Information Center
Crehange, M.; And Others
1989-01-01
Discusses the need for a rich semantics for the user interface in interactive image retrieval and presents two methods for building such interfaces: possibility theory applied to fuzzy data retrieval, and a machine learning technique applied to learning the user's deep need. Prototypes developed using videodisks and knowledge-based software are…
A Hybrid Ant Colony Optimization Algorithm for the Extended Capacitated Arc Routing Problem.
Li-Ning Xing; Rohlfshagen, P; Ying-Wu Chen; Xin Yao
2011-08-01
The capacitated arc routing problem (CARP) is representative of numerous practical applications, and in order to widen its scope, we consider an extended version of this problem that entails both total service time and fixed investment costs. We subsequently propose a hybrid ant colony optimization (ACO) algorithm (HACOA) to solve instances of the extended CARP. This approach is characterized by the exploitation of heuristic information, adaptive parameters, and local optimization techniques: Two kinds of heuristic information, arc cluster information and arc priority information, are obtained continuously from the solutions sampled to guide the subsequent optimization process. The adaptive parameters ease the burden of choosing initial values and facilitate improved and more robust results. Finally, local optimization, based on the two-opt heuristic, is employed to improve the overall performance of the proposed algorithm. The resulting HACOA is tested on four sets of benchmark problems containing a total of 87 instances with up to 140 nodes and 380 arcs. In order to evaluate the effectiveness of the proposed method, some existing capacitated arc routing heuristics are extended to cope with the extended version of this problem; the experimental results indicate that the proposed ACO method outperforms these heuristics.
Classification as clustering: a Pareto cooperative-competitive GP approach.
McIntyre, Andrew R; Heywood, Malcolm I
2011-01-01
Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.
Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification
NASA Astrophysics Data System (ADS)
Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona
2016-08-01
The key issue in collecting epidemic disease data for our analysis is that it is a labour-intensive, time-consuming and expensive process, resulting in sparse sample data from which we develop prediction models. To address this sparse data issue, we present novel Incremental Transductive methods to circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models, for labelling data. The results show that, using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
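As a rough sketch of confidence-thresholded incremental labelling, the loop below self-labels an unlabelled pool with a Gaussian naive Bayes classifier standing in for the paper's Bayesian models; the threshold value, round limit and classifier choice are assumptions, and lowering the threshold corresponds to a more "liberal" policy.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def incremental_self_label(X_lab, y_lab, X_unlab, conf_strict=0.95, max_rounds=10):
    """Iteratively train, predict on the unlabelled pool, and absorb only the
    predictions whose confidence exceeds the threshold."""
    X_lab, y_lab, X_pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        if len(X_pool) == 0:
            break
        clf = GaussianNB().fit(X_lab, y_lab)
        preds = clf.predict(X_pool)
        conf = clf.predict_proba(X_pool).max(axis=1)
        take = conf >= conf_strict
        if not take.any():
            break                                  # nothing confident enough this round
        X_lab = np.vstack([X_lab, X_pool[take]])   # absorb confident pseudo-labels
        y_lab = np.concatenate([y_lab, preds[take]])
        X_pool = X_pool[~take]
    return GaussianNB().fit(X_lab, y_lab)
```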
Visualizing and Quantifying Pore Scale Fluid Flow Processes With X-ray Microtomography
NASA Astrophysics Data System (ADS)
Wildenschild, D.; Hopmans, J. W.; Vaz, C. M.; Rivers, M. L.
2001-05-01
When using mathematical models based on Darcy's law it is often necessary to simplify geometry, physics or both and the capillary bundle-of-tubes approach neglects a fundamentally important characteristic of porous solids, namely interconnectedness of the pore space. New approaches to pore-scale modeling that arrange capillary tubes in two- or three-dimensional pore space have been and are still under development: Network models generally represent the pore space by spheres while the pore throats are usually represented by cylinders or conical shapes. Lattice Boltzmann approaches numerically solve the Navier-Stokes equations in a realistic microscopically disordered geometry, which offers the ability to study the microphysical basis of macroscopic flow without the need for a simplified geometry or physics. In addition to these developments in numerical modeling techniques, new theories have proposed that interfacial area should be considered as a primary variable in modeling of a multi-phase flow system. In the wake of this progress emerges an increasing need for new ways of evaluating pore-scale models, and for techniques that can resolve and quantify phase interfaces in porous media. The mechanisms operating at the pore-scale cannot be measured with traditional experimental techniques, however x-ray computerized microtomography (CMT) provides non-invasive observation of, for instance, changing fluid phase content and distribution on the pore scale. Interfacial areas have thus far been measured indirectly, but with the advances in high-resolution imaging using CMT it is possible to track interfacial area and curvature as a function of phase saturation or capillary pressure. We present results obtained at the synchrotron-based microtomography facility (GSECARS, sector 13) at the Advanced Photon Source at Argonne National Laboratory. Cylindrical sand samples of either 6 or 1.5 mm diameter were scanned at different stages of drainage and for varying boundary conditions. A significant difference in fluid saturation and phase distribution was observed for different drainage conditions, clearly showing preferential flow and a dependence on the applied flow rate. For the 1.5 mm sample individual pores and water/air interfaces could be resolved and quantified using image analysis techniques. Use of the Advanced Photon Source was supported by the U.S. Department of Energy, Basic Energy Sciences, Office of Science, under Contract No. W-31-109-Eng-38.
Kernel Methods for Mining Instance Data in Ontologies
NASA Astrophysics Data System (ADS)
Bloehdorn, Stephan; Sure, York
The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principal approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.
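A toy version of the "assemble specialized kernels" idea is sketched below; the instance representation and the individual kernels are illustrative choices, not the paper's definitions.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class Instance:
    classes: Set[str] = field(default_factory=set)             # asserted ontology classes
    properties: Dict[str, str] = field(default_factory=dict)   # datatype property values

def class_kernel(a: Instance, b: Instance) -> float:
    """Set-intersection kernel on the classes the two instances belong to."""
    return float(len(a.classes & b.classes))

def property_kernel(a: Instance, b: Instance) -> float:
    """Counts shared datatype properties with matching values."""
    shared = a.properties.keys() & b.properties.keys()
    return float(sum(a.properties[p] == b.properties[p] for p in shared))

def composite_kernel(a: Instance, b: Instance, w_class=1.0, w_prop=0.5) -> float:
    """A weighted sum of valid kernels is itself a valid kernel, so the
    specialized pieces can be assembled and tuned independently."""
    return w_class * class_kernel(a, b) + w_prop * property_kernel(a, b)

x = Instance({"Person", "Researcher"}, {"affiliation": "AIFB"})
y = Instance({"Person", "Student"}, {"affiliation": "AIFB", "age": "24"})
print(composite_kernel(x, y))   # 1.0 * 1 + 0.5 * 1 = 1.5
```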
Evolution and dynamics of shear-layer structures in near-wall turbulence
NASA Technical Reports Server (NTRS)
Johansson, Arne V.; Alfredsson, P. H.; Kim, John
1991-01-01
Near-wall flow structures in turbulent shear flows are analyzed, with particular emphasis on the study of their space-time evolution and connection to turbulence production. The results are obtained from investigation of a database generated from direct numerical simulation of turbulent channel flow at a Reynolds number of 180 based on half-channel width and friction velocity. New light is shed on problems associated with conditional sampling techniques, together with methods to improve these techniques, for use both in physical and numerical experiments. The results clearly indicate that earlier conceptual models of the processes associated with near-wall turbulence production, based on flow visualization and probe measurements need to be modified. For instance, the development of asymmetry in the spanwise direction seems to be an important element in the evolution of near-wall structures in general, and for shear layers in particular. The inhibition of spanwise motion of the near-wall streaky pattern may be the primary reason for the ability of small longitudinal riblets to reduce turbulent skin friction below the value for a flat surface.
ERIC Educational Resources Information Center
Borg, Erik
2009-01-01
Plagiarism and collusion are significant issues for most lecturers whatever their discipline, and to universities and the higher education sector. Universities respond to these issues by developing institutional definitions of plagiarism, which are intended to apply to all instances of plagiarism and collusion. This article first suggests that…
A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.
Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel
2018-02-20
Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or considers less importantly the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation, software design verified by a novel schedulability analysis, and the run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
Advancing Bag-of-Visual-Words Representations for Lesion Classification in Retinal Images
Pires, Ramon; Jelinek, Herbert F.; Wainer, Jacques; Valle, Eduardo; Rocha, Anderson
2014-01-01
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for identification of each type of lesion. An extensive evaluation of the BoVW model, using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions and collected by different healthcare personnel, was performed. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to utilize different algorithms for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. To assess the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods. Those results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors. PMID:24886780
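The mid-level encoding (semi-soft coding with max pooling over a visual codebook) can be sketched as follows. Descriptor extraction (e.g., SURF) is assumed to have been done elsewhere, scikit-learn's KMeans stands in for the codebook learner, and the codebook size and soft-assignment parameters are placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptor_sets, k=500, seed=0):
    """Fit a visual codebook on local descriptors pooled from training images
    (one row per detected keypoint, any local descriptor)."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, random_state=seed, n_init=4).fit(all_desc)

def encode_semisoft_maxpool(descriptors, codebook, n_near=5, beta=1.0):
    """Semi-soft coding: each descriptor activates only its n_near closest
    visual words (softmax-weighted by distance); max pooling keeps the
    strongest activation of each word over the whole image."""
    dists = codebook.transform(descriptors)       # (n_desc, k) distances to codewords
    codes = np.zeros_like(dists)
    for i, row in enumerate(dists):
        near = np.argsort(row)[:n_near]
        w = np.exp(-beta * row[near])
        codes[i, near] = w / w.sum()
    return codes.max(axis=0)                      # max pooling -> (k,) image-level vector
```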
Zhou, Yuefang; Forbes, Gillian M; Humphris, Gerry
2010-09-01
To investigate the camera awareness of female dental nurses and nursery school children, measured as the frequency of camera-related behaviours observed during fluoride varnish applications in a community-based health programme. Fifty-one nurse-child interactions (three nurse pairs and 51 children) were video recorded when Childsmile nurses were applying fluoride varnish onto the teeth of children in nursery school settings. Using a pre-developed coding scheme, nurse and child verbal and nonverbal behaviours were coded for camera-related behaviours. In 15 of 51 interactions (29.4%), a total of 31 camera-related behaviours were observed for dental nurses (14 instances over nine interactions) and children (17 instances over six interactions). Camera-related behaviours occurred infrequently, occupied 0.3% of the total interaction time and were displayed at all stages of the dental procedure, though they tended to peak at the initial stages. Certain camera-related behaviours of female dental nurses and nursery school children were observed in their interactions when introducing a dental health preventive intervention. Since camera-related behaviours are so infrequent, they are of little consequence when video-recording adults and children undertaking dental procedures.
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
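A hedged sketch of combining multiple decomposition estimates follows: firings confirmed by a majority of passes within a tolerance window are kept, and their location is taken as the mean of the matched times. The tolerance and voting rule are illustrative choices, not the authors' error-reduction algorithm.

```python
import numpy as np

def consolidate_firings(estimates, tol=0.005, min_votes=None):
    """estimates: list of 1-D arrays of firing times (s) for one motor unit,
    each from an independent decomposition pass.  A firing is accepted when at
    least min_votes passes place a firing within +/- tol of it."""
    if min_votes is None:
        min_votes = len(estimates) // 2 + 1
    candidates = np.sort(np.concatenate(estimates))
    used = [np.zeros(len(e), dtype=bool) for e in estimates]
    accepted = []
    for t in candidates:
        matches = []
        for k, est in enumerate(estimates):
            idx = np.flatnonzero((np.abs(est - t) <= tol) & ~used[k])
            if idx.size:
                matches.append((k, idx[0]))
        if len(matches) >= min_votes:
            times = [estimates[k][i] for k, i in matches]
            if not accepted or np.mean(times) - accepted[-1] > tol:
                for k, i in matches:
                    used[k][i] = True              # consume the matched firings
                accepted.append(float(np.mean(times)))
    return np.array(accepted)
```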
Accuracy assessment of linear spectral mixture model due to terrain undulation
NASA Astrophysics Data System (ADS)
Wang, Tianxing; Chen, Songlin; Ma, Ya
2008-12-01
Mixture spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixture pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportion of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of LSMM is restricted by a variety of factors, but current research on LSMM is mostly focused on appraisal of the nonlinear effects of the model itself and on techniques used to select endmembers; unfortunately, the environmental conditions of the study area that could sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, are not studied. This paper probes emphatically into the accuracy uncertainty of LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. Based on this, fractional abundances for different cover types were extracted from both pre- and post-C terrain illumination corrected ASTER using LSMM. Simultaneously, regression analyses and an IKONOS image were introduced to assess the unmixing accuracy. Results showed that terrain undulation could dramatically constrain the application of LSMM in mountain areas. Specifically, for vegetation abundances, an improved unmixing accuracy of 17.6% (regression against NDVI) and 18.6% (regression against MVI) for R2 was achieved by removing terrain undulation. This study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying LSMM. This paper could also provide a new instance for LSMM applications in mountainous areas. In addition, the methods employed in this study could be effectively used to evaluate different terrain undulation correction algorithms in further studies.
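Abundance extraction under the LSMM reduces to a constrained least-squares problem per pixel. The sketch below enforces non-negativity with NNLS and imposes the sum-to-one constraint softly via a heavily weighted row of ones; the toy endmember spectra are made up for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel: np.ndarray, endmembers: np.ndarray, delta: float = 1e3) -> np.ndarray:
    """Linear spectral mixture model: pixel ~= endmembers @ abundances.
    endmembers has shape (n_bands, n_endmembers).  Non-negativity is enforced
    by NNLS; the sum-to-one constraint is added as a weighted extra equation."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel, delta)
    abundances, _ = nnls(E, y)
    return abundances

# Toy example: a 4-band pixel mixed from two endmember spectra (70% / 30%).
E = np.array([[0.1, 0.8],
              [0.2, 0.7],
              [0.6, 0.3],
              [0.9, 0.1]])
pixel = E @ np.array([0.7, 0.3])
print(unmix(pixel, E))        # approximately [0.7, 0.3]
```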
Instance-based categorization: automatic versus intentional forms of retrieval.
Neal, A; Hesketh, B; Andrews, S
1995-03-01
Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.
Active Self-Paced Learning for Cost-Effective and Progressive Face Identification.
Lin, Liang; Wang, Keze; Meng, Deyu; Zuo, Wangmeng; Zhang, Lei
2018-01-01
This paper develops a novel cost-effective framework for face identification that progressively maintains a batch of classifiers as face images of different individuals accumulate. By naturally combining two recently emerging techniques, active learning (AL) and self-paced learning (SPL), the framework can automatically annotate new instances and incorporate them into training under weak expert recertification. We first initialize the classifiers using a few annotated samples per individual and extract image features with convolutional neural networks. A number of candidates are then selected from the unannotated samples for classifier updating, with the current classifiers used to rank the samples by prediction confidence. In particular, high-confidence samples are handled in a self-paced manner, while low-confidence samples are routed to the user for querying. The neural networks are later fine-tuned based on the updated classifiers. This heuristic implementation is formulated as a concise active SPL optimization problem, which also advances SPL by supplementing a rational dynamic curriculum constraint. The resulting model accords well with the "instructor-student-collaborative" learning mode in human education. The advantages of the proposed framework are twofold: (i) the number of annotated samples required is significantly decreased while comparable performance is maintained, yielding a dramatic reduction of user effort over other state-of-the-art active learning techniques; and (ii) the combination of SPL and AL improves not only classifier accuracy over existing AL/SPL methods but also robustness against noisy data. We evaluate the framework on two challenging datasets that include hundreds of persons under diverse conditions and demonstrate very promising results. The code for this project is available at: http://hcp.sysu.edu.cn/projects/aspl/.
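A minimal sketch of the sample-selection logic described above, assuming a simple confidence threshold for the self-paced (pseudo-labelling) branch and lowest-confidence querying for the active branch; thresholds, names and the surrounding training loop are hypothetical, and the authors' method solves a joint optimization rather than this heuristic.

```python
import numpy as np

def select_batch(probs, hi_thresh=0.95, n_query=10):
    """probs: (n_samples, n_classes) prediction confidences from the current classifiers.
    Returns indices to pseudo-label automatically (SPL) and indices to send to the expert (AL)."""
    conf = probs.max(axis=1)
    pseudo_idx = np.where(conf >= hi_thresh)[0]    # self-paced: easy, high-confidence samples
    query_idx = np.argsort(conf)[:n_query]         # active: most uncertain samples for the user
    pseudo_labels = probs[pseudo_idx].argmax(axis=1)
    return pseudo_idx, pseudo_labels, query_idx

# usage: probs = clf.predict_proba(unlabeled_X); then retrain on the augmented labeled set
```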
OntoPop: An Ontology Population System for the Semantic Web
NASA Astrophysics Data System (ADS)
Thongkrau, Theerayut; Lalitrojwong, Pattarachai
Developing an ontology at the instance level requires extracting the terms that define the instances from various data sources. These instances are then linked to the concepts of the ontology, and relationships are created between them in a subsequent step. Before establishing such links, however, ontology engineers must classify terms or instances from a web document into an ontology concept. Tools that help ontology engineers with this task are called ontology population systems. Existing approaches are poorly suited to ontology development applications because of long processing times and difficulties in analyzing large or noisy data sets. The OntoPop system introduces a methodology to address these problems, which comprises two parts. First, we select meaningful features from syntactic relations, which yields more significant features than other methods. Second, we differentiate feature meanings and reduce noise based on latent semantic analysis. Experimental evaluation demonstrates that OntoPop works well, achieving an accuracy of 49.64%, a learning accuracy of 76.93%, and an execution time of 5.46 seconds per instance.
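The latent-semantic-analysis step can be illustrated generically as follows; this is a sketch under the assumption of TF-IDF-weighted context features reduced with truncated SVD, it is not the OntoPop code, and the accuracy figures above come from the paper rather than from this toy example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical "instance -> bag of syntactic context features" strings
contexts = ["eat fruit sweet", "eat fruit ripe", "drive road fast", "drive road wheel"]
X = TfidfVectorizer().fit_transform(contexts)
Z = TruncatedSVD(n_components=2).fit_transform(X)   # latent semantic space reduces feature noise
print(cosine_similarity(Z))   # instances with similar contexts end up close together
```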
Tag-to-Tag Interference Suppression Technique Based on Time Division for RFID.
Khadka, Grishma; Hwang, Suk-Seung
2017-01-01
Radio-frequency identification (RFID) is a tracking technology that enables immediate automatic object identification and rapid data sharing for a wide variety of modern applications, using radio waves for data transmission from a tag to a reader. RFID is already well established in technical areas, and many companies have developed corresponding standards and measurement techniques. In the construction industry, effective monitoring of materials and equipment is an important task, and RFID helps to improve monitoring and controlling capabilities, in addition to enabling automation for construction projects. However, on construction sites there are many tagged objects, and multiple RFID tags may interfere with each other's communications, which reduces the reliability and efficiency of the RFID system. In this paper, we propose an anti-collision algorithm for communication between multiple tags and a reader. In order to suppress interference signals from multiple neighboring tags, the proposed algorithm employs the time-division (TD) technique, in which tags in the interrogation zone are assigned specific time slots so that, at any instant in time, the reader communicates only with the tags assigned to the current slot. We present representative computer simulation examples to illustrate the performance of the proposed anti-collision technique for multiple RFID tags.
An Approach to the Evaluation of Hypermedia.
ERIC Educational Resources Information Center
Knussen, Christina; And Others
1991-01-01
Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…
The Progression of Podcasting/Vodcasting in a Technical Physics Class
NASA Astrophysics Data System (ADS)
Glanville, Y. J.
2010-11-01
Technology such as Microsoft PowerPoint presentations, clickers, podcasting, and learning management suites is becoming prevalent in classrooms. Instructors are using these media in both large lecture hall settings and small classrooms with just a handful of students. Traditionally, each of these media is instructor driven. For instance, podcasting (audio recordings) provided my technical physics course with supplemental notes to accompany a traditional algebra-based physics lecture. Podcasting is an ideal tool for this mode of instruction, but podcasting/vodcasting is also an ideal technique for student projects and student-driven learning. I present here the various podcasting/vodcasting projects my students and I have undertaken over the last few years.
An optimal control strategy for two-dimensional motion camouflage with non-holonomic constraints.
Rañó, Iñaki
2012-07-01
Motion camouflage is a stealth behaviour observed both in hover-flies and in dragonflies. Existing controllers for mimicking motion camouflage generate this behaviour on an empirical basis or without considering the kinematic motion restrictions present in animal trajectories. This study summarises our formal contributions to solve the generation of motion camouflage as a non-linear optimal control problem. The dynamics of the system capture the kinematic restrictions to motion of the agents, while the performance index ensures camouflage trajectories. An extensive set of simulations support the technique, and a novel analysis of the obtained trajectories contributes to our understanding of possible mechanisms to obtain sensor based motion camouflage, for instance, in mobile robots.
A cadaveric study of the endoscopic endonasal transclival approach to the basilar artery.
Lai, Leon T; Morgan, Michael K; Chin, David C W; Snidvongs, Kornkiat; Huang, June X Z; Malek, Joanne; Lam, Matthew; McLachlan, Rohan; Harvey, Richard J
2013-04-01
The anterior transclival route to basilar artery aneurysms is not widely performed. The objective of this study was to carry out a feasibility assessment of the transclival approach to basilar aneurysms with advanced endonasal techniques on 11 cadaver heads. Clival dura was exposed from the sella to the foramen magnum between the paraclival segments of the internal carotid arteries (ICA) laterally. An inverted dural "U" flap was reflected inferiorly to expose the basilar artery. The maximal dimensions from operative measurements were recorded. Surgical manoeuvrability of multiple instruments and the proficiency to place proximal and distal vascular clips were evaluated. The mean operative depth (± standard deviation), measured from the anterior choanae to the basilar artery, was 110±6mm. The lateral corridors were limited distally by the medial pterygoids (mean width 21±2mm) and paraclival ICA (mean width 20±2mm). The mean transclival craniectomy dimensions were 19±2mm (width) and 23±4mm (height). Exposure of the basilar-anterior inferior cerebellar artery junction, superior cerebellar artery, and the basilar caput were possible in 100%, 91%, and 64% of instances, respectively. Placements of proximal and distal aneurysm clips were achieved in all instances. Based on our findings, the transclival endoscopic endonasal surgery approach provides excellent visualisation of the basilar artery. Clip application and manoeuvrability of instruments was considered adequate for basilar aneurysm surgery. Surgical skills and instrumentation to control significant haemorrhage can potentially limit the clinical applicability of this technique. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Optical control and diagnostics sensors for gas turbine machinery
NASA Astrophysics Data System (ADS)
Trolinger, James D.; Jenkins, Thomas P.; Heeg, Bauke
2012-10-01
There exists a vast range of optical techniques that have been under development for solving complex measurement problems related to gas-turbine machinery and phenomena. For instance, several optical techniques are ideally suited for studying fundamental combustion phenomena in laboratory environments. Yet other techniques hold significant promise for use as either on-line gas turbine control sensors, or as health monitoring diagnostics sensors. In this paper, we briefly summarize these and discuss, in more detail, some of the latter class of techniques, including phosphor thermometry, hyperspectral imaging and low coherence interferometry, which are particularly suited for control and diagnostics sensing on hot section components with ceramic thermal barrier coatings (TBCs).
Classification of Malaysia aromatic rice using multivariate statistical analysis
NASA Astrophysics Data System (ADS)
Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A.; Omar, O.
2015-05-01
Aromatic rice (Oryza sativa L.) is considered the best-quality premium rice. These varieties are preferred by consumers for attributes such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice because of its special growth requirements, for instance a specific climate and soil. At present, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography-Mass Spectrometry (GC-MS) or human sensory panels. However, human sensory panels have significant drawbacks: they require lengthy training, are prone to fatigue as the number of samples increases, and can be inconsistent. GC-MS analysis, on the other hand, requires detailed procedures and lengthy analysis and is quite costly. This paper presents the application of an in-house developed Electronic Nose (e-nose) to classify new aromatic rice varieties based on the odour of the samples. The instrument uses multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify unknown rice samples, and the Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to recognize and classify unspecified samples. Visual inspection of the PCA and LDA plots shows that the instrument was able to separate the samples into distinct clusters, and the low misclassification error of LDA and KNN supports this finding, allowing us to conclude that the e-nose can successfully classify aromatic rice varieties.
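A hedged sketch of the analysis chain named in the abstract (PCA for visual inspection, LDA followed by KNN, validated with leave-one-out); the sensor matrix here is synthetic placeholder data, and the preprocessing used with the real e-nose is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# X: e-nose sensor responses (n_samples, n_sensors); y: rice variety labels -- placeholders
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8)) + np.repeat(np.arange(3), 20)[:, None]
y = np.repeat(np.arange(3), 20)

scores_pca = PCA(n_components=2).fit_transform(X)   # would be plotted to inspect the clusters
pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=2), KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()   # LOO misclassification = 1 - acc
print(f"LOO accuracy: {acc:.2f}")
```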
Nasritdinova, N Yu; Reznik, V L; Kuatbaieva, A M; Kairbaiev, M R
2016-01-01
Vaccines against human papillomavirus (HPV) are a potential tool for the prevention of cervical cancer and certain other cancers. High coverage of the target group in a vaccination programme is an economically effective and successful activity that depends, in many instances, on reliable knowledge and a positive attitude of the population towards inoculation. A cross-sectional study was carried out using previously developed anonymous questionnaires for various population groups in four pilot regions of Kazakhstan, where the national ministry of health offers girls aged 9-13 years two vaccines against HPV (quadrivalent and bivalent). The database was organized in Microsoft Access, and the data were integrated and processed with variation-statistics techniques in IBM SPSS Statistics 19, applying Student's criterion and calculating correlation dependences. Of all respondents, 66% were aware of the existence of HPV; most parents of female adolescents learned about HPV vaccination from the Internet and from medical workers. The most significant factor preventing implementation of vaccination, and its proper perception by respondents, was a lack of confidence in vaccine safety: about 54% of parents of female adolescents and 75% of teachers consider the vaccine unsafe, and only 72% of medical workers consider it safe. Despite the known effectiveness of HPV vaccination, a number of problems related to programme implementation remain. The level of awareness and understanding among different population groups concerning the role of vaccination in oncologic pathology, and the possibility of cancer prevention through vaccination, needs to be raised. Intersectoral relationships between medicine and the education system should be developed, and the information activities of medical control organs and organizations should be enhanced.
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
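A minimal sketch of the mapping described above, assuming the common component is taken as the pixel-wise minimum of the two co-registered inputs (one common choice; the abstract does not specify the operator), with the blue channel left empty.

```python
import numpy as np

def fuse_false_color(vis, ir):
    """vis, ir: co-registered grey-level images scaled to [0, 1]."""
    common = np.minimum(vis, ir)                # assumed: pixel-wise minimum as the common component
    uniq_vis, uniq_ir = vis - common, ir - common
    red   = np.clip(vis - uniq_ir, 0.0, 1.0)    # visual minus what is unique to IR
    green = np.clip(ir - uniq_vis, 0.0, 1.0)    # IR minus what is unique to visual
    blue  = np.zeros_like(vis)
    return np.dstack([red, green, blue])        # display through the R and G channels
```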
Taming theory with thought experiments: Understanding and scientific progress.
Stuart, Michael T
2016-08-01
I claim that one way thought experiments contribute to scientific progress is by increasing scientific understanding. Understanding does not have a currently accepted characterization in the philosophical literature, but I argue that we already have ways to test for it. For instance, current pedagogical practice often requires that students demonstrate being in either or both of the following two states: 1) Having grasped the meaning of some relevant theory, concept, law or model, 2) Being able to apply that theory, concept, law or model fruitfully to new instances. Three thought experiments are presented which have been important historically in helping us pass these tests, and two others that cause us to fail. Then I use this operationalization of understanding to clarify the relationships between scientific thought experiments, the understanding they produce, and the progress they enable. I conclude that while no specific instance of understanding (thus conceived) is necessary for scientific progress, understanding in general is. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nagle, Tadhg; Golden, William
Managing strategic contradiction and paradoxical situations has been gaining importance in the technology, innovation and management domains, and more and more paradoxical instances and types have been documented in the literature. The innovator's dilemma is one such instance, giving a detailed description of how disruptive innovations affect firms. However, the innovator's dilemma has only been applied to large organisations, and more specifically to industry incumbents. Through a multiple case study of six eLearning SMEs, this paper investigates the applicability of the innovator's dilemma as well as the disruptive effects of Web 2.0 on these organisations. Analysis of data collected over 18 months shows that the innovator's dilemma did indeed apply to SMEs. In line with the original thesis, however, the dilemma applied only to the SMEs established before the development of Web 2.0 technologies began (pre-2002). The study further highlights that the post-2002 firms were also partly vulnerable to the dilemma but were able to avoid any negative effects through technological visionary leadership, whereas the pre-2002 firms lacked this visionary ability and were also constrained by low risk profiles.
Machine learning modelling for predicting soil liquefaction susceptibility
NASA Astrophysics Data System (ADS)
Samui, P.; Sitharam, T. G.
2011-01-01
This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique is an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second is the Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. ANN and SVM models were developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and the cyclic stress ratio (CSR). An attempt was also made to simplify the models so that they require only two parameters, (N1)60 and peak ground acceleration (amax/g), for the prediction of liquefaction susceptibility. The developed ANN and SVM models were further applied to different case histories available globally. The paper also highlights the superior capability of the SVM models over the ANN models.
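A hedged sketch of the simplified two-parameter classifiers, using placeholder case records; note that scikit-learn's MLP does not offer Levenberg-Marquardt training, so a different optimizer stands in for it here, and the SVM kernel and all data values are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: corrected SPT blow count (N1)60 and peak ground acceleration amax/g -- placeholder records
X = np.array([[8, 0.35], [12, 0.40], [25, 0.20], [30, 0.15], [10, 0.30], [28, 0.25]], float)
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = liquefied, 0 = not liquefied

ann = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
ann.fit(X, y); svm.fit(X, y)
print(ann.predict([[15, 0.30]]), svm.predict([[15, 0.30]]))   # susceptibility for a new site
```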
All-gas-phase synthesis of UiO-66 through modulated atomic layer deposition
Lausund, Kristian Blindheim; Nilsen, Ola
2016-01-01
Thin films of stable metal-organic frameworks (MOFs) such as UiO-66 have enormous application potential, for instance in microelectronics. However, all-gas-phase deposition techniques are currently not available for such MOFs. We here report on thin-film deposition of the thermally and chemically stable UiO-66 in an all-gas-phase process by the aid of atomic layer deposition (ALD). Sequential reactions of ZrCl4 and 1,4-benzenedicarboxylic acid produce amorphous organic–inorganic hybrid films that are subsequently crystallized to the UiO-66 structure by treatment in acetic acid vapour. We also introduce a new approach to control the stoichiometry between metal clusters and organic linkers by modulation of the ALD growth with additional acetic acid pulses. An all-gas-phase synthesis technique for UiO-66 could enable implementations in microelectronics that are not compatible with solvothermal synthesis. Since this technique is ALD-based, it could also give enhanced thickness control and the possibility to coat irregular substrates with high aspect ratios. PMID:27876797
Radiochemical determination of 241Am and Pu(alpha) in environmental materials.
Warwick, P E; Croudace, I W; Oh, J S
2001-07-15
Americium-241 and plutonium determinations will become of greater importance over the coming decades as 137Cs and 241Pu decay. The impact of 137Cs on environmental chronology has been great, but its potency is waning as it decays and diffuses. Having 241Am and Pu as unequivocal markers for the 1963 weapon fallout maximum is important for short time scale environmental work, but a fast and reliable procedure is required for their separation. The developed method described here begins by digesting samples using a lithium borate fusion although an aqua regia leachate is also effective in many instances. Isolation of the Am and Pu is then achieved using a combination of extraction chromatography and conventional anion exchange chromatography. The whole procedure has been optimized, validated, and assessed for safety. The straightforwardness of this technique permits the analysis of large numbers of samples and makes 241Am-based techniques for high-resolution sediment accumulation rate studies attractive. In addition, the technique can be employed for the sequential measurement of Pu and Am in environmental surveillance programs, potentially reducing analytical costs and turnround times.
NASA Astrophysics Data System (ADS)
Takan, Taylan; Özkan, Vedat A.; Idikut, Fırat; Yildirim, Ihsan Ozan; Şahin, Asaf B.; Altan, Hakan
2014-10-01
In this work sub-terahertz imaging using Compressive Sensing (CS) techniques for targets placed behind a visibly opaque barrier is demonstrated both experimentally and theoretically. Using a multiplied Schottky diode based millimeter wave source working at 118 GHz, metal cutout targets were illuminated in both reflection and transmission configurations with and without barriers which were made out of drywall. In both modes the image is spatially discretized using laser machined, 10 × 10 pixel metal apertures to demonstrate the technique of compressive sensing. The images were collected by modulating the source and measuring the transmitted flux through the apertures using a Golay cell. Experimental results were compared to simulations of the expected transmission through the metal apertures. Image quality decreases as expected when going from the non-obscured transmission case to the obscured transmission case and finally to the obscured reflection case. However, in all instances the image appears below the Nyquist rate which demonstrates that this technique is a viable option for Through the Wall Reflection Imaging (TWRI) applications.
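A generic compressive-sensing reconstruction sketch (ISTA with an l1 sparsity prior) under the assumption that each measurement is the total flux through one 0/1 aperture mask; the mask matrix, sparsity level and regularisation weight are placeholders and this is not the authors' reconstruction code.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Recover a sparse image x (flattened) from m < n measurements y = A @ x."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L              # gradient step on the least-squares term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold (sparsity)
    return x

rng = np.random.default_rng(1)
n, m = 100, 40                                     # 10x10 scene, 40 aperture patterns
x_true = np.zeros(n); x_true[[12, 45, 77]] = 1.0   # sparse metal-cutout target
A = rng.integers(0, 2, size=(m, n)).astype(float)  # 0/1 masks, one row per measurement
y = A @ x_true
print(np.argsort(ista(A, y))[-3:])                 # indices of the brightest recovered pixels
```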
Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe
2018-03-15
Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. A major problem however may arise due to protein structure changes upon the nucleation of an AuNC within the protein for any future use as in vivo probes, for instance. In this work, we propose a simple and reliable fluorescence based technique measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs, with protein templates of increasing sizes, e.g. insulin, lysozyme, and bovine serum albumin (BSA). The comparison of sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) between bare and gold clusters containing proteins allows us to address the volume changes induced either by conformational changes (for BSA) or the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation. Copyright © 2017 Elsevier B.V. All rights reserved.
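The quantitative step behind this technique is the Stokes-Einstein-Debye relation between the rotational correlation time and the hydrodynamic volume; a short illustration with placeholder values for viscosity and correlation time follows (the numbers are not taken from the paper).

```python
import numpy as np

k_B = 1.380649e-23      # J/K
T   = 298.15            # K
eta = 0.89e-3           # Pa*s, water at 25 C
tau_rot = 4.0e-9        # s, rotational correlation time from the anisotropy decay (placeholder)

V_h = tau_rot * k_B * T / eta                 # Stokes-Einstein-Debye: tau = eta * V_h / (k_B * T)
R_h = (3.0 * V_h / (4.0 * np.pi)) ** (1 / 3)  # hydrodynamic radius of the equivalent sphere
print(f"V_h = {V_h * 1e27:.1f} nm^3, R_h = {R_h * 1e9:.2f} nm")
```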
Muscle categorization using PDF estimation and Naive Bayes classification.
Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W
2012-01-01
The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e. it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs and then Bayes rule is used to aggregate the MUPT characterization probabilities to calculate muscle level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than the discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using log-normal distribution to represent skewed histograms.
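A hedged sketch of the aggregation idea: per-category Gaussian PDFs give each MUPT a likelihood, and Bayes' rule combines the train-level values into muscle-level probabilities under a conditional-independence assumption; the feature choices, parameter values and category names below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def muscle_posterior(mupt_features, class_params, priors):
    """mupt_features: (n_trains, n_features); class_params[c] = (means, stds) per feature;
    priors[c] = prior probability of muscle category c."""
    log_post = {c: np.log(p) for c, p in priors.items()}
    for c, (mu, sd) in class_params.items():
        # naive-Bayes style aggregation: sum of per-train log-likelihoods
        log_post[c] += norm.logpdf(mupt_features, loc=mu, scale=sd).sum()
    z = np.array(list(log_post.values()))
    z = np.exp(z - z.max()); z /= z.sum()          # normalise to muscle-level probabilities
    return dict(zip(log_post.keys(), z))

params = {"normal":    (np.array([10.0, 1.0]), np.array([2.0, 0.3])),
          "myopathic": (np.array([6.0, 0.6]),  np.array([2.0, 0.3]))}
feats = np.array([[9.5, 1.1], [11.0, 0.9], [8.8, 1.0]])   # e.g. MUP duration (ms), amplitude (mV) per train
print(muscle_posterior(feats, params, {"normal": 0.5, "myopathic": 0.5}))
```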
Techniques for noise removal and registration of TIMS data
Hummer-Miller, S.
1990-01-01
Extracting subtle differences from highly correlated thermal infrared aircraft data is possible with appropriate noise filters, constructed and applied in the spatial frequency domain. This paper discusses a heuristic approach to designing noise filters for removing high- and low-spatial frequency striping and banding. Techniques for registering thermal infrared aircraft data to a topographic base using Thematic Mapper data are presented. The noise removal and registration techniques are applied to TIMS thermal infrared aircraft data. -Author
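A generic frequency-domain destriping sketch, assuming horizontal striping (which concentrates energy along the vertical-frequency axis of the 2-D spectrum) and a simple notch that preserves the low-frequency core; the notch width and attenuation factor are placeholders rather than the filters designed in the paper.

```python
import numpy as np

def destripe(img, notch_halfwidth=2, keep_radius=5):
    """Attenuate horizontal striping/banding by notching the vertical-frequency axis."""
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    cr, cc = rows // 2, cols // 2
    mask = np.ones_like(F, dtype=float)
    mask[:, cc - notch_halfwidth:cc + notch_halfwidth + 1] = 0.1   # suppress stripe frequencies
    mask[cr - keep_radius:cr + keep_radius + 1,
         cc - keep_radius:cc + keep_radius + 1] = 1.0              # keep low frequencies / DC term
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```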
Applying Parallel Processing Techniques to Tether Dynamics Simulation
NASA Technical Reports Server (NTRS)
Wells, B. Earl
1996-01-01
The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.
The future of 'pure' medical science: the need for a new specialist professional research system.
Charlton, Bruce G; Andras, Peter
2005-01-01
Over recent decades, medical research has become mostly an 'applied' science which implicitly aims at steady progress by an accumulation of small improvements, each increment having a high probability of validity. Applied medical science is, therefore, a social system of communications for generating pre-publication peer-reviewed knowledge that is ready for implementation. However, the need for predictability makes modern medical science risk-averse, and this is leading to a decline in major therapeutic breakthroughs where new treatments for new diseases are required. There is a need for the evolution of a specialized professional research system of pure medical science, whose role would be to generate and critically evaluate radically novel and potentially important theories, techniques, therapies and technologies. Pure science ideas typically have a lower probability of being valid, but the possibility of much greater benefit if they turn out to be true. The domination of medical research by applied criteria means that even good ideas from pure medical science are typically ignored or summarily rejected as being too speculative. Of course, radical and potentially important ideas may currently be published, but at present there is no formal mechanism by which pure science publications may be received, critiqued, evaluated and extended to become suitable for 'application'. Pure medical science needs to evolve to constitute a typical specialized scientific system of formal communications among a professional community. The members of this putative profession would interact via close research groupings, journals, meetings, and electronic and web communications, like any other science. Pure medical science units might arise as elite groupings linked to existing world-class applied medical research institutions. However, the pure medical science system would have its own separate aims, procedures for scientific evaluation, institutional organization, funding and support arrangements, and a separate higher-professional career path with distinctive selection criteria. For instance, future leaders of pure medical science institutions would need to be selected on the basis of their specialized cognitive aptitudes and their record of having generated science-transforming ideas, as well as their research management skills. Pure medical science would work most effectively and efficiently if practiced in many independent and competing institutions in several different countries. The main 'market' for pure medical science would be applied medical scientists, who need radical strategies to solve problems that are not yielding to established methods. The stimulus to create such elite pure medical science institutions might come from the leadership of academic 'entrepreneurs' (for instance, imaginative patrons in the major funding foundations), or be triggered by a widespread public recognition of the probable exhaustion of existing applied medical science approaches to solving major therapeutic challenges.
Stratification-Based Outlier Detection over the Deep Web.
Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming
2016-01-01
For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over deep web. In the context of deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in deep web.
He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai
2017-07-01
Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate a large labeled training corpus with minimal manual effort. However, the resulting corpus may contain many false-positive examples, which hurts the performance of relation extraction. Moreover, traditional feature-based distantly supervised approaches rely on human-designed features produced with natural language processing tools, which can also lead to poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve a better data representation for relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data under distant rather than full supervision. Experimental results demonstrate that the proposed approach is effective and achieves better performance than traditional methods.
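A compact sketch of the two attention levels described, with randomly initialised arrays standing in for learned LSTM states and query vectors; in the actual model these weights are trained end-to-end, so the example only illustrates the shapes and the weighting arithmetic.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def word_attention(H, w):
    """H: (seq_len, hidden) token states from the LSTM; w: (hidden,) learned query vector."""
    alpha = softmax(H @ w)     # word-level attention weights
    return alpha @ H           # sentence representation

def bag_representation(sentences, r):
    """sentences: list of (seq_len, hidden) arrays in one entity-pair bag; r: (hidden,) relation query."""
    S = np.stack([word_attention(H, r) for H in sentences])
    beta = softmax(S @ r)      # instance-level attention: likely false positives get small weights
    return beta @ S            # bag representation fed to the relation classifier

# shapes only; weights would be learned end-to-end in the real model
rng = np.random.default_rng(0)
bag = [rng.normal(size=(7, 16)), rng.normal(size=(12, 16))]
print(bag_representation(bag, rng.normal(size=16)).shape)   # (16,)
```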
Cervical spine metastases: techniques for anterior reconstruction and stabilization.
Sayama, Christina M; Schmidt, Meic H; Bisson, Erica F
2012-10-01
The surgical management of cervical spine metastases continues to evolve and improve. The authors provide an overview of the various techniques for anterior reconstruction and stabilization of the subaxial cervical spine after corpectomy for spinal metastases. Vertebral body reconstruction can be accomplished using a variety of materials such as bone autograft/allograft, polymethylmethacrylate, interbody spacers, and/or cages with or without supplemental anterior cervical plating. In some instances, posterior instrumentation is needed for additional stabilization.
Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis
NASA Astrophysics Data System (ADS)
Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr
2017-10-01
Most modern analyses in high energy physics use signal-versus-background classification techniques based on machine learning methods, and neural networks in particular. Deep learning neural networks are the most promising modern technique for separating signal from background and can nowadays be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.
Low cost MATLAB-based pulse oximeter for deployment in research and development applications.
Shokouhian, M; Morling, R C S; Kale, I
2013-01-01
Problems such as motion artifact and the effects of ambient light have forced developers to design various signal processing techniques and algorithms to increase the reliability and accuracy of the conventional pulse oximeter. To evaluate the robustness of these techniques, they are applied either to recorded data or implemented on chip and applied to real-time data. Recorded data is the most common basis for evaluation; however, it is not as reliable as real-time measurement, while hardware implementation can be both expensive and time consuming. This paper presents a low-cost MATLAB-based pulse oximeter that can be used for rapid evaluation of newly developed signal processing techniques and algorithms. Flexibility in applying different signal processing techniques, availability of both processed and unprocessed data, and low implementation cost are the important features of this design, making it ideal for research and development as well as commercial, hospital and healthcare applications.
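Such a test bench typically estimates oxygen saturation from the ratio of ratios of the red and infrared photoplethysmograms; a hedged sketch follows, in which the AC/DC extraction is deliberately crude and the linear calibration constants are device-specific placeholders, not values from the paper.

```python
import numpy as np

def spo2_ratio_of_ratios(red, ir):
    """red, ir: photoplethysmogram sample arrays from the red and infrared channels."""
    def ac_dc(x):
        # crude AC (pulsatile span) and DC (baseline) components
        return (np.percentile(x, 95) - np.percentile(x, 5)), np.mean(x)
    ac_r, dc_r = ac_dc(red)
    ac_i, dc_i = ac_dc(ir)
    R = (ac_r / dc_r) / (ac_i / dc_i)    # ratio of ratios
    return 110.0 - 25.0 * R              # placeholder linear calibration; real curves are device-specific
```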
Testing the Joint UK Land Environment Simulator (JULES) for flood forecasting
NASA Astrophysics Data System (ADS)
Batelis, Stamatios-Christos; Rosolem, Rafael; Han, Dawei; Rahman, Mostaquimur
2017-04-01
Land Surface Models (LSMs) are based on physical principles and simulate the exchanges of energy, water and biogeochemical cycles between the land surface and the lower atmosphere. Such models are typically applied to climate studies or to the effects of land use change, but as the resolution of LSMs and of supporting observations continuously increases, their representation of hydrological processes needs to be addressed adequately. For example, changes in climate and land use can alter the hydrology of a region, for instance by altering its flooding regime. LSMs can be a powerful tool because of their ability to represent a region spatially at much finer resolution. Despite such advantages, however, their performance has not been extensively assessed for flood forecasting, simply because typical hydrological processes such as overland flow and river routing are still either ignored or only roughly represented. In this study, we initially test the Joint UK Land Environment Simulator (JULES) as a flood forecasting tool, focusing on its river routing scheme. In particular, the JULES river routing parameterization is based on the Rapid Flow Model (RFM), which relies on six prescribed parameters (two surface and two subsurface wave celerities, and two return flow fractions). Although this routing scheme is simple, the prescription of its six default parameters is still too generalized. Our aim is to understand the importance of each RFM parameter in a series of JULES simulations at a number of catchments in the UK for the 2006-2015 period. This is carried out, for instance, by making a number of assumptions about parameter behaviour (e.g., spatially uniform versus varying, and/or temporally constant versus time-varying parameters within each catchment). Hourly rainfall radar data in combination with CHESS (Climate, Hydrological and Ecological research Support System) daily meteorological data, both at 1 km2 resolution, are used. The model is evaluated against hourly runoff data provided by the National River Flow Archive using a number of performance metrics. We use a calibrated, conceptually based lumped model, more typically applied in flood studies, as a benchmark for our analysis.
Benford's law and continuous dependent random variables
NASA Astrophysics Data System (ADS)
Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine
2018-01-01
Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grow, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates as well as introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n ! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converges to Benford's Law.
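A quick way to test a dataset against Benford's expected frequencies log10(1 + 1/d) is sketched below on a toy stick-fragmentation process; the splitting rule and sample size are illustrative and are not the specific models analysed in the paper.

```python
import numpy as np

def leading_digit(x):
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)   # value in [1, 10) -> first digit

def benford_check(data):
    digits, counts = np.unique(leading_digit(data), return_counts=True)
    observed = counts / counts.sum()
    expected = np.log10(1.0 + 1.0 / digits)                  # Benford's Law
    return dict(zip(digits, zip(observed.round(3), expected.round(3))))

# lengths from a toy fragmentation process: repeatedly split a piece of a unit stick at a uniform point
rng = np.random.default_rng(0)
pieces = [1.0]
for _ in range(5000):
    i = rng.integers(len(pieces))
    u = rng.random()
    pieces[i], split = pieces[i] * u, pieces[i] * (1 - u)
    pieces.append(split)
print(benford_check(pieces))   # observed vs expected leading-digit frequencies
```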
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pettit, J. R.; Lowe, M. J. S.; Walker, A. E.
2015-03-31
Pulse-echo ultrasonic NDE examination of large pressure vessel forgings is a design and construction code requirement in the power generation industry. Such inspections aim to size and characterise potential defects that may have formed during the forging process. Typically these defects have a range of orientations and surface roughnesses which can greatly affect ultrasonic wave scattering behaviour. Ultrasonic modelling techniques can provide insight into defect response and therefore aid in characterisation. However, analytical approaches to solving these scattering problems can become inaccurate, especially when applied to increasingly complex defect geometries. To overcome these limitations an elastic Finite Element (FE) method has been developed to simulate pulse-echo inspections of embedded planar defects. The FE model comprises a significantly reduced spatial domain allowing for a Monte-Carlo based approach to consider multiple realisations of defect orientation and surface roughness. The results confirm that defects aligned perpendicular to the path of beam propagation attenuate ultrasonic signals according to the level of surface roughness. However, for defects orientated away from this plane, surface roughness can increase the magnitude of the scattered component propagating back along the path of the incident beam. This study therefore highlights instances where defect roughness increases the magnitude of ultrasonic scattered signals, as opposed to attenuation which is more often assumed.
Consensus models to predict endocrine disruption for all ...
Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte
Optimizing communication satellites payload configuration with exact approaches
NASA Astrophysics Data System (ADS)
Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi
2015-12-01
The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F
2014-06-01
To ensure appropriate soundscape management in urban environments, the urban-planning authorities need a range of tools that enable such a task to be performed. An essential step during the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. In this sense, it has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step to evaluate it, providing a basis for designing or adapting it to match people's expectations as well. In this sense, this work proposes a model for automatic classification of urban soundscapes. This model is intended for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria. Thus, this classification model is proposed to be used as a tool for a comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing model classification. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). © 2013 Elsevier B.V. All rights reserved.
An asymmetric mesoscopic model for single bulges in RNA
NASA Astrophysics Data System (ADS)
de Oliveira Martins, Erik; Weber, Gerald
2017-10-01
Simple one-dimensional DNA or RNA mesoscopic models are of interest for their computational efficiency while retaining the key elements of the molecular interactions. However, they only deal with perfectly formed DNA or RNA double helices and consider the intra-strand interactions to be the same on both strands. This makes it difficult to describe highly asymmetric structures such as bulges and loops and, for instance, prevents the application of mesoscopic models to determine RNA secondary structures. Here we derived the conditions for the Peyrard-Bishop mesoscopic model to overcome these limitations and applied it to the calculation of single bulges, the smallest and simplest of these asymmetric structures. We found that these theoretical conditions can indeed be applied to any situation where stacking asymmetry needs to be considered. The full set of parameters for group I RNA bulges was determined from experimental melting temperatures using an optimization procedure, and we also calculated average opening profiles for several RNA sequences. We found that guanosine bulges show the strongest perturbation on their neighboring base pairs, considerably reducing the on-site interactions of their neighboring base pairs.
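For reference, one common form of the Peyrard-Bishop(-Dauxois) potential that such mesoscopic descriptions build on is shown below, with site-dependent Morse and stacking parameters; the asymmetric-bulge conditions derived in the paper amount to letting the stacking terms differ between strands and sites, which this generic expression does not show.

```latex
V = \sum_n D_n \left( e^{-a_n y_n} - 1 \right)^2
  + \sum_n \frac{k_n}{2} \left( 1 + \rho\, e^{-\alpha \left( y_n + y_{n-1} \right)} \right) \left( y_n - y_{n-1} \right)^2
```

Here $y_n$ is the relative opening of the $n$-th base pair, $D_n$ and $a_n$ set the pairing (Morse) strength, and $k_n$, $\rho$, $\alpha$ control the anharmonic stacking between neighbouring pairs.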
Osteochondral Interface Tissue Engineering Using Macroscopic Gradients of Bioactive Signals
Dormer, Nathan H.; Singh, Milind; Wang, Limin; Berkland, Cory J.; Detamore, Michael S.
2013-01-01
Continuous gradients exist at osteochondral interfaces, which may be engineered by applying spatially patterned gradients of biological cues. In the present study, a protein-loaded microsphere-based scaffold fabrication strategy was applied to achieve spatially and temporally controlled delivery of bioactive signals in three-dimensional (3D) tissue engineering scaffolds. Bone morphogenetic protein-2 and transforming growth factor-β1-loaded poly(d,llactic- co-glycolic acid) microspheres were utilized with a gradient scaffold fabrication technology to produce microsphere-based scaffolds containing opposing gradients of these signals. Constructs were then seeded with human bone marrow stromal cells (hBMSCs) or human umbilical cord mesenchymal stromal cells (hUCMSCs), and osteochondral tissue regeneration was assessed in gradient scaffolds and compared to multiple control groups. Following a 6-week cell culture, the gradient scaffolds produced regionalized extracellular matrix, and outperformed the blank control scaffolds in cell number, glycosaminoglycan production, collagen content, alkaline phosphatase activity, and in some instances, gene expression of major osteogenic and chondrogenic markers. These results suggest that engineered signal gradients may be beneficial for osteochondral tissue engineering. PMID:20379780
On a viable first-order formulation of relativistic viscous fluids and its applications to cosmology
NASA Astrophysics Data System (ADS)
Disconzi, Marcelo M.; Kephart, Thomas W.; Scherrer, Robert J.
We consider a first-order formulation of relativistic fluids with bulk viscosity based on a stress-energy tensor introduced by Lichnerowicz. Choosing a barotropic equation-of-state, we show that this theory satisfies basic physical requirements and, under the further assumption of vanishing vorticity, that the equations of motion are causal, both in the case of a fixed background and when the equations are coupled to Einstein's equations. Furthermore, Lichnerowicz's proposal does not fit into the general framework of first-order theories studied by Hiscock and Lindblom, and hence their instability results do not apply. These conclusions apply to the full-fledged nonlinear theory, without any equilibrium or near equilibrium assumptions. Similarities and differences between the approach explored here and other theories of relativistic viscosity, including the Mueller-Israel-Stewart formulation, are addressed. Cosmological models based on the Lichnerowicz stress-energy tensor are studied. As the topic of (relativistic) viscous fluids is also of interest outside the general relativity and cosmology communities, such as, for instance, in applications involving heavy-ion collisions, we make our presentation largely self-contained.
A Deformable Smart Skin for Continuous Sensing Based on Electrical Impedance Tomography.
Visentin, Francesco; Fiorini, Paolo; Suzuki, Kenji
2016-11-16
In this paper, we present a low-cost, adaptable, and flexible pressure sensor that can be applied as a smart skin over both stiff and deformable media. The sensor can be easily adapted for use in applications related to the fields of robotics, rehabilitation, or consumer electronic devices. In order to remove most of the stiff components that block the flexibility of the sensor, we based the sensing capability on the use of a tomographic technique known as Electrical Impedance Tomography. The technique allows the internal structure of the domain under study to be inferred by reconstructing its conductivity map. By applying the technique to a material that changes its resistivity according to applied forces, it is possible to identify these changes and then localise the area where the force was applied. We tested the system when applied to flat and curved surfaces. For all configurations, we evaluate the artificial skin's capabilities to detect forces applied over a single point, over multiple points, and changes in the underlying geometry. The results are all promising, and open the way for the application of such sensors in different robotic contexts where deformability is the key point.
Local adaptive contrast enhancement for color images
NASA Astrophysics Data System (ADS)
Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer
2007-04-01
A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is here an important factor; improper local lighting impairs visibility of details or even entire objects. When a human is observing a scene with different kinds of lighting, such as shadows, he will need to see details in both the dark and light parts of the scene. For grey value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply the contrast enhancement on color images by applying a grey value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal will remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can for instance be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should both be easy to recognize and track. This requires optimal local contrast, and is sometimes much helped by color when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
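A minimal sketch of the luminance-only idea follows, using OpenCV's CLAHE as a stand-in for the grey-value enhancement algorithm; the saturation limiting and gamut mapping steps described in the paper are omitted, and the file names are hypothetical.

```python
import cv2

def enhance_color_contrast(bgr, clip_limit=2.0, tile=(8, 8)):
    """Local contrast enhancement on the luminance channel only (sketch).

    The image is converted to CIELAB, CLAHE (a local adaptive grey-level
    method) is applied to L, and the a/b chroma channels are left untouched
    so the colour coordinates are preserved.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

img = cv2.imread("scene.png")          # hypothetical surveillance frame
if img is not None:
    cv2.imwrite("scene_enhanced.png", enhance_color_contrast(img))
```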
Learning Negotiation Policies Using IB3 and Bayesian Networks
NASA Astrophysics Data System (ADS)
Nalepa, Gislaine M.; Ávila, Bráulio C.; Enembreck, Fabrício; Scalabrin, Edson E.
This paper presents an intelligent offer policy in a negotiation environment, in which each agent involved learns the preferences of its opponent in order to improve its own performance. Each agent must also be able to detect drifts in the opponent's preferences so as to quickly adjust itself to the opponent's new offer policy. For this purpose, two simple learning techniques were first evaluated: (i) based on instances (IB3) and (ii) based on Bayesian Networks. Additionally, as it is known that in theory group learning produces better results than individual learning, the efficiency of IB3 and Bayesian classifier groups was also analyzed. Finally, each decision model was evaluated in moments of concept drift, whether the drift was gradual, moderate, or abrupt. Results showed that both groups of classifiers were able to effectively detect drifts in the opponent's preferences.
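The sketch below illustrates one simple way such drift detection could work, using a k-nearest-neighbour classifier as a stand-in for IB3 and a sliding-window accuracy monitor; the offer features, labels, and thresholds are invented for the example and are not taken from the paper.

```python
from collections import deque
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class DriftMonitor:
    """Flag a drift when windowed accuracy of an instance-based model drops."""
    def __init__(self, window=50, threshold=0.15):
        self.hits = deque(maxlen=window)
        self.baseline = None
        self.threshold = threshold

    def update(self, correct):
        self.hits.append(1.0 if correct else 0.0)
        acc = np.mean(self.hits)
        if self.baseline is None and len(self.hits) == self.hits.maxlen:
            self.baseline = acc                       # accuracy before any drift
        return self.baseline is not None and acc < self.baseline - self.threshold

# Toy stream of opponent offers: features -> accepted/rejected label.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)
y[200:] = (X[200:, 1] > 0).astype(int)                # abrupt preference drift at t=200

model = KNeighborsClassifier(n_neighbors=3).fit(X[:100], y[:100])
monitor = DriftMonitor()
for t in range(100, 400):
    pred = model.predict(X[t:t + 1])[0]
    if monitor.update(pred == y[t]):
        print("drift suspected at step", t)
        break
```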
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
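For intuition, here is a generic self-organizing-map tour construction for the Euclidean TSP (an elastic ring of neurons attracted towards the cities); it is not KNIES or the decomposition heuristic itself, and all parameters are illustrative.

```python
import numpy as np

def som_tsp(cities, n_nodes_factor=4, iters=3000, lr=0.8):
    """Self-organizing-map tour construction for the Euclidean TSP (sketch).

    A ring of neurons is pulled towards randomly sampled cities; the
    neighbourhood radius and learning rate decay over time, and the final
    tour visits cities in the order of their closest neurons.
    """
    n_nodes = n_nodes_factor * len(cities)
    nodes = cities.mean(axis=0) + 0.1 * np.random.randn(n_nodes, 2)
    radius = n_nodes / 2
    for _ in range(iters):
        city = cities[np.random.randint(len(cities))]
        winner = np.argmin(np.linalg.norm(nodes - city, axis=1))
        dist = np.abs(np.arange(n_nodes) - winner)          # circular neighbourhood
        dist = np.minimum(dist, n_nodes - dist)
        influence = np.exp(-(dist ** 2) / (2 * max(radius, 1.0) ** 2))
        nodes += lr * influence[:, None] * (city - nodes)
        radius *= 0.999
        lr *= 0.999
    order = np.argsort([np.argmin(np.linalg.norm(nodes - c, axis=1)) for c in cities])
    return order

cities = np.random.rand(100, 2)
tour = som_tsp(cities)
length = np.sum(np.linalg.norm(cities[tour] - np.roll(cities[tour], 1, axis=0), axis=1))
print("tour length:", round(float(length), 3))
```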
Comparison of raised-microdisk whispering-gallery-mode characterization techniques.
Redding, Brandon; Marchena, Elton; Creazzo, Tim; Shi, Shouyuan; Prather, Dennis W
2010-04-01
We compare the two prevailing raised-microdisk whispering-gallery-mode (WGM) characterization techniques, one based on coupling emission to a tapered fiber and the other based on collecting emission in the far field. We applied both techniques to study WGMs in Si nanocrystal raised microdisks and observed dramatically different behavior. We explain this difference in terms of the radiative bending loss on which the far-field collection technique relies and discuss the regimes of operation in which each technique is appropriate.
Islas, Gabriela; Hernandez, Prisciliano
2017-01-01
To achieve analytical success, it is necessary to develop thorough clean-up procedures to extract analytes from the matrix. Dispersive solid phase extraction (DSPE) has been used as a pretreatment technique for the analysis of several compounds. This technique is based on the dispersion of a solid sorbent in liquid samples for the extraction, isolation, and clean-up of different analytes from complex matrices. DSPE has found a wide range of applications in several fields, and it is considered to be a selective, robust, and versatile technique. The applications of dispersive techniques in the analysis of veterinary drugs in different matrices involve magnetic sorbents, molecularly imprinted polymers, carbon-based nanomaterials, and the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method. Techniques based on DSPE permit minimization of additional steps such as precipitation, centrifugation, and filtration, which decreases the manipulation of the sample. In this review, we describe the main procedures used for the synthesis, characterization, and application of this pretreatment technique and how it has been applied to food analysis. PMID:29181027
A Handbook for Parents of Deaf-Blind Children.
ERIC Educational Resources Information Center
Esche, Jeanne; Griffin, Carol
The handbook for parents of deaf-blind children describes practical techniques of child care for such activities as sitting, standing, walking, sleeping, washing, eating, dressing, toilet training, disciplining, and playing. For instance, it is explained that some visually handicapped children acquire mannerisms in their early years because they…
Location-based technologies for supporting elderly pedestrian in "getting lost" events.
Pulido Herrera, Edith
2017-05-01
Localization-based technologies promise to keep older adults with dementia safe and support them and their caregivers during getting lost events. This paper mainly summarizes technological contributions to support the target group in these events. Moreover, important aspects of the getting lost phenomenon, such as its concept and ethical issues, are also briefly addressed. Papers were selected from scientific databases and gray literature. Since the topic is still in its infancy, other terms were used to find contributions associated with getting lost, e.g., wandering. Trends in applying localization systems were identified as personal locators, perimeter systems, and assistance systems. The first barely considers the older adult's opinion, while assistance systems may involve context awareness to improve the support for both the elderly and the caregiver. Since few studies report multidisciplinary work with a special focus on getting lost, there is no strong evidence of the real efficiency of localization systems or guidelines to design systems for the target group. Further research about getting lost is required to obtain insights for developing customizable systems. Moreover, considering the conditions of the older adult might increase the impact of developments that combine localization technologies and artificial intelligence techniques. Implications for Rehabilitation Whilst there is no cure for dementia such as Alzheimer's, it is feasible to take advantage of technological developments to somewhat diminish its negative impact. For instance, location-based systems may provide information for early diagnosis of Alzheimer's disease by assessing navigational impairments of older adults. Assessing the latest supportive technologies and methodologies may provide insights to adopt strategies to properly manage getting lost events. More user-centered designs will provide appropriate assistance to older adults. Namely, customizable systems could assist older adults in their daily walks with the aim of increasing their self-confidence, independence and autonomy.
Nuclear Forensics Analysis with Missing and Uncertain Data
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
2015-10-05
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
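A minimal sketch of the general idea, assuming each missing entry is drawn from the empirical distribution of the observed values in its column (the actual MCBDG method additionally represents uncertainty with an error distribution); the toy table and column names are hypothetical.

```python
import numpy as np
import pandas as pd

def mc_impute(df, n_instances=10, rng=None):
    """Generate multiple completed copies of a sparse table (sketch).

    Each missing entry is drawn from the empirical distribution of the
    observed values in its column, so every completed instance is a
    plausible database that can be fed to a downstream learner.
    """
    rng = rng or np.random.default_rng()
    completed = []
    for _ in range(n_instances):
        filled = df.copy()
        for col in df.columns:
            observed = df[col].dropna().to_numpy()
            mask = df[col].isna()
            filled.loc[mask, col] = rng.choice(observed, size=mask.sum())
        completed.append(filled)
    return completed

# Toy table with many missing entries, echoing the sparsity of SFCOMPO.
df = pd.DataFrame({"burnup": [10.0, np.nan, 30.0, np.nan, np.nan],
                   "enrichment": [3.1, 3.4, np.nan, 4.0, 3.7]})
instances = mc_impute(df, n_instances=3)
print(instances[0])
```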
Shock formation and the ideal shape of ramp compression waves
NASA Astrophysics Data System (ADS)
Swift, Damian C.; Kraus, Richard G.; Loomis, Eric N.; Hicks, Damien G.; McNaney, James M.; Johnson, Randall P.
2008-12-01
We derive expressions for shock formation based on the local curvature of the flow characteristics during dynamic compression. Given a specific ramp adiabat, calculated for instance from the equation of state for a substance, the ideal nonlinear shape for an applied ramp loading history can be determined. We discuss the region affected by lateral release, which can be presented in compact form for the ideal loading history. Example calculations are given for representative metals and plastic ablators. Continuum dynamics (hydrocode) simulations were in good agreement with the algebraic forms. Example applications are presented for several classes of laser-loading experiment, identifying conditions where shocks are desired but not formed, and where long-duration ramps are desired.
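A small numerical sketch of the underlying idea, under the simplifying assumption that characteristics are launched from a fixed drive surface and piston displacement is neglected: the shock forms at the earliest crossing of neighbouring characteristics. The ramp values are illustrative, not from the paper.

```python
import numpy as np

def first_crossing_time(t_launch, wave_speed):
    """Estimate when a ramp wave steepens into a shock (sketch).

    Characteristics are launched at times t_launch with speeds wave_speed
    given by the ramp adiabat; neighbouring characteristics
    x_i(t) = c_i (t - t_i) first intersect at
    t = (c_{i+1} t_{i+1} - c_i t_i) / (c_{i+1} - c_i),
    and the earliest such intersection marks shock formation.
    """
    t = np.asarray(t_launch, dtype=float)
    c = np.asarray(wave_speed, dtype=float)
    dc = c[1:] - c[:-1]
    valid = dc > 0                                   # only converging characteristics cross
    t_cross = (c[1:] * t[1:] - c[:-1] * t[:-1])[valid] / dc[valid]
    return t_cross.min() if t_cross.size else np.inf

t_launch = np.linspace(0.0, 10.0, 200)               # ns
c = 5.0 + 0.2 * t_launch                             # um/ns, monotonically stiffening ramp
print("shock forms at ~", round(first_crossing_time(t_launch, c), 2), "ns")
```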
Process Materialization Using Templates and Rules to Design Flexible Process Models
NASA Astrophysics Data System (ADS)
Kumar, Akhil; Yao, Wen
The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
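A toy sketch of the materialization idea, with hypothetical task names and rules expressed as plain Python predicates rather than the Prolog-style rules used in the paper.

```python
# Generic process template and business rules (all names are hypothetical).
TEMPLATE = ["receive_order", "check_credit", "approve", "ship", "invoice"]

RULES = [
    # (condition on case data, tasks to drop when the condition holds)
    (lambda case: case["amount"] < 1000,       {"check_credit"}),
    (lambda case: case["customer"] == "gold",  {"approve"}),
]

def materialize(case):
    """Apply every matching rule to the template and return the concrete
    task list to hand to the workflow engine."""
    dropped = set()
    for condition, tasks in RULES:
        if condition(case):
            dropped |= tasks
    return [t for t in TEMPLATE if t not in dropped]

print(materialize({"amount": 500, "customer": "gold"}))   # small order, gold customer
print(materialize({"amount": 5000, "customer": "new"}))   # full template
```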
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy of coping with hard problems is universally used in almost every implementation. The history of preprocessing, like applying reduction rules simplifying truth functions, can be traced back to the 1950s [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial time preprocessing algorithms was neglected. The basic reason for this anomaly was the following: if we start with an instance I of an NP-hard problem and can show that in polynomial time we can always replace it with an equivalent but strictly smaller instance I' with |I'| < |I|, then repeatedly applying the reduction would solve the problem in polynomial time, and that would imply P=NP in classical complexity.
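As a concrete example of such preprocessing, here is a sketch of the classic Buss kernelization for k-Vertex Cover (a standard textbook rule set, not taken from this text): high-degree vertices are forced into the cover, and instances with too many remaining edges are rejected.

```python
def vc_kernelize(edges, k):
    """Buss-style kernelization for k-Vertex Cover (sketch).

    Rule 1: a vertex of degree > k must be in any cover of size k, so take it
            and decrease k.
    Rule 2: afterwards, if more than k*k edges remain, reject the instance
            (max degree <= k, so k vertices cover at most k*k edges).
    Returns (reduced_edges, remaining_k, forced_vertices), or None for a no-instance.
    """
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k, forced

# A star with 6 leaves plus a triangle, asked for a cover of size 3:
edges = [(0, i) for i in range(1, 7)] + [(7, 8), (8, 9), (7, 9)]
print(vc_kernelize(edges, 3))    # forces vertex 0, leaves the small triangle kernel
```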
NASA Astrophysics Data System (ADS)
Altamirano, Felipe Ignacio Castro
This dissertation focuses on the problem of designing rates in the utility sector. It is motivated by recent developments in the electricity industry, where renewable generation technologies and distributed energy resources are becoming increasingly relevant. Both technologies disrupt the sector in unique ways. While renewables make grid operations more complex, and potentially more expensive, distributed energy resources enable consumers to interact two-ways with the grid. Both developments present challenges and opportunities for regulators, who must adapt their techniques for evaluating policies to the emerging technological conditions. The first two chapters of this work make the case for updating existing techniques to evaluate tariff structures. They also propose new methods which are more appropriate given the prospective technological characteristics of the sector. The first chapter constructs an analytic tool based on a model that captures the interaction between pricing and investment. In contrast to previous approaches, this technique allows consistently comparing portfolios of rates while enabling researchers to model with a significantly greater level of detail the supply side of the sector. A key theoretical implication of the model that underlies this technique is that, by properly updating the portfolio of tariffs, a regulator could induce the welfare maximizing adoption of distributed energy resources and enrollment in rate structures. We develop an algorithm to find globally optimal solutions of this model, which is a nonlinear mathematical program. The results of a computational experiment show that the performance of the algorithm dominates that of commercial nonlinear solvers. In addition, to illustrate the practical relevance of the method, we conduct a cost benefit analysis of implementing time-variant tariffs in two electricity systems, California and Denmark. Although portfolios with time-varying rates create value in both systems, these improvements differ enough to advise very different policies. While in Denmark time-varying tariffs appear unattractive, they at least deserve further revision in California. This conclusion is beyond the reach of previous techniques to analyze rates, as they do not capture the interplay between an intermittent supply and a price-responsive demand. While useful, the method we develop in the first chapter has two important limitations. One is the lack of transparency of the parameters that determine demand substitution patterns, and demand heterogeneity; the other is the narrow range of rate structures that could be studied with the technique. Both limitations stem from taking as a primitive a demand function. Following an alternative path, in the second chapter we develop a technique based on a pricing model that has as a fundamental building block the consumer utility maximization problem. Because researchers do not have to limit themselves to problems with unique solutions, this approach significantly increases the flexibility of the model and, in particular, addresses the limitations of the technique we develop in the first chapter. This gain in flexibility decreases the practicality of our method since the underlying model becomes a Bilevel Problem. To be able to handle realistic instances, we develop a decomposition method based on a non-linear variant of the Alternating Direction Method of Multipliers, which combines Conic and Mixed Integer Programming. 
A numerical experiment shows that the performance of the solution technique is robust to instance sizes and a wide combination of parameters. We illustrate the relevance of the new method with another applied analysis of rate structures. Our results highlight the value of being able to model distributed energy resources in detail. They also show that ignoring transmission constraints can have meaningful impacts on the analysis of rate structures. In addition, we conduct a distributional analysis, which portrays how our method permits regulators and policy makers to study the impacts of a rate update on a heterogeneous population. While a switch in rates could have a positive impact on the aggregate of households, it could benefit some more than others, and even harm some customers. Our technique makes it possible to anticipate these impacts, letting regulators decide among rate structures with considerably more information than what would be available with alternative approaches. In the third chapter, we conduct an empirical analysis of rate structures in California, which is currently undergoing a rate reform. To contribute to the ongoing regulatory debate about the future of rates, we analyze in depth a set of plausible tariff alternatives. In our analysis, we focus on a scenario in which advanced metering infrastructure and home energy management systems are widely adopted. Our modeling approach allows us to capture a wide variety of temporal and spatial demand substitution patterns without the need to estimate a large number of parameters. (Abstract shortened by ProQuest.).
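For readers unfamiliar with the splitting idea, the sketch below shows the standard scaled ADMM iteration on a simple lasso problem; it only illustrates the generic alternating structure and is not the nonlinear conic/mixed-integer variant developed in the dissertation.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Standard scaled-ADMM iteration for the lasso problem (sketch).

    minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to x = z.
    """
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))                     # smooth subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # prox of L1 norm
        u = u + x - z                                                     # dual update
    return z

A = np.random.randn(50, 10)
x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * np.random.randn(50)
print(np.round(admm_lasso(A, b), 2))     # sparse estimate close to x_true
```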
Business Informatics: An Engineering Perspective on Information Systems
ERIC Educational Resources Information Center
Helfert, Markus
2008-01-01
Over the last three decades many universities have offered various programmes related to Information Systems. However, the rapid changes in recent years demand constant evaluation and modification of education programmes. Recent challenges include, for instance, the move towards programmes that are more applied and professionally-orientated. The…
Spiegel, D; Stroud, P; Fyfe, A
1998-01-01
The widespread use of complementary and alternative medicine techniques, often explored by patients without discussion with their primary care physician, is seen as a request from patients for care as well as cure. In this article, we discuss the reasons for the growth of and interest in complementary and alternative medicine in an era of rapidly advancing medical technology. There is, for instance, evidence of the efficacy of supportive techniques such as group psychotherapy in improving adjustment and increasing survival time of cancer patients. We describe current and developing complementary medicine programs as well as opportunities for integration of some complementary techniques into standard medical care. PMID:9584661
Exploring Deep Learning and Sparse Matrix Format Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y.; Liao, C.; Shen, X.
We proposed to explore the use of Deep Neural Networks (DNN) for addressing the longstanding barriers. The recent rapid progress of DNN technology has created a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification—such as telling whether an image contains a dog or a cat; in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification, the patterns are of pixels, and for sparse matrix format selection, they are of non-zero elements. DNN could be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as classification problems.
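A minimal sketch of the "sparse matrix as an image" representation that such a DNN could consume: the non-zero pattern is binned into a fixed-size density grid. The grid size and the downstream CNN are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import sparse

def density_image(mat, size=32):
    """Render a sparse matrix as a fixed-size 'image' of non-zero density (sketch).

    The spatial pattern of non-zeros is binned into a size x size grid, which
    can then be fed to an image classifier (e.g. a small CNN) that predicts
    the best storage format (CSR, ELL, DIA, ...) for SpMV.
    """
    coo = mat.tocoo()
    rows = np.floor(coo.row * size / mat.shape[0]).astype(int)
    cols = np.floor(coo.col * size / mat.shape[1]).astype(int)
    img = np.zeros((size, size))
    np.add.at(img, (rows, cols), 1.0)        # count non-zeros per cell
    return img / img.max() if img.max() > 0 else img

m = sparse.random(1000, 1000, density=0.002, format="csr", random_state=0)
print(density_image(m).shape)                # (32, 32) input "image" for the DNN
```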
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Zhi-Gang; State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, 350002 Fuzhou; Heinke, Lars, E-mail: Lars.Heinke@KIT.edu
The electronic properties of metal-organic frameworks (MOFs) are increasingly attracting attention due to potential applications in sensor techniques and (micro-)electronic engineering, for instance, as a low-k dielectric in semiconductor technology. Here, the band gap and the band structure of MOFs of type HKUST-1 are studied in detail by means of spectroscopic ellipsometry applied to thin surface-mounted MOF films and by means of quantum chemical calculations. The analysis of the density of states, the band structure, and the excitation spectrum reveal the importance of the empty Cu-3d orbitals for the electronic properties of HKUST-1. This study shows that, in contrast to common belief, even in the case of this fairly “simple” MOF, the excitation spectra cannot be explained by a superposition of “intra-unit” excitations within the individual building blocks. Instead, “inter-unit” excitations also have to be considered.
An integrated sensing technique for smart monitoring of water pipelines
NASA Astrophysics Data System (ADS)
Bernini, Romeo; Catapano, Ilaria; Soldovieri, Francesco; Crocco, Lorenzo
2014-05-01
Lowering the rate of water leakage from the network of underground pipes is one of the requirements that "smart" cities have to comply with. In fact, losses in the water supply infrastructure have a remarkable social, environmental and economic impact, which obviously conflicts with the expected efficiency and sustainability of a smart city. As a consequence, there is a huge interest in developing prevention policies based on state-of-the-art sensing techniques and possibly their integration, as well as in envisaging ad hoc technical solutions designed for the application at hand. As a contribution to this framework, in this communication we present an approach aimed at thorough, non-invasive monitoring of water pipelines with both high spatial and temporal resolution. This is necessary to guarantee that maintenance operations are performed in a timely manner, so as to reduce the extent of the leakage and its possible side effects, and precisely, so as to minimize the cost and the discomfort resulting from operating on the water supply network. The proposed approach integrates two sensing techniques that work at different spatial and temporal scales. The first is meant to provide continuous (in both space and time) monitoring of the pipeline and exploits a distributed optical fiber sensor based on the Brillouin scattering phenomenon. This technique provides the "low" spatial resolution information (at meter scale) needed to reveal the presence of a leak and call for interventions [1]. The second technique is based on the use of Ground Penetrating Radar (GPR) and is meant to provide detailed images of the area where the damage has been detected. GPR systems equipped with suitable data processing strategies [2,3] are indeed capable of providing images of the shallow underground, where the pipes would be buried, with a spatial resolution in the order of a few centimeters. This capability is crucial to plan maintenance operations in the most proper way, for instance by reducing as much as possible the extent of the area where excavations have to be carried out or by suggesting a suitable timing for the interventions. REFERENCES [1] A. Minardo, G. Persichetti, G. Testa, L. Zeni, R. Bernini, "Long term structural health monitoring by Brillouin fiber-optic sensing: a real case", Journal of Geophysics and Engineering, vol. 9, pp. S64-S69, 2012. [2] L. Crocco, G. Prisco, F. Soldovieri, N. J. Cassidy, "Early-stage leaking pipes GPR monitoring via microwave tomographic inversion", Journal of Applied Geophysics, vol. 67, pp. 270-277, 2009. [3] L. Crocco, F. Soldovieri, T. Millington, N. Cassidy, "Bistatic tomographic GPR imaging for incipient pipeline leakage evaluation", Progress In Electromagnetics Research, PIER, vol. 101, pp. 307-321, 2010.
Computer assisted analysis of auroral images obtained from high altitude polar satellites
NASA Technical Reports Server (NTRS)
Samadani, Ramin; Flynn, Michael
1993-01-01
Automatic techniques that allow the extraction of physically significant parameters from auroral images were developed. This allows the processing of a much larger number of images than is currently possible with manual techniques. Our techniques were applied to diverse auroral image datasets. These results were made available to geophysicists at NASA and at universities in the form of a software system that performs the analysis. After some feedback from users, an upgraded system was transferred to NASA and to two universities. The feasibility of user-trained search and retrieval of large amounts of data using our automatically derived parameter indices was demonstrated. Techniques based on classification and regression trees (CART) were developed and applied to broaden the types of images to which the automated search and retrieval may be applied. Our techniques were tested with DE-1 auroral images.
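A toy sketch of how a CART-style index for user-trained search and retrieval might look, with invented per-image summary features and labels standing in for the real auroral parameters.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Per-image summary features are labelled by a user (e.g. "auroral oval present"
# or not); a tree learns the rule so later queries can retrieve matching images.
rng = np.random.default_rng(0)
n = 300
features = np.column_stack([
    rng.uniform(0, 1, n),     # e.g. mean brightness of the polar region
    rng.uniform(0, 1, n),     # e.g. fitted oval radius (normalised)
    rng.uniform(0, 1, n),     # e.g. dawn/dusk brightness asymmetry
])
labels = (features[:, 0] > 0.4) & (features[:, 1] > 0.3)   # stand-in user labels

tree = DecisionTreeClassifier(max_depth=3).fit(features[:200], labels[:200])
print("held-out retrieval accuracy:", tree.score(features[200:], labels[200:]))
```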
Constraining geostatistical models with hydrological data to improve prediction realism
NASA Astrophysics Data System (ADS)
Demyanov, V.; Rojas, T.; Christie, M.; Arnold, D.
2012-04-01
Geostatistical models reproduce spatial correlation based on the available on-site data and more general concepts about the modelled patterns, e.g. training images. One of the problems of modelling natural systems with geostatistics is maintaining realistic spatial features so that they agree with the physical processes in nature. Tuning the model parameters to the data may lead to geostatistical realisations with unrealistic spatial patterns, which would still honour the data. Such a model would result in poor predictions, even though it fits the available data well. Conditioning the model to a wider range of relevant data provides a remedy that avoids producing unrealistic features in spatial models. For instance, there are vast amounts of information about the geometries of river channels that can be used in describing the fluvial environment. Relations between the geometrical channel characteristics (width, depth, wavelength, amplitude, etc.) are complex and non-parametric and exhibit a great deal of uncertainty, which is important to propagate rigorously into the predictive model. These relations can be described within a Bayesian approach as multi-dimensional prior probability distributions. We propose a way to constrain multi-point statistics models with intelligent priors obtained from analysing a vast collection of contemporary river patterns based on previously published works. We applied machine learning techniques, namely neural networks and support vector machines, to extract multivariate non-parametric relations between geometrical characteristics of fluvial channels from the available data. An example demonstrates how ensuring geological realism helps to deliver more reliable predictions of a subsurface oil reservoir in a fluvial depositional environment.
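As a hedged illustration of extracting such a non-parametric relation, the sketch below fits a support vector regression of channel depth against width on synthetic data; the power-law generator and parameter values are invented stand-ins for the published channel datasets.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical channel data: learn a non-parametric width-depth relation that
# can later serve as a prior constraining geostatistical realisations.
rng = np.random.default_rng(42)
width = rng.uniform(20, 400, 200)                               # m
depth = 0.3 * width ** 0.6 * np.exp(rng.normal(0, 0.2, 200))    # noisy power law

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(width.reshape(-1, 1), depth)
print("predicted depth for a 150 m wide channel:",
      round(float(model.predict([[150.0]])[0]), 2), "m")
```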
NASA Astrophysics Data System (ADS)
Terrazzino, Alfonso; Volponi, Silvia; Borgogno Mondino, Enrico
2001-12-01
An investigation has been carried out, concerning remote sensing techniques, in order to assess their potential application to the energy system business: the most interesting results concern a new approach, based on digital data from remote sensing, to infrastructures with a large territorial distribution: in particular OverHead Transmission Lines (OHTLs), for the high voltage transmission and distribution of electricity over large distances. Remote sensing could in principle be applied to all the phases of the system lifetime, from planning to design, construction, management, monitoring and maintenance. In this article, a remote-sensing-based approach is presented, targeted at line planning: optimization of the OHTL path and layout according to different parameters (technical, environmental and industrial). Planning new OHTLs is of particular interest in emerging markets, where typically the cartography is missing or available only at low-accuracy scales (1:50.000 and lower), and often not updated. Multi-spectral images can be used to generate thematic maps of the region of interest for the planning (soil coverage). Digital Elevation Models (DEMs) allow the planners to easily access the morphologic information of the surface. Other auxiliary information from local laws, environmental instances, and international (IEC) standards can be integrated in order to perform an accurate optimized path choice and preliminary spotting of the OHTLs. This operation is carried out by an ABB proprietary optimization algorithm: the output is a preliminary path that best fits the optimization parameters of the line in a life cycle approach.
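One common way to cast such path optimization is a least-cost route over a raster cost map built from the thematic and DEM layers; the sketch below uses a plain Dijkstra search on a grid graph with an invented cost map, and is not the ABB proprietary algorithm.

```python
import networkx as nx
import numpy as np

def cheapest_route(cost_map, start, end):
    """Least-cost OHTL corridor over a raster cost map (sketch).

    cost_map combines thematic layers (soil cover, slope from the DEM,
    constraint buffers) into a per-cell crossing cost; the route is the
    minimum-cost 4-connected path between two towers.
    """
    rows, cols = cost_map.shape
    g = nx.grid_2d_graph(rows, cols)
    for u, v in g.edges:
        g[u][v]["weight"] = 0.5 * (cost_map[u] + cost_map[v])
    return nx.shortest_path(g, start, end, weight="weight")

rng = np.random.default_rng(3)
cost = rng.uniform(1.0, 5.0, size=(40, 60))
cost[10:30, 25:35] = 50.0                      # e.g. a protected area to avoid
path = cheapest_route(cost, (0, 0), (39, 59))
print("route length (cells):", len(path))
```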
Inverse association between urbanicity and treatment resistance in schizophrenia.
Wimberley, Theresa; Pedersen, Carsten B; MacCabe, James H; Støvring, Henrik; Astrup, Aske; Sørensen, Holger J; Horsdal, Henriette T; Mortensen, Preben B; Gasse, Christiane
2016-07-01
Living in a larger city is associated with increased risk of schizophrenia; and world-wide, consistent evidence shows that the higher the degree of urbanicity, the higher the risk of schizophrenia. However, the association between urbanicity and treatment-resistant schizophrenia (TRS), as a more severe form of schizophrenia or a separate entity of schizophrenia, has not been fully explored yet. We aimed to investigate the association between urbanicity and incidence of TRS. A large Danish population-based cohort of all individuals with a first schizophrenia diagnosis after 1996 was followed until 2013, applying survival analysis techniques. TRS was assessed using a treatment-based proxy, defined as the earliest observed instance of either clozapine initiation or hospital admission due to schizophrenia after having received two prior antipsychotic monotherapy trials of adequate duration. Among the 13,349 schizophrenia patients, 17.3% experienced TRS during follow-up (median follow-up: 7 years, inter-quartile range: 3-12 years). The 5-year risk of TRS ranged from 10.5% in the capital area to 17.6% in the rural areas. Compared with individuals with schizophrenia residing in the capital area, hazard ratios were 1.44 (1.31-1.59) for provincial areas and 1.60 (1.43-1.79) for rural areas. Higher rates of TRS were found in less urbanized areas. The different direction of urban-rural differences regarding TRS and schizophrenia risk may indicate urban-rural systematic differences in treatment practices, or different urban-rural aetiologic types of schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.
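For readers who want to reproduce this kind of analysis, a minimal Cox proportional-hazards sketch with the lifelines package follows; the person-level rows and dummy-coded urbanicity values are entirely hypothetical, not the Danish register data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical person-level data: time to treatment resistance (years),
# whether TRS was observed, and dummy-coded urbanicity at diagnosis
# (capital area as the reference category).
df = pd.DataFrame({
    "years_to_trs_or_censor": [2.1, 7.0, 4.5, 12.0, 1.3, 9.8, 3.2, 6.6],
    "trs_event":              [1,   0,   1,   0,    1,   0,   1,   0],
    "provincial":             [0,   1,   0,   0,    1,   1,   0,   0],
    "rural":                  [1,   0,   0,   1,    0,   0,   1,   0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_trs_or_censor", event_col="trs_event")
cph.print_summary()   # hazard ratios for provincial/rural vs. the capital area
```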
Efficient Algorithms for Handling Nondeterministic Automata
NASA Astrophysics Data System (ADS)
Vojnar, Tomáš
Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why, it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case of inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using the so called antichains, possibly combined with a use of suitable simulation relations (and, in the case of Büchi automata, the so called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
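A compact sketch of the antichain idea for inclusion checking between nondeterministic finite word automata follows; the automaton representation and the tiny example languages are ad hoc choices for illustration, and simulation-based pruning as well as the tree and Büchi variants are not shown.

```python
from collections import deque

def nfa_inclusion(A, B):
    """Antichain-based language inclusion check L(A) <= L(B) for NFAs (sketch).

    An NFA is (states, alphabet, delta, initials, finals) with
    delta[(q, a)] = set of successor states. The search explores pairs
    (p, S): p is reachable in A by some word w and S is the set of all
    B-states reachable by w. A pair with p final and S containing no
    B-final state witnesses a counterexample. Pairs subsumed by another
    pair with the same p and a smaller S are pruned (the antichain).
    """
    def post(states, a, delta):
        out = set()
        for q in states:
            out |= delta.get((q, a), set())
        return frozenset(out)

    _, alphabet, dA, initA, finA = A
    _, _, dB, initB, finB = B
    antichain = {}                       # p -> list of minimal macrostates S
    work = deque()

    def add(p, S):
        kept = antichain.setdefault(p, [])
        if any(T <= S for T in kept):    # already subsumed by a smaller S
            return
        kept[:] = [T for T in kept if not (S <= T)] + [S]
        work.append((p, S))

    for p in initA:
        add(p, frozenset(initB))
    while work:
        p, S = work.popleft()
        if p in finA and not (S & finB):
            return False                 # a counterexample word exists
        for a in alphabet:
            for p2 in dA.get((p, a), set()):
                add(p2, post(S, a, dB))
    return True

# A accepts non-empty words over {a}; B accepts every word over {a}.
A = ({0, 1}, {"a"}, {(0, "a"): {1}, (1, "a"): {1}}, {0}, {1})
B = ({0}, {"a"}, {(0, "a"): {0}}, {0}, {0})
print(nfa_inclusion(A, B), nfa_inclusion(B, A))   # True, False
```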
2015-12-01
combine satisficing behaviour with learning and adaptation through environmental feedback. This is sequential decision making with one alternative...next action that an opponent will most likely take in a strategic interaction. Also, cognitive models derived from instance-based learning theory (IBL... through instance-based learning. In Y. Li (Ed.), Lecture Notes in Computer Science (Vol. 6818, pp. 281-293). Heidelberg: Springer Berlin. Gonzalez, C
MicroRNA based Pan-Cancer Diagnosis and Treatment Recommendation.
Cheerla, Nikhil; Gevaert, Olivier
2017-01-13
The current state-of-the-art in cancer diagnosis and treatment is not ideal; diagnostic tests are accurate but invasive, and treatments are "one-size fits-all" instead of being personalized. Recently, miRNAs have garnered significant attention as cancer biomarkers, owing to their ease of access (circulating miRNA in the blood) and stability. There have been many studies showing the effectiveness of miRNA data in diagnosing specific cancer types, but few studies explore the role of miRNA in predicting treatment outcome. Here we go a step further, using tissue miRNA and clinical data across 21 cancers from 'The Cancer Genome Atlas' (TCGA) database. We use machine learning techniques to create an accurate pan-cancer diagnosis system, and a prediction model for treatment outcomes. Finally, using these models, we create a web-based tool that diagnoses cancer and recommends the best treatment options. We achieved 97.2% accuracy for classification using a support vector machine classifier with a radial basis function kernel. The accuracies improved to 99.9-100% when climbing up the embryonic tree and classifying cancers at different stages. We define the accuracy as the ratio of the total number of instances correctly classified to the total instances. The classifier also performed well, achieving greater than 80% sensitivity for many cancer types on independent validation datasets. Many miRNAs selected by our feature selection algorithm had strong previous associations to various cancers and tumor progression. Then, using miRNA, clinical and treatment data and encoding it in a machine-learning readable format, we built a prognosis predictor model to predict the outcome of treatment with 85% accuracy. We used this model to create a tool that recommends personalized treatment regimens. Both the diagnosis and prognosis models, incorporating semi-supervised learning techniques to improve their accuracies with repeated use, were uploaded online for easy access. Our research is a step towards the final goal of diagnosing cancer and predicting treatment recommendations using non-invasive blood tests.
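A minimal sketch of the classification setup described above (an RBF-kernel SVM on miRNA expression profiles), with random arrays standing in for the TCGA data and untuned hyperparameters.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Rows are miRNA expression profiles, labels are cancer types.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 300))                 # 600 samples x 300 miRNAs (stand-in)
y = rng.integers(0, 21, size=600)               # 21 cancer types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
# Accuracy = correctly classified instances / total instances, as in the paper.
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```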
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2002-06-01
Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
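The sketch below walks through steps (a)-(e) with the PyWavelets package on a synthetic noisy image; the wavelet family, decomposition level, and universal soft threshold are illustrative choices, not necessarily those used for the FOBOEH data.

```python
import numpy as np
import pywt

def wavelet_denoise(interferogram, family="db4", level=3):
    """Speckle reduction by wavelet thresholding (sketch of steps a-e).

    (a) estimate the noise level from the finest detail coefficients,
    (b/c) decompose with a chosen wavelet family, (d) soft-threshold the
    detail coefficients, (e) reconstruct the denoised interferogram.
    """
    coeffs = pywt.wavedec2(interferogram, family, level=level)
    # robust noise estimate from the diagonal details at the finest scale
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(interferogram.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, family)

fringes = np.sin(np.linspace(0, 8 * np.pi, 256))[None, :] * np.ones((256, 1))
noisy = fringes + 0.3 * np.random.randn(256, 256)     # speckle-like noise stand-in
print(wavelet_denoise(noisy).shape)
```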
NASA Technical Reports Server (NTRS)
Williams, Richard S. (Editor); Doarn, Charles R. (Editor); Shepanek, Marc A.
2017-01-01
In the realm of aerospace engineering and the physical sciences, we have developed laws of physics based on empirical and research evidence that reliably guide design, research, and development efforts. For instance, an engineer designs a system based on data and experience that can be consistently and repeatedly verified. This reproducibility depends on the consistency and dependability of the materials on which the engineer works and is subject to physics, geometry and convention. In the life sciences and medicine, these laws apply as well, but individuality introduces a host of variables into the mix, resulting in characteristics and outcomes that can be quite broad within a population of individuals. This individuality ranges from differences at the genetic and cellular level to differences in an individual's personality and abilities due to sex and gender, environment, education, etc.
NAPR: a Cloud-Based Framework for Neuroanatomical Age Prediction.
Pardoe, Heath R; Kuzniecky, Ruben
2018-01-01
The availability of cloud computing services has enabled the widespread adoption of the "software as a service" (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named "NAPR" (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6-89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.
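A hedged sketch of the second model family (Gaussian-process regression from cortical-thickness features to age) is shown below, with random stand-ins for the Freesurfer thickness maps and default kernel choices that are not NAPR's trained models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
n_subjects, n_vertices = 300, 50              # real thickness surfaces have far more vertices
age = rng.uniform(6, 89, n_subjects)
# Synthetic thickness maps that thin slightly with age, plus noise.
thickness = 3.0 - 0.01 * age[:, None] + 0.1 * rng.normal(size=(n_subjects, n_vertices))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(thickness[:250], age[:250])
pred, std = gp.predict(thickness[250:], return_std=True)
print("mean absolute error:",
      round(float(np.mean(np.abs(pred - age[250:]))), 1), "years")
```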
An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle
NASA Astrophysics Data System (ADS)
Gidden, Matthew J.
Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
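To make the exchange-to-transportation translation concrete, here is a generic transportation linear program solved with SciPy; the supplier/consumer roles, costs, and capacities are invented, and the DRE's preference and isotopic-quality constraints are not modelled.

```python
import numpy as np
from scipy.optimize import linprog

# Suppliers (e.g. fuel fabricators) ship material to consumers (reactors),
# minimizing cost subject to supply capacities and demand requests.
cost = np.array([[1.0, 4.0, 2.0],        # cost[i, j]: supplier i -> consumer j
                 [3.0, 1.0, 5.0]])
supply = np.array([60.0, 50.0])
demand = np.array([30.0, 40.0, 40.0])

n_s, n_c = cost.shape
A_supply = np.kron(np.eye(n_s), np.ones(n_c))      # sum_j x[i, j] <= supply[i]
A_demand = np.kron(np.ones(n_s), np.eye(n_c))      # sum_i x[i, j] >= demand[j]
res = linprog(cost.ravel(),
              A_ub=np.vstack([A_supply, -A_demand]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None))
print(res.x.reshape(n_s, n_c))                     # optimal flows for this exchange
```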
Dante's Comedy: precursors of psychoanalytic technique and psyche.
Szajnberg, Nathan Moses
2010-02-01
This paper uses a literary approach to explore what common ground exists in both psychoanalytic technique and views of the psyche, of 'person'. While Western literature has developed various views of psyche and person over centuries, there have been crystallizing, seminal portraits, for instance Shakespeare's perspective on what is human, some of which have endured to the present. By using Dante's Commedia, particularly the Inferno, a 14th century poem that both integrates and revises previous models of psyche and personhood, we can examine what features of psyche, and 'techniques' in soul-healing psychoanalysts have inherited culturally. Discovering basic features of technique and model of psyche we share as psychoanalysts permits us to explore why we have differences in variations on technique and models of inner life.
NASA Astrophysics Data System (ADS)
Fusco, Terence; Bi, Yaxin; Nugent, Chris; Wu, Shengli
2016-08-01
We can see that the data imputation approach using the Regression CTA performed more favourably than the alternative methods on this dataset. We now have evidence that this method is viable moving forward with further research in this area. The weighted distribution experiments have provided us with a more balanced and appropriate ratio for snail density classification purposes when using either the 3- or 5-category combination. The most desirable results are found when using 3 categories of SD with a weighted distribution of classes of 20-60-20. This information reflects the optimum classification accuracy across the data range and can be applied to any novel environment feature dataset pertaining to Schistosomiasis vector classification. ITSVM has provided us with a method of labelling SD data which we can use for classification in epidemic disease prediction research. The confidence level selection enables consistent labelling accuracy for bespoke requirements when classifying the data from each year. The proposed SMOTE Equilibrium method has yielded a slight increase with each multiple of synthetic instances added to the training dataset. The reduction of overfitting and the increase in data instances have produced a gradual increase in classification accuracy across the data for each year. We will now test what the optimal incremental increase in synthetic instances is across our data and apply it to our experiments in this research.
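A minimal sketch of the synthetic-oversampling step using the imbalanced-learn SMOTE implementation; the environment features and class probabilities are invented, and in practice the sampling strategy would be tuned to hit the 20-60-20 ratio discussed above rather than left at the default.

```python
from collections import Counter
import numpy as np
from imblearn.over_sampling import SMOTE

# Stand-in environment-feature dataset with imbalanced snail-density (SD) classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.choice([0, 1, 2], size=200, p=[0.1, 0.75, 0.15])

smote = SMOTE(random_state=0)            # adds synthetic minority-class instances
X_bal, y_bal = smote.fit_resample(X, y)
print("before:", Counter(y), "after:", Counter(y_bal))
```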
Identifying Crucial Parameter Correlations Maintaining Bursting Activity
Doloc-Mihu, Anca; Calabrese, Ronald L.
2014-01-01
Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (a leak current, Leak; a persistent K current, K2; and a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in the maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed because bursting activity was not maintained when they were varied individually. PMID:24945358
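As an illustration of the analysis, the sketch below runs PCA on a synthetic table of maximal conductances in which three columns are generated to be linearly correlated; the conductance values are invented and do not come from the HCO database.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Each row is one model instance's maximal conductances. Three columns share a
# common latent factor, mimicking a linearly correlated parameter set.
rng = np.random.default_rng(1)
latent = rng.uniform(0.5, 1.5, size=500)
conductances = np.column_stack([
    4.0 * latent,                                   # "gLeak"
    80.0 * latent + rng.normal(0, 2.0, 500),        # "gK2", correlated with gLeak
    5.0 * latent + rng.normal(0, 0.3, 500),         # "gP", correlated as well
    rng.uniform(0, 10, 500),                        # an uncorrelated conductance
])

Z = StandardScaler().fit_transform(conductances)
pca = PCA().fit(Z)
# A dominant first component indicates a linearly correlated set of conductances.
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```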
Comparative Learning in Partnerships: Control, Competition or Collaboration?
ERIC Educational Resources Information Center
Takahashi, Chie
2008-01-01
This paper examines the quality and development of relations between organisations and the ways in which these are informed by incidental learning experiences in two projects. The paper conceptualizes instances of inter-organisational learning (IOL) applying theories such as principal-agent, prisoners' dilemma and women's place in community…
ERIC Educational Resources Information Center
Donoghue, Gregory M.; Horvath, Jared C.
2016-01-01
Educators strive to understand and apply knowledge gained through scientific endeavours. Yet, within the various sciences of learning, particularly within educational neuroscience, there have been instances of seemingly contradictory or incompatible research findings and theories. We argue that this situation arises through confusion between…
29 CFR 779.232 - Franchise or other arrangements which create a larger enterprise.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 3 2014-07-01 2014-07-01 false Franchise or other arrangements which create a larger... Apply; Enterprise Coverage Leased Departments, Franchise and Other Business Arrangements § 779.232 Franchise or other arrangements which create a larger enterprise. (a) In other instances, franchise...
29 CFR 779.232 - Franchise or other arrangements which create a larger enterprise.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 3 2012-07-01 2012-07-01 false Franchise or other arrangements which create a larger... Apply; Enterprise Coverage Leased Departments, Franchise and Other Business Arrangements § 779.232 Franchise or other arrangements which create a larger enterprise. (a) In other instances, franchise...
29 CFR 779.232 - Franchise or other arrangements which create a larger enterprise.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 3 2013-07-01 2013-07-01 false Franchise or other arrangements which create a larger... Apply; Enterprise Coverage Leased Departments, Franchise and Other Business Arrangements § 779.232 Franchise or other arrangements which create a larger enterprise. (a) In other instances, franchise...
29 CFR 779.232 - Franchise or other arrangements which create a larger enterprise.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Franchise or other arrangements which create a larger... Apply; Enterprise Coverage Leased Departments, Franchise and Other Business Arrangements § 779.232 Franchise or other arrangements which create a larger enterprise. (a) In other instances, franchise...
29 CFR 779.232 - Franchise or other arrangements which create a larger enterprise.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Franchise or other arrangements which create a larger... Apply; Enterprise Coverage Leased Departments, Franchise and Other Business Arrangements § 779.232 Franchise or other arrangements which create a larger enterprise. (a) In other instances, franchise...
Goofy Guide Game: Affordances and Constraints for Engagement and Oral Communication in English
ERIC Educational Resources Information Center
Enticknap-Seppänen, Kaisa
2017-01-01
This study investigates tourism undergraduates' perceptions of learning engagement and oral communication in English through their experiences of testing a pilot purpose-designed educational digital game. Reflecting the implementation of digitalization strategy in universities of applied sciences in Finland, it examines whether single instances of…
ERIC Educational Resources Information Center
del Campo, Marisa A.; Kehle, Thomas J.
2016-01-01
There are many important phenomena involved in human functioning that are unnoticed, misunderstood, not applied, or do not pique the interest of the scientific community. Among these, "autonomous sensory meridian response" ("ASMR") and "frisson" are two very noteworthy instances that may prove to be therapeutically…
Frequency domain technique for a two-dimensional mapping of optical tissue properties
NASA Astrophysics Data System (ADS)
Bocher, Thomas; Beuthan, Juergen; Minet, Olaf; Naber, Rolf-Dieter; Mueller, Gerhard J.
1995-12-01
Locally and individually varying optical tissue parameters μ_a, μ_s, and g are responsible for non-negligible uncertainties in the interpretation of spectroscopic data in optical biopsy techniques. The intrinsic fluorescence signal, for instance, does not depend only on the fluorophore concentration but also on the amount of other background absorbers and on alterations of the scattering properties. Therefore neither a correct relative nor an absolute mapping of the lateral fluorophore concentration can be derived from the intrinsic fluorescence signal alone. Using MC simulations it can be shown that in time-resolved LIFS the simultaneously measured backscattered signal of the excitation wavelength (UV) can be used to develop a special, linearized rescaling algorithm to take into account the most dominant of these varying tissue parameters, which is μ_a,ex. In combination with biochemical calibration measurements we were able to perform fiber-based quantitative NADH concentration measurements. In this paper a new rescaling method for VIS and IR light in the frequency domain is proposed. It can be applied within the validity range of the diffusion approximation and provides full μ_a and μ_s rescaling in a 2-dimensional, non-contact mapping mode. The scanning device is planned to be used in combination with a standard operation microscope from ZEISS, Germany.
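For orientation, here is a sketch of the frequency-domain diffusion relation such a rescaling relies on: in an infinite medium the photon-density wave decays as exp(-kr)/r with a complex wavenumber set by μ_a and the reduced scattering coefficient μ_s'. The tissue values, modulation frequency, and sign convention below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def fd_diffusion_response(mu_a, mu_s_prime, f_mod, r, n_tissue=1.4):
    """Amplitude decay and phase lag of a photon-density wave (sketch).

    Infinite-medium diffusion approximation: fluence ~ exp(-k r)/r with
    complex k = sqrt((mu_a + i*omega/v) / D), D = 1/(3*(mu_a + mu_s')).
    Measuring amplitude and phase at several distances r lets mu_a and
    mu_s' be recovered.
    """
    v = 2.998e11 / n_tissue                  # speed of light in tissue, mm/s
    omega = 2 * np.pi * f_mod
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    k = np.sqrt((mu_a + 1j * omega / v) / D)
    amplitude = np.exp(-k.real * r) / r
    phase_deg = np.degrees(k.imag * r)
    return amplitude, phase_deg

# Typical soft-tissue values: mu_a = 0.01 /mm, mu_s' = 1.0 /mm, 100 MHz modulation.
amp, phase = fd_diffusion_response(0.01, 1.0, 100e6, r=20.0)
print(f"relative amplitude {amp:.2e}, phase lag {phase:.1f} deg")
```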
Three-dimensional light-tissue interaction models for bioluminescence tomography
NASA Astrophysics Data System (ADS)
Côté, D.; Allard, M.; Henkelman, R. M.; Vitkin, I. A.
2005-09-01
Many diagnostic and therapeutic approaches in medical physics today take advantage of the unique properties of light and its interaction with tissues. Because light scatters in tissue, our ability to develop these techniques depends critically on our knowledge of the distribution of light in tissue. Solutions to the diffusion equation can provide such information, but often lack the flexibility required for more general problems that involve, for instance, inhomogeneous optical properties, light polarization, arbitrary three-dimensional geometries, or arbitrary scattering. Monte Carlo techniques, which statistically sample the light distribution in tissue, offer a better alternative to analytical models. First, we discuss our implementation of a validated three-dimensional polarization-sensitive Monte Carlo algorithm and demonstrate its generality with respect to the geometry and scattering models it can treat. Second, we apply our model to bioluminescence tomography. After appropriate genetic modifications to cell lines, bioluminescence can be used as an indicator of cell activity, and is often used to study tumour growth and treatment in animal models. However, the amount of light escaping the animal is strongly dependent on the position and size of the tumour. Using forward models and structural data from magnetic resonance imaging, we show how the models can help to determine the location and size of tumour made of bioluminescent cancer cells in the brain of a mouse.
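A deliberately stripped-down Monte Carlo photon-transport sketch (isotropic scattering, infinite homogeneous medium, no polarization or geometry handling) to show the statistical sampling idea behind the full model described above; optical properties and photon counts are arbitrary.

```python
import numpy as np

def mean_absorption_depth(mu_a, mu_s, n_photons=2000, rng=None):
    """Minimal isotropic Monte Carlo photon transport in an infinite medium.

    Each photon random-walks with exponentially distributed step lengths
    (mean 1/mu_t); at each interaction it is absorbed with probability
    mu_a/mu_t, otherwise scattered into a uniformly random direction.
    Returns the mean depth at which absorption occurs - a toy stand-in for
    the full polarization-sensitive, geometry-aware simulation.
    """
    rng = rng or np.random.default_rng()
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])        # launched into the tissue
        while True:
            pos = pos + direction * rng.exponential(1.0 / mu_t)
            if rng.random() < mu_a / mu_t:            # absorbed
                depths.append(pos[2])
                break
            v = rng.normal(size=3)                    # isotropic scattering
            direction = v / np.linalg.norm(v)
    return float(np.mean(depths))

# mu_a = 0.1 /mm, mu_s = 10 /mm: photons scatter many times before absorption.
print("mean absorption depth:", round(mean_absorption_depth(0.1, 10.0), 2), "mm")
```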