Iris recognition based on key image feature extraction.
Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y
2008-01-01
In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.
ERIC Educational Resources Information Center
Kerr, Deirdre; Chung, Gregory K. W. K.
2012-01-01
The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…
Variations in the implementation and characteristics of chiropractic services in VA.
Lisi, Anthony J; Khorsan, Raheleh; Smith, Monica M; Mittman, Brian S
2014-12-01
In 2004, the US Department of Veterans Affairs expanded its delivery of chiropractic care by establishing onsite chiropractic clinics at select facilities across the country. Systematic information regarding the planning and implementation of these clinics and describing their features and performance is lacking. To document the planning, implementation, key features and performance of VA chiropractic clinics, and to identify variations and their underlying causes and key consequences as well as their implications for policy, practice, and research on the introduction of new clinical services into integrated health care delivery systems. Comparative case study of 7 clinics involving site visit-based and telephone-based interviews with 118 key stakeholders, including VA clinicians, clinical leaders and administrative staff, and selected external stakeholders, as well as reviews of key documents and administrative data on clinic performance and service delivery. Interviews were recorded, transcribed, and analyzed using a mixed inductive (exploratory) and deductive approach. Interview data revealed considerable variations in clinic planning and implementation processes and clinic features, as well as perceptions of clinic performance and quality. Administrative data showed high variation in patterns of clinic patient care volume over time. A facility's initial willingness to establish a chiropractic clinic, along with a higher degree of perceived evidence-based and collegial attributes of the facility chiropractor, emerged as key factors associated with higher and more consistent delivery of chiropractic services and higher perceived quality of those services.
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and these models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. In terms of principles, the main contributions are that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. In terms of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision-for a class of objects without prior knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features-within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming the general knowledge of a class of objects-the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations and average semantic values, which is a concise description of the class; and 4) achieving higher-level cognition and dynamic updating-for a test image, the model can achieve classification and subclass semantic descriptions, and the test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.
NASA Astrophysics Data System (ADS)
Belciug, Smaranda; Serbanescu, Mircea-Sebastian
2015-09-01
Feature selection is considered a key factor in classification/decision problems. It is currently used in designing intelligent decision systems to choose the best features which allow the best performance. This paper proposes a regression-based approach to select the most important predictors to significantly increase the classification performance. Application to breast cancer detection and recurrence using publicly available datasets proved the efficiency of this technique.
From big data to rich data: The key features of athlete wheelchair mobility performance.
van der Slikke, R M A; Berger, M A M; Bregman, D J J; Veeger, H E J
2016-10-03
Quantitative assessment of an athlete's individual wheelchair mobility performance is one prerequisite needed to evaluate game performance, improve wheelchair settings and optimize training routines. Inertial Measurement Unit (IMU) based methods can be used to perform such quantitative assessment, providing a large amount of kinematic data. The goal of this research was to reduce that large amount of data to a set of key features best describing wheelchair mobility performance in match play and to present them in a meaningful way for both scientists and athletes. To test the discriminative power, wheelchair mobility characteristics of athletes with different performance levels were compared. The wheelchair kinematics of 29 (inter-)national level athletes were measured during a match using three inertial sensors mounted on the wheelchair. Principal component analysis was used to reduce 22 kinematic outcomes to a set of six outcomes regarding linear and rotational movement; speed and acceleration; average and best performance. In addition, it was explored whether groups of athletes with known performance differences based on their impairment classification also differed with respect to these key outcomes, using univariate general linear models. For all six key outcomes, classification was shown to be a significant factor (p<0.05). We composed a set of six key kinematic outcomes that accurately describe wheelchair mobility performance in match play. The key kinematic outcomes were displayed in an easy-to-interpret way, usable for athletes, coaches and scientists. This standardized representation enables comparison of different wheelchair sports regarding wheelchair mobility, but also evaluation at the level of an individual athlete. By this means, the tool could enhance further development of wheelchair sports in general.
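A minimal sketch of the kind of principal-component reduction described above, assuming the 22 kinematic outcomes are already available as a per-athlete matrix; the data, the choice of scikit-learn, and all variable names are illustrative rather than taken from the study:

```python
# Sketch: reducing many kinematic outcomes to a few key components with PCA.
# Assumes a (n_athletes x 22) matrix of IMU-derived outcomes; values are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
kinematic_outcomes = rng.normal(size=(29, 22))   # placeholder for real per-athlete outcomes

# Standardize so outcomes with different units contribute comparably.
scaled = StandardScaler().fit_transform(kinematic_outcomes)

# Keep six components, mirroring the six key outcomes reported in the study.
pca = PCA(n_components=6)
key_outcomes = pca.fit_transform(scaled)

print(pca.explained_variance_ratio_)   # share of variance captured by each component
print(key_outcomes.shape)              # (29, 6)
```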
Highlighting High Performance: Michael E. Capuano Early Childhood Center; Somerville, Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2006-03-01
This brochure describes the key high-performance building features of the Michael E. Capuano Early Childhood Center. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.
Performance-Based Funding Brief
ERIC Educational Resources Information Center
Washington Higher Education Coordinating Board, 2011
2011-01-01
A number of states have made progress in implementing performance-based funding (PBF) and accountability. This policy brief summarizes the main features of performance-based funding systems in three states: Tennessee, Ohio, and Indiana. The brief also identifies key issues that states considering performance-based funding must address, as well as…
Characterizing Feature Matching Performance Over Long Time Periods (Author’s Manuscript)
2015-01-05
...older imagery. These applications, including approaches to geo-location, geo-orientation [13], geo-tagging [16], landmark recognition [23], image... orientation between features is less than 10 degrees. We calculate the percent of features from the reference image that fit into each of these three... always because the key point detection algorithm did not find feature points at the same locations and orientation. 5. Conclusions: In this paper, we offer...
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association processes. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. This algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm relies on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating the potential use of the new algorithm in real-time systems.
Natural texture retrieval based on perceptual similarity measurement
NASA Astrophysics Data System (ADS)
Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun
2018-04-01
A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key idea of the proposed scheme is to predict perceptual similarity by learning a non-linear mapping from image feature space to perceptual texture space using a Random Forest. We test the method on a natural texture dataset and apply it to a new wallpaper dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves retrieval performance over traditional image features.
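As a rough illustration of the idea of learning a non-linear mapping from image feature space to perceptual similarity, the sketch below fits a Random Forest regressor on placeholder features and ratings; the feature extraction, dataset, and variable names are assumptions, not the paper's actual pipeline:

```python
# Sketch: learning a non-linear mapping from image features to perceptual
# similarity scores with a Random Forest. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
image_features = rng.normal(size=(500, 64))       # placeholder texture descriptors
perceptual_scores = rng.uniform(0, 1, size=500)   # placeholder human similarity ratings

X_train, X_test, y_train, y_test = train_test_split(
    image_features, perceptual_scores, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted perceptual similarity could then rank database textures against a query.
print("R^2 on held-out data:", model.score(X_test, y_test))
```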
A Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization
NASA Astrophysics Data System (ADS)
Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.
2017-07-01
Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in planetary surface environments. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, is developed in the research. In order to perform an exhaustive evaluation, stereo images simulated under different baselines, pitch angles, and intervals between adjacent rover locations are used as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
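A small sketch of how such a detector comparison can be set up with OpenCV (version 4.4 or later assumed, where SIFT is in the main module); because SURF is patented and often unavailable in standard builds, SIFT, FAST and ORB stand in here, and the synthetic frame and evenness proxy are illustrative only:

```python
# Sketch: comparing keypoint counts and spatial spread across detectors.
import cv2
import numpy as np

# Placeholder frame; in practice this would be a rover stereo image loaded from disk.
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

detectors = {
    "SIFT": cv2.SIFT_create(),
    "FAST": cv2.FastFeatureDetector_create(),
    "ORB": cv2.ORB_create(nfeatures=2000),
}

for name, det in detectors.items():
    kps = det.detect(gray, None)
    pts = np.array([kp.pt for kp in kps]) if kps else np.empty((0, 2))
    spread = pts.std(axis=0) if len(pts) else np.zeros(2)   # crude evenness proxy
    print(f"{name}: {len(kps)} keypoints, spatial std = {spread}")
```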
Performance Evaluation of the United Nations Environment Programme Air Quality Monitoring Unit
This report defines the specifics of the environmental test conditions used in the evaluation (systems and conditions), data observations, summarization of key performance evaluation findings, and ease of use features concerning the UNEP pod.
Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min
2015-01-01
This paper focuses on the improvement of the diagnostic accuracy of focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions were divided into 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features, composed of 5 first-order statistics, 18 gray-level co-occurrence matrix features, 18 Laws' features, and echogenicity, were extracted. A total of 29 key features selected by principal component analysis were used as the set of inputs for a feed-forward neural network. For each lesion, the performance of the diagnosis was evaluated by using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The results of the experiment indicate that the proposed method performs well, with a diagnostic accuracy of over 96% among all focal liver lesion groups (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy was slightly increased when echogenicity was included in the optimal feature set. These results indicate that it is possible for the proposed method to be applied clinically.
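The sketch below illustrates a pipeline of the same general shape (first-order statistics plus gray-level co-occurrence features feeding a feed-forward network); the ROIs, labels, chosen GLCM properties, and the scikit-image/scikit-learn APIs are assumptions standing in for the paper's 42-feature set:

```python
# Sketch: GLCM texture features feeding a feed-forward neural network.
# scikit-image >= 0.19 naming (graycomatrix/graycoprops) is assumed.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(roi):
    """First-order statistics plus a few GLCM properties for one lesion ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array([roi.mean(), roi.std()] + props)

rng = np.random.default_rng(1)
rois = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)  # fake ultrasound ROIs
labels = rng.integers(0, 2, size=60)                            # e.g. cyst vs. hemangioma

X = np.array([glcm_features(r) for r in rois])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```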
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where the number of features, represented by probesets, is typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance compared with that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association processes. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. This algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm relies on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating the potential use of the new algorithm in real-time systems. PMID:28481285
Kodak phase-change media for optical tape applications
NASA Technical Reports Server (NTRS)
Tyan, Yuan-Sheng; Preuss, Donald R.; Olin, George R.; Vazan, Fridrich; Pan, Kee-Chuan; Raychaudhuri, Pranab. K.
1993-01-01
The SbInSn phase-change write-once optical medium developed by Eastman Kodak Company is particularly suitable for development into the next generation optical tape media. Its performance for optical recording has already been demonstrated in some of the highest performance optical disk systems. Some of the key performance features are presented.
Chaaraoui, Alexandros Andre; Flórez-Revuelta, Francisco
2014-01-01
This paper presents a novel silhouette-based feature for vision-based human action recognition, which relies on the contour of the silhouette and a radial scheme. Its low dimensionality and ease of extraction make it well suited to real-time scenarios. This feature is used in a learning algorithm that, by means of model fusion of multiple camera streams, builds a bag of key poses, which serves as a dictionary of known poses and allows converting the training sequences into sequences of key poses. These are used in order to perform action recognition by means of a sequence matching algorithm. Experimentation on three different datasets returns high and stable recognition rates. To the best of our knowledge, this paper presents the highest results so far on the MuHAVi-MAS dataset. The method is suitable for real time, since it easily performs above video frequency. Therefore, the related requirements imposed by applications such as ambient-assisted living services are successfully fulfilled.
Towards a Theory of Identity and Agency in Coming to Learn Mathematics
ERIC Educational Resources Information Center
Grootenboer, Peter; Jorgensen, Robyn
2009-01-01
In writing this paper we draw considerably on the work of Jo Boaler and Leone Burton. Boaler's studies of classrooms have been particularly poignant in alerting the mathematics education community to a number of key features of successful classrooms, and how such features can turn around the successes for students who traditionally perform poorly…
Developing an Approach for Comparing Students' Multimodal Text Creations: A Case Study
ERIC Educational Resources Information Center
Levy, Mike; Kimber, Kay
2009-01-01
Classroom teachers routinely make judgments on the quality of their students' work based on their recognition of how effectively the student has assembled key features of the genre or the medium. Yet how readily can teachers talk about the features of student-created multimodal texts in ways that can improve learning and performance? This article…
High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB
Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven
2013-01-01
Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363
Evaluation of the Aurora Application Shade Measurement Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-01
Aurora is an integrated, Web-based application that helps solar installers perform sales, engineering design, and financial analysis. One of Aurora's key features is its high-resolution remote shading analysis.
Hooper, Paula; Knuiman, Matthew; Foster, Sarah; Giles-Corti, Billie
2015-11-01
Planning policy makers are requesting clearer guidance on the key design features required to build neighbourhoods that promote active living. Using a backwards stepwise elimination procedure (logistic regression with generalised estimating equations adjusting for demographic characteristics, self-selection factors, stage of construction and scale of development), this study identified specific design features (n=16) from an operational planning policy ("Liveable Neighbourhoods") that showed the strongest associations with walking behaviours (measured using the Neighbourhood Physical Activity Questionnaire). The interacting effects of design features on walking behaviours were also investigated. The urban design features identified were grouped into the "building blocks of a Liveable Neighbourhood", reflecting the scale, importance and sequencing of the design and implementation phases required to create walkable, pedestrian-friendly developments.
Earth Observing Scanning Polarimeter (EOSP), phase B
NASA Technical Reports Server (NTRS)
1990-01-01
Evaluations performed during a Phase B study directed towards defining an optimal design for the Earth Observing Scanning Polarimeter (EOSP) instrument are summarized. An overview of the experiment approach is included, which provides a summary of the scientific objectives, the background of the measurement approach, and the measurement method. In the instrumentation section, details of the design are discussed, starting with the key instrument features required to accomplish the scientific objectives and a system characterization in terms of the Stokes vector/Mueller matrix formalism. This is followed by a detailing of the instrument design concept, the design of the individual elements of the system, the predicted performance, and a summary of appropriate instrument testing and calibration. The selected design makes use of key features of predecessor polarimeters and is fully compatible with the Earth Observing System spacecraft requirements.
Medicaid Nursing Home Pay for Performance: Where Do We Stand?
ERIC Educational Resources Information Center
Arling, Greg; Job, Carol; Cooke, Valerie
2009-01-01
Purpose: Nursing home pay-for-performance (P4P) programs are intended to maximize the value obtained from public and private expenditures by measuring and rewarding better nursing home performance. We surveyed the 6 states with operational P4P systems in 2007. We describe key features of six Medicaid nursing home P4P systems and make…
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal component analysis. Then a feature descriptor for each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the descriptor similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
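The final SVD step can be illustrated with the standard least-squares rigid-alignment (Kabsch-style) computation below; the synthetic correspondences are placeholders for key-point matches that have already been filtered:

```python
# Sketch: recovering the rigid transform between matched key points with SVD.
import numpy as np

def rigid_transform(source, target):
    """Least-squares rotation R and translation t with R @ p + t mapping source onto target."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical, already-filtered correspondences (e.g. RANSAC inliers).
rng = np.random.default_rng(3)
src = rng.normal(size=(50, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:                           # make it a proper rotation
    Q[:, 0] *= -1
tgt = src @ Q.T + np.array([0.5, -1.0, 2.0])

R, t = rigid_transform(src, tgt)
print("rotation recovered:", np.allclose(R, Q), "translation:", t)
```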
The High-Stakes Effects of "Low-Stakes" Testing
ERIC Educational Resources Information Center
Papay, John P.; Murnane, Richard J.; Willett, John B.
2011-01-01
In this paper, the authors examine how information that students receive about their academic performance affects their decisions to enroll in post-secondary education. In particular, they look at one specific piece of data--student performance on the state standardized mathematics test in grades 8 and 10 in Massachusetts. One key feature of such…
Vaccine adverse event text mining system for extracting features from vaccine safety reports.
Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert
2012-01-01
To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives to aid in the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined the primary features (diagnosis and cause of death) and secondary features (e.g., symptoms) for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports in three sequential evaluations of 100 reports each. Moreover, we evaluated the VaeTM contribution to case classification; an information retrieval-based approach was used for the identification of anaphylaxis cases in a set of reports and was compared with two other methods: a dedicated text classifier and an online tool. The performance metrics of VaeTM were text mining metrics: recall, precision and F-measure. We also conducted a qualitative difference analysis and calculated sensitivity and specificity for classification of anaphylaxis cases based on the above three approaches. VaeTM performed best in extracting the diagnosis, second level diagnosis, drug, vaccine, and lot number features (lenient F-measure in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%); this was equal to that of the text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.
NASA Technical Reports Server (NTRS)
Gaston, S.; Wertheim, M.; Orourke, J. A.
1973-01-01
Summary, consolidation and analysis of specifications, manufacturing process and test controls, and performance results for OAO-2 and OAO-3 lot 20 Amp-Hr sealed nickel cadmium cells and batteries are reported. Correlation of improvements in control requirements with performance is a key feature. Updates for a cell/battery computer model to improve performance prediction capability are included. Applicability of regression analysis computer techniques to relate process controls to performance is checked.
The impact of feature selection on one and two-class classification performance for plant microRNAs.
Khalifa, Waleed; Yousef, Malik; Saçar Demirci, Müşerref Duygu; Allmer, Jens
2016-01-01
MicroRNAs (miRNAs) are short nucleotide sequences that form a typical hairpin structure which is recognized by a complex enzyme machinery. This ultimately leads to the incorporation of 18-24 nt long mature miRNAs into RISC, where they act as recognition keys to aid in the regulation of target mRNAs. Determining miRNAs experimentally is an involved process, and therefore machine learning is used to complement such endeavors. The success of machine learning mostly depends on proper input data and appropriate features for parameterization of the data. Although two-class classification (TCC) is generally used in the field, one-class classification (OCC) has been tried for pre-miRNA detection because negative examples are hard to come by. Since both positive and negative examples are currently somewhat limited, feature selection can prove to be vital for furthering the field of pre-miRNA detection. In this study, we compare the performance of OCC and TCC using eight feature selection methods and seven different plant species providing positive pre-miRNA examples. Feature selection was very successful for OCC, where the best feature selection method achieved an average accuracy of 95.6%, thereby being ∼29% better than the worst method, which achieved 66.9% accuracy. While the performance is comparable to TCC, which performs up to 3% better than OCC, TCC is much less affected by feature selection, and its largest performance gap is ∼13%, which only occurs for two of the feature selection methodologies. We conclude that feature selection is crucially important for OCC and that it can perform on par with TCC given the proper set of features.
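The sketch below illustrates the OCC-versus-TCC comparison on synthetic data, with a simple univariate feature selection step standing in for the eight methods compared in the study; all data and parameter choices are illustrative:

```python
# Sketch: one-class vs. two-class classification after a feature selection step.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(7)
X_pos = rng.normal(loc=1.0, size=(300, 40))    # stand-in for pre-miRNA examples
X_neg = rng.normal(loc=0.0, size=(300, 40))    # stand-in for negative examples
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 300 + [0] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Feature selection fitted on the training split only.
selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# Two-class classifier sees both classes; one-class model sees positives only.
tcc = SVC().fit(X_tr_s, y_tr)
occ = OneClassSVM(nu=0.1).fit(X_tr_s[y_tr == 1])

tcc_acc = tcc.score(X_te_s, y_te)
occ_pred = (occ.predict(X_te_s) == 1).astype(int)   # +1 means "positive class"
occ_acc = (occ_pred == y_te).mean()
print(f"TCC accuracy: {tcc_acc:.3f}, OCC accuracy: {occ_acc:.3f}")
```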
Work zone traffic management synthesis : work zone pedestrian protection
DOT National Transportation Integrated Search
1997-08-01
This Long Term Pavement Performance (LTPP) data analysis was intended to examine, in a practical way, the LTPP database and to identify the site conditions and design features that significantly affect transverse joint faulting. Key products develope...
Evaluation of security algorithms used for security processing on DICOM images
NASA Astrophysics Data System (ADS)
Chen, Xiaomeng; Shuai, Jie; Zhang, Jianguo; Huang, H. K.
2005-04-01
In this paper, we developed a security approach to provide security measures and features in PACS image acquisition and teleradiology image transmission. The security processing of medical images was based on public key infrastructure (PKI) and included digital signatures and data encryption to achieve the security features of confidentiality, privacy, authenticity, integrity, and non-repudiation. There are many algorithms which can be used in PKI for data encryption and digital signatures. In this research, we select several algorithms to perform security processing on different DICOM images in a PACS environment, evaluate the security processing performance of these algorithms, and examine the relationship between performance and image types, sizes, and implementation methods.
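A minimal sketch of the kind of primitives being timed, using the Python `cryptography` package; the placeholder byte string stands in for DICOM pixel data, and the specific algorithm choices (RSA-PSS signatures, Fernet/AES encryption) are illustrative rather than the paper's selected set:

```python
# Sketch: timing a digital signature and symmetric encryption of image bytes.
import os
import time
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

image_bytes = os.urandom(512 * 512 * 2)          # stand-in for a 512x512 16-bit image

# Digital signature (authenticity, integrity, non-repudiation).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
start = time.perf_counter()
signature = private_key.sign(image_bytes, pss, hashes.SHA256())
sign_time = time.perf_counter() - start

# Symmetric encryption of the pixel data (confidentiality).
fernet = Fernet(Fernet.generate_key())
start = time.perf_counter()
ciphertext = fernet.encrypt(image_bytes)
encrypt_time = time.perf_counter() - start

# verify() raises an exception if the signature does not match.
private_key.public_key().verify(signature, image_bytes, pss, hashes.SHA256())
print(f"sign: {sign_time*1000:.1f} ms, encrypt: {encrypt_time*1000:.1f} ms, "
      f"ciphertext size: {len(ciphertext)} bytes")
```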
Solving the mystery of the internal structure of casein micelles.
Ingham, B; Erlangga, G D; Smialowska, A; Kirby, N M; Wang, C; Matia-Merino, L; Haverkamp, R G; Carr, A J
2015-04-14
The interpretation of milk X-ray and neutron scattering data in relation to the internal structure of the casein micelle is an ongoing debate. We performed resonant X-ray scattering measurements on liquid milk and conclusively identified key scattering features, namely those corresponding to the size of and the distance between colloidal calcium phosphate particles. An X-ray scattering feature commonly assigned to the particle size is instead due to protein inhomogeneities.
Giraldo, Sergio I; Ramirez, Rafael
2016-01-01
Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and the tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules.
Giraldo, Sergio I.; Ramirez, Rafael
2016-01-01
Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and the tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules. PMID:28066290
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be very devastating for patients. Traditional assessment methods are solely dependent on the clinician's judgment and are therefore time-consuming and subjective in nature. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining an optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of the LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach. Experiments show that the proposed method is efficient. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and key point-based methods has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution as it describes the changes in iris exposure while performing certain facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
On analyzing colour constancy approach for improving SURF detector performance
NASA Astrophysics Data System (ADS)
Zulkiey, Mohd Asyraf; Zaki, Wan Mimi Diyana Wan; Hussain, Aini; Mustafa, Mohd. Marzuki
2012-04-01
A robust key point detector plays a crucial role in obtaining good tracking features. The main challenge in outdoor tracking is illumination change due to various reasons such as weather fluctuation and occlusion. This paper approaches the illumination change problem by transforming the input image with a colour constancy algorithm before applying the SURF detector. The masked grey world approach is chosen because of its ability to perform well under local as well as global illumination change. Every image is transformed to imitate the canonical illuminant, and a Gaussian distribution is used to model the global change. The simulation results show that the average number of detected key points increased by 69.92%. Moreover, cases of improved performance far outweigh cases of degradation, with the former improving by 215.23% on average. The approach is suitable for tracking implementations where sudden illumination change occurs frequently and robust key point detection is needed.
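A simple grey-world style normalisation of the sort applied before key point detection might look like the sketch below; the masking rule (dropping near-black and near-saturated pixels) and the synthetic frame are assumptions, and the exact masked grey world formulation in the paper may differ:

```python
# Sketch: grey-world colour constancy applied before key point detection.
import numpy as np

def masked_grey_world(image, low=10, high=245):
    """Scale each channel so its mean (over unmasked pixels) matches the global grey level."""
    img = image.astype(np.float64)
    mask = np.all((img > low) & (img < high), axis=-1)     # drop very dark/saturated pixels
    channel_means = img[mask].mean(axis=0)                  # per-channel means over the mask
    grey = channel_means.mean()
    corrected = img * (grey / channel_means)
    return np.clip(corrected, 0, 255).astype(np.uint8)

rng = np.random.default_rng(5)
frame = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)   # placeholder frame
balanced = masked_grey_world(frame)
print(balanced.shape, balanced.dtype)
# A detector such as cv2.SIFT_create() (SURF is often unavailable in stock OpenCV)
# would then be run on `balanced` instead of the raw frame.
```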
NASA Astrophysics Data System (ADS)
Xiong, Wei; Qiu, Bo; Tian, Qi; Mueller, Henning; Xu, Changsheng
2005-04-01
Medical image retrieval is still mainly a research domain with a large variety of applications and techniques. With the ImageCLEF 2004 benchmark, an evaluation framework has been created that includes a database, query topics and ground truth data. Eleven systems (with a total of more than 50 runs) compared their performance in various configurations. The results show that no single feature performs well on all query tasks. Key to successful retrieval is rather the selection of features and feature weights based on a specific set of input features, and thus on the query task. In this paper we propose a novel method based on query topic dependent image features (QTDIF) for content-based medical image retrieval. These feature sets are designed to capture both inter-category and intra-category statistical variations to achieve good retrieval performance in terms of recall and precision. We have used Gaussian Mixture Models (GMM) and blob representation to model medical images and construct the proposed novel QTDIF for CBIR. Finally, trained multi-class support vector machines (SVM) are used for image similarity ranking. The proposed methods have been tested on the Casimage database with around 9000 images, for the 26 image topics used for ImageCLEF 2004. The retrieval performance has been compared with the medGIFT system, which is based on the GNU Image Finding Tool (GIFT). The experimental results show that the proposed QTDIF-based CBIR can provide significantly better performance than systems based on general features only.
NASA Astrophysics Data System (ADS)
Xu, Lili; Luo, Shuqian
2010-11-01
Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role for both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical morphological black top-hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM, with a combination of features as the input, shows the best discriminating performance.
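A compact sketch of the two main stages (morphological black top-hat candidate enhancement, then an SVM with a quadratic polynomial kernel evaluated via ROC analysis); the fundus image, candidate features and labels are synthetic placeholders:

```python
# Sketch: black top-hat candidate enhancement plus a polynomial-kernel SVM with ROC.
import numpy as np
from skimage.morphology import black_tophat, disk
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(11)
green_channel = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)  # fake fundus channel

# Dark, small structures (microaneurysm candidates) enhanced by the black top-hat.
enhanced = black_tophat(green_channel, disk(5))
candidate_mask = enhanced > np.percentile(enhanced, 99)
print("candidate pixels:", int(candidate_mask.sum()))

# Placeholder per-candidate feature vectors and labels for the SVM stage.
X = rng.normal(size=(400, 12))
y = rng.integers(0, 2, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="poly", degree=2, probability=True, random_state=0).fit(X_tr, y_tr)
scores = svm.predict_proba(X_te)[:, 1]
print("ROC AUC on held-out candidates:", roc_auc_score(y_te, scores))
```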
Xu, Lili; Luo, Shuqian
2010-01-01
Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role for both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical morphological black top-hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM, with a combination of features as the input, shows the best discriminating performance.
A Novel Re-keying Function Protocol (NRFP) For Wireless Sensor Network Security
Abdullah, Maan Younis; Hua, Gui Wei; Alsharabi, Naif
2008-01-01
This paper describes a novel re-keying function protocol (NRFP) for wireless sensor network security. A re-keying process management system for sensor networks is designed to support in-network processing. The design of the protocol is motivated by decentralized key management for wireless sensor networks (WSNs), covering key deployment, key refreshment, and key establishment. NRFP supports the establishment of novel administrative functions for sensor nodes that derive/re-derive a session key for each communication session. The protocol proposes direct connection, indirect connection and hybrid connection. NRFP also includes an efficient protocol for local broadcast authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing. Security and performance analysis shows that it is very efficient in computation, communication and storage, and that NRFP is also effective in defending against many sophisticated attacks. PMID:27873963
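A one-way key chain of the kind used for the broadcast authentication component can be sketched with plain hashing, as below; chain length, key size, and the verification window are illustrative choices, not protocol parameters from the paper:

```python
# Sketch: a one-way key chain for local broadcast authentication. Keys are generated
# by repeated hashing and disclosed in reverse order, so a receiver holding a later
# commitment can verify earlier keys.
import hashlib
import os

def build_key_chain(length, seed=None):
    """Return [K_0, K_1, ..., K_n] where K_i = H(K_{i+1}); K_0 is the public commitment."""
    seed = seed or os.urandom(32)
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return list(reversed(chain))          # chain[0] is the commitment

def verify_key(candidate, last_verified, max_gap=10):
    """Check that hashing `candidate` a small number of times reaches a known key."""
    value = candidate
    for _ in range(max_gap):
        value = hashlib.sha256(value).digest()
        if value == last_verified:
            return True
    return False

chain = build_key_chain(100)
commitment = chain[0]                          # distributed to all sensor nodes in advance
print(verify_key(chain[3], commitment))        # True: K_3 hashes down to the commitment
print(verify_key(os.urandom(32), commitment))  # False for an arbitrary value
```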
A Novel Re-keying Function Protocol (NRFP) For Wireless Sensor Network Security.
Abdullah, Maan Younis; Hua, Gui Wei; Alsharabi, Naif
2008-12-04
This paper describes a novel re-keying function protocol (NRFP) for wireless sensor network security. A re-keying process management system for sensor networks is designed to support in-network processing. The design of the protocol is motivated by decentralized key management for wireless sensor networks (WSNs), covering key deployment, key refreshment, and key establishment. NRFP supports the establishment of novel administrative functions for sensor nodes that derive/re-derive a session key for each communication session. The protocol proposes direct connection, indirect connection and hybrid connection. NRFP also includes an efficient protocol for local broadcast authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing. Security and performance analysis shows that it is very efficient in computation, communication and storage, and that NRFP is also effective in defending against many sophisticated attacks.
Updated Mars Mission Architectures Featuring Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Rodriguez, Mitchell A.; Percy, Thomas K.
2017-01-01
Nuclear thermal propulsion (NTP) can potentially enable routine human exploration of Mars and the solar system. By using nuclear fission instead of a chemical combustion process, and using hydrogen as the propellant, NTP systems promise rocket efficiencies roughly twice those of the best chemical rocket engines currently available. The most recent major Mars architecture study featuring NTP was the Design Reference Architecture 5.0 (DRA 5.0), performed in 2009. Currently, the predominant transportation options being considered are solar electric propulsion (SEP) and chemical propulsion; however, given NTP's capabilities, an updated architectural analysis is needed. This paper provides a top-level overview of several different architectures featuring updated NTP performance data. New architectures presented include a proposed update to the DRA 5.0 as well as an investigation of architectures based on the current Evolvable Mars Campaign, which is the focus of NASA's current analyses for the Journey to Mars. Architectures investigated leverage the latest information relating to NTP performance and design considerations and address new support elements not available at the time of DRA 5.0, most notably the Orion crew module and the Space Launch System (SLS). The paper provides a top-level quantitative comparison of key performance metrics as well as a qualitative discussion of improvements and key challenges still to be addressed. Preliminary results indicate that the updated NTP architectures can significantly reduce the campaign mass and, subsequently, the costs for assembly and the number of launches.
Keys and seats: Spatial response coding underlying the joint spatial compatibility effect.
Dittrich, Kerstin; Dolk, Thomas; Rothe-Wulf, Annelie; Klauer, Karl Christoph; Prinz, Wolfgang
2013-11-01
Spatial compatibility effects (SCEs) are typically observed when participants have to execute spatially defined responses to nonspatial stimulus features (e.g., the color red or green) that randomly appear to the left and the right. Whereas a spatial correspondence of stimulus and response features facilitates response execution, a noncorrespondence impairs task performance. Interestingly, the SCE is drastically reduced when a single participant responds to one stimulus feature (e.g., green) by operating only one response key (individual go/no-go task), whereas a full-blown SCE is observed when the task is distributed between two participants (joint go/no-go task). This joint SCE (a.k.a. the social Simon effect) has previously been explained by action/task co-representation, whereas alternative accounts ascribe joint SCEs to spatial components inherent in joint go/no-go tasks that allow participants to code their responses spatially. Although increasing evidence supports the idea that spatial rather than social aspects are responsible for the emergence of joint SCEs, it is still unclear which component(s) the spatial coding refers to: the spatial orientation of response keys, the spatial orientation of responding agents, or both. By varying the spatial orientation of the responding agents (Exp. 1) and of the response keys (Exp. 2), independent of the spatial orientation of the stimuli, in the present study we found joint SCEs only when both the seating and the response key alignment matched the stimulus alignment. These results provide evidence that spatial response coding refers not only to the response key arrangement, but also to the often neglected spatial orientation of the responding agents.
Secure image retrieval with multiple keys
NASA Astrophysics Data System (ADS)
Liang, Haihua; Zhang, Xinpeng; Wei, Qiuhan; Cheng, Hang
2018-03-01
This article proposes a secure image retrieval scheme under a multiuser scenario. In this scheme, the owner first encrypts and uploads images and their corresponding features to the cloud; then, the user submits the encrypted feature of the query image to the cloud; next, the cloud compares the encrypted features and returns encrypted images with similar content to the user. To find the nearest neighbor in the encrypted features, an encryption with multiple keys is proposed, in which the query feature of each user is encrypted by his/her own key. To improve the key security and space utilization, global optimization and Gaussian distribution are, respectively, employed to generate multiple keys. The experiments show that the proposed encryption can provide effective and secure image retrieval for each user and ensure confidentiality of the query feature of each user.
A practical guide to assessing clinical decision-making skills using the key features approach.
Farmer, Elizabeth A; Page, Gordon
2005-12-01
This paper in the series on professional assessment provides a practical guide to writing key features problems (KFPs). Key features problems test clinical decision-making skills in written or computer-based formats. They are based on the concept of critical steps or 'key features' in decision making and represent an advance on the older, less reliable patient management problem (PMP) formats. The practical steps in writing these problems are discussed and illustrated by examples. Steps include assembling problem-writing groups, selecting a suitable clinical scenario or problem and defining its key features, writing the questions, selecting question response formats, preparing scoring keys, reviewing item quality and item banking. The KFP format provides educators with a flexible approach to testing clinical decision-making skills with demonstrated validity and reliability when constructed according to the guidelines provided.
14 CFR 171.27 - Performance requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
...” (Annex 10 to the Convention on International Civil Aviation), except that identification by on-off keying... electronic engineering practices for the desired service. (c) Ground inspection consists of an examination of the design features of the equipment to determine (based on recognized and accepted good engineering...
Detection and quantification of flow consistency in business process models.
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara
2018-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
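As an illustration of how a flow-direction feature can be turned into a computable metric, the sketch below scores the share of edges pointing roughly left-to-right in a toy layout; the node positions, tolerance angle, and single-direction view are simplifications of the three metrics proposed in the paper:

```python
# Sketch: a simple flow-direction consistency score over a model layout.
import math

# Hypothetical layout: node name -> (x, y) position on the canvas.
positions = {"start": (0, 0), "A": (2, 0), "B": (4, 1), "C": (4, -1), "end": (6, 0)}
edges = [("start", "A"), ("A", "B"), ("A", "C"), ("B", "end"), ("C", "end")]

def flow_consistency(positions, edges, tolerance_deg=45):
    """Share of edges whose direction lies within `tolerance_deg` of left-to-right."""
    consistent = 0
    for src, dst in edges:
        dx = positions[dst][0] - positions[src][0]
        dy = positions[dst][1] - positions[src][1]
        angle = abs(math.degrees(math.atan2(dy, dx)))   # 0 degrees = pointing right
        if angle <= tolerance_deg:
            consistent += 1
    return consistent / len(edges)

print(f"flow consistency: {flow_consistency(positions, edges):.2f}")
```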
Vrkljan, Brenda H; Anaby, Dana
2011-02-01
Certain vehicle features can help drivers avoid collisions and/or protect occupants in the event of a crash, and therefore might play an important role when deciding which vehicle to purchase. The objective of this study was to examine the importance attributed to key vehicle features (including safety) that drivers consider when buying a car and its association with age and gender. A sample of 2,002 Canadian drivers aged 18 years and older completed a survey that asked them to rank the importance of eight vehicle features if they were to purchase a vehicle (storage, mileage, safety, price, comfort, performance, design, and reliability). ANOVA tests were performed to (a) determine whether there were differences in the level of importance between features and (b) examine the effect of age and gender on the importance attributed to these features. Of the features examined, safety and reliability were the most highly rated in terms of importance, whereas design and performance had the lowest ratings. Differences in safety and performance across age groups were dependent on gender. This effect was most evident in the youngest and oldest age groups. Safety and reliability were considered the most important features. Age and gender play a significant role in explaining the importance of certain features. Targeted efforts for translating safety-related information to the youngest and oldest consumers should be emphasized due to their high collision, injury, and fatality rates.
Passive solar water heating: breadbox design for the Fred Young Farm Labor Center in Indio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melzer, B; Maeda, B
1979-10-01
An appropriate passive solar preheater for multifamily housing units in the Fred Young Farm Labor Center in Indio, California, was designed and analyzed. A brief summary of passive preheater systems and the key design features used in current designs is presented. The design features necessary for the site requirements are described. The eight preliminary preheater designs reviewed for the project are presented. The results of thermal performance simulation for the eight prototype systems are discussed. Alternative monitoring systems for the installation are described and evaluated. The consultants' recommendations, working drawings, and performance estimates of the system selected are presented. (MHR)
Wan, Cen; Lees, Jonathan G; Minneci, Federico; Orengo, Christine A; Jones, David T
2017-10-01
Accurate gene or protein function prediction is a key challenge in the post-genome era. Most current methods perform well on molecular function prediction, but struggle to provide useful annotations relating to biological process functions due to the limited power of sequence-based features in that functional domain. In this work, we systematically evaluate the predictive power of temporal transcription expression profiles for protein function prediction in Drosophila melanogaster. Our results show significantly better performance on predicting protein function when transcription expression profile-based features are integrated with sequence-derived features, compared with the sequence-derived features alone. We also observe that the combination of expression-based and sequence-based features leads to further improvement of accuracy on predicting all three domains of gene function. Based on the optimal feature combinations, we then propose a novel multi-classifier-based function prediction method for Drosophila melanogaster proteins, FFPred-fly+. Interpreting our machine learning models also allows us to identify some of the underlying links between biological processes and developmental stages of Drosophila melanogaster.
TCPD: A micropattern photon detector hybrid for RICH applications
NASA Astrophysics Data System (ADS)
Hamar, G.; Varga, D.
2017-03-01
A micropattern and wire chamber hybrid has been constructed for UV photon detection, and its performance evaluated. It is revealed that such a combination retains some key advantages of both the Thick-GEM primary and CCC secondary amplification stages, and results in a high-gain gaseous photon detector with outstanding stability. Key features such as MIP suppression, detection efficiency and photon cluster size are discussed. The capability of the detector for UV photon detection has been established and proven with Cherenkov photons in particle beam tests.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality in which electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to abnormalities in the heart. This paper consists of three major steps for the detection of heart disease: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. A Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.
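The coherence-feature-plus-classifier idea can be illustrated with a minimal sketch. The sketch below is an assumption-laden stand-in, not the paper's implementation: spectral coherence (scipy.signal.coherence) replaces wavelet coherence, scikit-learn's MLPClassifier replaces the Levenberg-Marquardt network, the Bat-algorithm feature selection is omitted, and the sampling rate, band averaging and toy data are made up for illustration.

```python
# Minimal sketch of coherence-based ECG feature extraction and classification.
# Assumptions: spectral coherence stands in for wavelet coherence, MLPClassifier
# stands in for the Levenberg-Marquardt network, Bat-algorithm selection omitted.
import numpy as np
from scipy.signal import coherence
from sklearn.neural_network import MLPClassifier

FS = 360  # sampling rate in Hz (hypothetical)

def coherence_features(beat, template, n_bands=8):
    """Compare a candidate beat against a reference template and
    summarise the coherence spectrum into a few band averages."""
    f, cxy = coherence(beat, template, fs=FS, nperseg=128)
    bands = np.array_split(cxy, n_bands)
    return np.array([b.mean() for b in bands])

# Toy data: rows are single-beat ECG segments, labels 0 = normal, 1 = abnormal.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 1.2 * np.arange(512) / FS)
X = np.vstack([coherence_features(template + rng.normal(0, s, 512), template)
               for s in rng.choice([0.1, 1.0], size=200)])
y = (X.mean(axis=1) < np.median(X.mean(axis=1))).astype(int)  # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```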
NASA Technical Reports Server (NTRS)
1984-01-01
Kollmorgen Corporation's Mermaid II two person submersible is propeller-driven by a system of five DC brushless motors with new electronic controllers that originated in work performed in a NASA/DOE project managed by Lewis Research Center. A key feature of the system is electric commutation rather than mechanical commutation for converting AC current to DC.
NASA Astrophysics Data System (ADS)
Mohan, C.
In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.
Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, where four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated experimentally. The best result was achieved using the second facial key block, where it was shown that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716
List, Susan M; Starks, Nykole; Baum, John; Greene, Carmine; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Cuddihy, Robert
2011-01-01
Background This study evaluated performance and product labeling of CONTOUR® USB, a new blood glucose monitoring system (BGMS) with integrated diabetes management software and a universal serial bus (USB) port, in the hands of untrained lay users and health care professionals (HCPs). Method Subjects and HCPs tested subject's finger stick capillary blood in parallel using CONTOUR USB meters; deep finger stick blood was tested on a Yellow Springs Instruments (YSI) glucose analyzer for reference. Duplicate results by both subjects and HCPs were obtained to assess system precision. System accuracy was assessed according to International Organization for Standardization (ISO) 15197:2003 guidelines [within ±15 mg/dl of mean YSI results (samples <75 mg/dl) and ±20% (samples ≥75 mg/dl)]. Clinical accuracy was determined by Parkes error grid analysis. Subject labeling comprehension was assessed by HCP ratings of subject proficiency. Key system features and ease-of-use were evaluated by subject questionnaires. Results All subjects who completed the study (N = 74) successfully performed blood glucose measurements, connected the meter to a laptop computer, and used key features of the system. The system was accurate; 98.6% (146/148) of subject results and 96.6% (143/148) of HCP results exceeded ISO 15197:2003 criteria. All subject and HCP results were clinically accurate (97.3%; zone A) or associated with benign errors (2.7%; zone B). The majority of subjects rated features of the BGMS as “very good” or “excellent.” Conclusions CONTOUR USB exceeded ISO 15197:2003 system performance criteria in the hands of untrained lay users. Subjects understood the product labeling, found the system easy to use, and successfully performed blood glucose testing. PMID:22027308
List, Susan M; Starks, Nykole; Baum, John; Greene, Carmine; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Cuddihy, Robert
2011-09-01
This study evaluated performance and product labeling of CONTOUR® USB, a new blood glucose monitoring system (BGMS) with integrated diabetes management software and a universal serial bus (USB) port, in the hands of untrained lay users and health care professionals (HCPs). Subjects and HCPs tested subject's finger stick capillary blood in parallel using CONTOUR USB meters; deep finger stick blood was tested on a Yellow Springs Instruments (YSI) glucose analyzer for reference. Duplicate results by both subjects and HCPs were obtained to assess system precision. System accuracy was assessed according to International Organization for Standardization (ISO) 15197:2003 guidelines [within ±15 mg/dl of mean YSI results (samples <75 mg/dl) and ±20% (samples ≥75 mg/dl)]. Clinical accuracy was determined by Parkes error grid analysis. Subject labeling comprehension was assessed by HCP ratings of subject proficiency. Key system features and ease-of-use were evaluated by subject questionnaires. All subjects who completed the study (N = 74) successfully performed blood glucose measurements, connected the meter to a laptop computer, and used key features of the system. The system was accurate; 98.6% (146/148) of subject results and 96.6% (143/148) of HCP results exceeded ISO 15197:2003 criteria. All subject and HCP results were clinically accurate (97.3%; zone A) or associated with benign errors (2.7%; zone B). The majority of subjects rated features of the BGMS as "very good" or "excellent." CONTOUR USB exceeded ISO 15197:2003 system performance criteria in the hands of untrained lay users. Subjects understood the product labeling, found the system easy to use, and successfully performed blood glucose testing. © 2011 Diabetes Technology Society.
Progress In Fresnel-Köhler Concentrators
NASA Astrophysics Data System (ADS)
Mohedano, Rubén; Cvetković, Aleksandra; Benítez, Pablo; Chaves, Julio; Miñano, Juan C.; Zamora, Pablo; Hernandez, Maikel; Vilaplana, Juan
2011-12-01
The Fresnel Köhler (FK) concentrator was first presented in 2008. Since then, various CPV companies have adopted this technology as the base for their future commercial products. The key to this rapid penetration is a mixture of simplicity (the FK is essentially a Fresnel lens concentrator, a technology that dominates the market) and excellent performance: high concentration without giving up large manufacturing/aiming tolerances, enabling high efficiency even at the array level. All these features together have great potential to lower energy costs. This work shows recent results and progress regarding this device, covering new design features, measurements and tests along with first performance achievements at the array level (pilot 6.5 kWp plant). The work also discusses the potential impact of the FK's enhanced performance on the Levelized Cost Of Electricity (LCOE).
Key Program Features to Enhance the School-to-Career Transition for Youth with Disabilities
ERIC Educational Resources Information Center
Doren, Bonnie; Yan, Min-Chi; Tu, Wei-Mo
2013-01-01
The purpose of the article was to identify key features within research-based school-to-career programs that were linked to positive employment outcomes for youth with disabilities. Three key program features were identified and discussed that could be incorporated into the practices and programs of schools and communities to support the employment…
Zeid, Elias Abou; Sereshkeh, Alborz Rezazadeh; Chau, Tom
2016-12-01
In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
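The three-stage PSTF pipeline (band-pass filtering, Fisher-criterion spatial filtering, temporal window averaging) can be sketched in a few lines. The following is a minimal sketch under stated assumptions, not the authors' code: the filter band, window count, regularization and toy data are invented, and the per-participant cross-validation over these parameters described in the paper is omitted for brevity.

```python
# Minimal sketch of a PSTF-style pipeline: band-pass filtering, a Fisher-criterion
# spatial filter, and temporal window averaging. Parameters and data are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

FS = 250  # sampling rate in Hz (hypothetical)

def bandpass(X, lo=0.1, hi=4.0, order=4):
    # X: trials x channels x samples
    b, a = butter(order, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)

def fisher_spatial_filter(X, y, n_filters=4):
    """Maximise between-class over within-class channel scatter."""
    means = [X[y == c].mean(axis=(0, 2)) for c in (0, 1)]        # per-class channel means
    grand = X.mean(axis=(0, 2))
    Sb = sum(np.outer(m - grand, m - grand) for m in means)       # between-class scatter
    Sw = sum(np.cov(X[y == c].transpose(1, 0, 2).reshape(X.shape[1], -1))
             for c in (0, 1))                                     # within-class scatter
    _, W = eigh(Sb, Sw + 1e-6 * np.eye(Sb.shape[0]))              # generalized eigenproblem
    return W[:, -n_filters:]                                      # top eigenvectors

def window_average(X, n_windows=8):
    return np.stack([w.mean(axis=-1) for w in np.array_split(X, n_windows, axis=-1)], -1)

# Toy EEG: 100 trials, 16 channels, 2 s epochs, binary laterality labels.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 16, 2 * FS)), rng.integers(0, 2, 100)
Xf = bandpass(X)
W = fisher_spatial_filter(Xf, y)
features = window_average(np.einsum("ck,tcs->tks", W, Xf)).reshape(100, -1)
print(features.shape)  # (trials, n_filters * n_windows)
```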
NASA Astrophysics Data System (ADS)
Abou Zeid, Elias; Rezazadeh Sereshkeh, Alborz; Chau, Tom
2016-12-01
Objective. In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. Approach. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. Main results. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Significance. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines
NASA Astrophysics Data System (ADS)
Wang, Bin; Zhao, Haocen; Ye, Zhifeng
2017-08-01
Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the components' physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.
Motor programming when sequencing multiple elements of the same duration.
Magnuson, Curt E; Robin, Donald A; Wright, David L
2008-11-01
The self-select paradigm was adopted in 2 experiments to examine the processing demands of 2 independent motor-programming processes. One process (INT) is responsible for organizing the internal features of the individual elements in a movement (e.g., response duration). The 2nd process (SEQ) is responsible for placing the elements into the proper serial order before execution. Participants in Experiment 1 performed tasks involving 1 key press or sequences of 4 key presses of the same duration. Implementing INT and SEQ was more time consuming for key-pressing sequences than for single key-press tasks. Experiment 2 examined whether the INT costs resulting from the increase in sequence length observed in Experiment 1 arose from independent planning of each sequence element or from a separate "multiplier" process that handled repetitions of elements of the same duration. Findings from Experiment 2, in which participants performed single key presses or double or triple key sequences of the same duration, suggested that INT is involved with the independent organization of each element contained in the sequence. The researchers offer an elaboration of the 2-process account of motor programming to incorporate the present findings and the findings from other recent sequence-learning research.
Dynamic Metasurface Aperture as Smart Around-the-Corner Motion Detector.
Del Hougne, Philipp; F Imani, Mohammadreza; Sleasman, Timothy; Gollub, Jonah N; Fink, Mathias; Lerosey, Geoffroy; Smith, David R
2018-04-25
Detecting and analysing motion is a key feature of Smart Homes and the connected sensor vision they embrace. At present, most motion sensors operate in line-of-sight Doppler shift schemes. Here, we propose an alternative approach suitable for indoor environments, which effectively constitute disordered cavities for radio frequency (RF) waves; we exploit the fundamental sensitivity of modes of such cavities to perturbations, caused here by moving objects. We establish experimentally three key features of our proposed system: (i) ability to capture the temporal variations of motion and discern information such as periodicity ("smart"), (ii) non line-of-sight motion detection, and (iii) single-frequency operation. Moreover, we explain theoretically and demonstrate experimentally that the use of dynamic metasurface apertures can substantially enhance the performance of RF motion detection. Potential applications include accurately detecting human presence and monitoring inhabitants' vital signs.
3D model retrieval method based on mesh segmentation
NASA Astrophysics Data System (ADS)
Gan, Yuanchao; Tang, Yan; Zhang, Qingchen
2012-04-01
In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features of the model. For this reason, they perform less effectively on models with similar global shape but different local shape. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structure feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system that realizes this approach was built and tested on a database of 200 objects and achieved the expected results. The results show that the proposed algorithm effectively improves precision and recall.
ERIC Educational Resources Information Center
Llamazares, Ivan
2005-01-01
This article explores how the interlocking of formal and informal political institutions has affected the dynamics and performance of the Argentine democracy. Key institutional features of the Argentine political system have been a competitive form of federalism and loosely structured political parties that are not ideologically unified,…
Conceptualizing Teacher Professional Identity in Neoliberal Times: Resistance, Compliance and Reform
ERIC Educational Resources Information Center
Hall, David; McGinity, Ruth
2015-01-01
This article examines the dramatic implications of the turn towards neo-liberal education policies for teachers' professional identities. It begins with an analysis of some of the key features of this policy shift including marketization, metricization and managerialism and the accompanying elevation of performativity. This is followed by a…
Targeted Information Dissemination
2008-03-01
SETUP, RESULTS AND PERFORMANCE ANALYSIS; APPENDIX C: TID API SPECIFICATION. ... are developed using FreePastry, which provides an API for a structured P2P overlay network. Information routing and address resolution is ... the TID architecture to demonstrate its key features. TID interface API specifications are described in Appendix C. RSS feeds were used to obtain
Acceleration-Augmented LQG Control of an Active Magnetic Bearing
NASA Technical Reports Server (NTRS)
Feeley, Joseph J.
1993-01-01
A linear-quadratic-gaussian (LQG) regulator controller design for an acceleration-augmented active magnetic bearing (AMB) is outlined. Acceleration augmentation is a key feature in providing improved dynamic performance of the controller. The optimal control formulation provides a convenient method of trading-off fast transient response and force attenuation as control objectives.
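The regulator half of such a design can be sketched with a standard continuous-time Riccati solve. The sketch below is illustrative only: the one-axis bearing model, weights and gains are assumptions rather than values from the paper, the acceleration augmentation is not modelled, and a full LQG controller would pair this gain with a Kalman-filter state estimator.

```python
# Minimal sketch of the regulator half of an LQG design for a toy one-axis
# active magnetic bearing. Plant parameters and weights are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

m, ks, ki = 2.0, 4.0e5, 50.0   # rotor mass [kg], negative stiffness [N/m], current gain [N/A]

# State x = [position, velocity]; input u = coil current. The paper's acceleration
# augmentation (feeding measured acceleration to the controller) is not modelled here.
A = np.array([[0.0, 1.0],
              [ks / m, 0.0]])
B = np.array([[0.0],
              [ki / m]])

Q = np.diag([1e6, 1.0])        # penalise position error heavily
R = np.array([[0.1]])          # penalise coil current

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain, u = -K x
print("LQR gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```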
Feasibility of an International Multiple Sclerosis Rehabilitation Data Repository
Bradford, Elissa Held; Baert, Ilse; Finlayson, Marcia; Feys, Peter
2018-01-01
Abstract Background: Multiple sclerosis (MS) rehabilitation evidence is limited due to methodological factors, which may be addressed by a data repository. We describe the perceived challenges of, motivators for, interest in participating in, and key features of an international MS rehabilitation data repository. Methods: A multimethod sequential investigation was performed with the results of two focus groups, using nominal group technique, and study aims informing the development of an online questionnaire. Percentage agreement and key quotations illustrated questionnaire findings. Subgroup comparisons were made between clinicians and researchers and between participants in North America and Europe. Results: Rehabilitation professionals from 25 countries participated (focus groups: n = 21; questionnaire: n = 166). The top ten challenges (C) and motivators (M) identified by the focus groups were database control/management (C); ethical/legal concerns (C); data quality (C); time, effort, and cost (C); best practice (M); uniformity (C); sustainability (C); deeper analysis (M); collaboration (M); and identifying research needs (M). Percentage agreement with questionnaire statements regarding challenges to, motivators for, interest in, and key features of a successful repository was at least 80%, 85%, 72%, and 83%, respectively, across each group of statements. Questionnaire subgroup analysis revealed a few differences (P < .05), including that clinicians more strongly identified with improving best practice as a motivator. Conclusions: Findings support clinician and researcher interest in and potential for success of an international MS rehabilitation data repository if prioritized challenges and motivators are addressed and key features are included. PMID:29507539
Bradford, Elissa Held; Baert, Ilse; Finlayson, Marcia; Feys, Peter; Wagner, Joanne
2018-01-01
Multiple sclerosis (MS) rehabilitation evidence is limited due to methodological factors, which may be addressed by a data repository. We describe the perceived challenges of, motivators for, interest in participating in, and key features of an international MS rehabilitation data repository. A multimethod sequential investigation was performed with the results of two focus groups, using nominal group technique, and study aims informing the development of an online questionnaire. Percentage agreement and key quotations illustrated questionnaire findings. Subgroup comparisons were made between clinicians and researchers and between participants in North America and Europe. Rehabilitation professionals from 25 countries participated (focus groups: n = 21; questionnaire: n = 166). The top ten challenges (C) and motivators (M) identified by the focus groups were database control/management (C); ethical/legal concerns (C); data quality (C); time, effort, and cost (C); best practice (M); uniformity (C); sustainability (C); deeper analysis (M); collaboration (M); and identifying research needs (M). Percentage agreement with questionnaire statements regarding challenges to, motivators for, interest in, and key features of a successful repository was at least 80%, 85%, 72%, and 83%, respectively, across each group of statements. Questionnaire subgroup analysis revealed a few differences (P < .05), including that clinicians more strongly identified with improving best practice as a motivator. Findings support clinician and researcher interest in and potential for success of an international MS rehabilitation data repository if prioritized challenges and motivators are addressed and key features are included.
A User Authentication Scheme Based on Elliptic Curves Cryptography for Wireless Ad Hoc Networks
Chen, Huifang; Ge, Linlin; Xie, Lei
2015-01-01
The feature of non-infrastructure support in a wireless ad hoc network (WANET) makes it suffer from various attacks. Moreover, user authentication is the first safety barrier in a network. A mutual trust is achieved by a protocol which enables communicating parties to authenticate each other at the same time and to exchange session keys. For the resource-constrained WANET, an efficient and lightweight user authentication scheme is necessary. In this paper, we propose a user authentication scheme based on the self-certified public key system and elliptic curves cryptography for a WANET. Using the proposed scheme, an efficient two-way user authentication and secure session key agreement can be achieved. Security analysis shows that our proposed scheme is resilient to common known attacks. In addition, the performance analysis shows that our proposed scheme performs similar or better compared with some existing user authentication schemes. PMID:26184224
A User Authentication Scheme Based on Elliptic Curves Cryptography for Wireless Ad Hoc Networks.
Chen, Huifang; Ge, Linlin; Xie, Lei
2015-07-14
The feature of non-infrastructure support in a wireless ad hoc network (WANET) makes it suffer from various attacks. Moreover, user authentication is the first safety barrier in a network. A mutual trust is achieved by a protocol which enables communicating parties to authenticate each other at the same time and to exchange session keys. For the resource-constrained WANET, an efficient and lightweight user authentication scheme is necessary. In this paper, we propose a user authentication scheme based on the self-certified public key system and elliptic curves cryptography for a WANET. Using the proposed scheme, an efficient two-way user authentication and secure session key agreement can be achieved. Security analysis shows that our proposed scheme is resilient to common known attacks. In addition, the performance analysis shows that our proposed scheme performs similar or better compared with some existing user authentication schemes.
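To illustrate the elliptic-curve session-key agreement at the heart of such schemes, here is a minimal sketch using plain (unauthenticated) ECDH from the Python cryptography package. This is not the paper's self-certified public key scheme: the mutual-authentication step and the WANET-specific protocol messages are out of scope, and the curve, key length and info label are arbitrary choices.

```python
# Minimal sketch of elliptic-curve session-key agreement between two nodes,
# using plain ECDH. The paper's self-certified keys and mutual authentication
# are not reproduced; this only illustrates the session-key derivation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def make_keypair():
    priv = ec.generate_private_key(ec.SECP256R1())
    return priv, priv.public_key()

alice_priv, alice_pub = make_keypair()
bob_priv, bob_pub = make_keypair()

def session_key(own_priv, peer_pub, info=b"wanet-session"):
    shared = own_priv.exchange(ec.ECDH(), peer_pub)           # raw shared secret
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=info).derive(shared)           # 256-bit session key

# Both sides derive the same symmetric key from their own private key
# and the peer's public key.
assert session_key(alice_priv, bob_pub) == session_key(bob_priv, alice_pub)
print("session key established")
```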
Links between social environment and health care utilization and costs.
Brault, Marie A; Brewster, Amanda L; Bradley, Elizabeth H; Keene, Danya; Tan, Annabel X; Curry, Leslie A
2018-01-01
The social environment influences health outcomes for older adults and could be an important target for interventions to reduce costly medical care. We sought to understand which elements of the social environment distinguish communities that achieve lower health care utilization and costs from communities that experience higher health care utilization and costs for older adults with complex needs. We used a sequential explanatory mixed methods approach. We classified community performance based on three outcomes: rate of hospitalizations for ambulatory care sensitive conditions, all-cause risk-standardized hospital readmission rates, and Medicare spending per beneficiary. We conducted in-depth interviews with key informants (N = 245) from organizations providing health or social services. Higher performing communities were distinguished by several aspects of social environment, and these features were lacking in lower performing communities: 1) strong informal support networks; 2) partnerships between faith-based organizations and health care and social service organizations; and 3) grassroots organizing and advocacy efforts. Higher performing communities share similar social environmental features that complement the work of health care and social service organizations. Many of the supportive features and programs identified in the higher performing communities were developed locally and with limited governmental funding, providing opportunities for improvement.
Interictal epileptiform discharge characteristics underlying expert interrater agreement.
Bagheri, Elham; Dauwels, Justin; Dean, Brian C; Waters, Chad G; Westover, M Brandon; Halford, Jonathan J
2017-10-01
The presence of interictal epileptiform discharges (IED) in the electroencephalogram (EEG) is a key finding in the medical workup of a patient with suspected epilepsy. However, inter-rater agreement (IRA) regarding the presence of IED is imperfect, leading to incorrect and delayed diagnoses. An improved understanding of which IED attributes mediate expert IRA might help in developing automatic methods for IED detection able to emulate the abilities of experts. Therefore, using a set of IED scored by a large number of experts, we set out to determine which attributes of IED predict expert agreement regarding the presence of IED. IED were annotated on a 5-point scale by 18 clinical neurophysiologists within 200 30-s EEG segments from recordings of 200 patients. 5538 signal analysis features were extracted from the waveforms, including wavelet coefficients, morphological features, signal energy, nonlinear energy operator response, electrode location, and spectrogram features. Feature selection was performed by applying elastic net regression and support vector regression (SVR) was applied to predict expert opinion, with and without the feature selection procedure and with and without several types of signal normalization. Multiple types of features were useful for predicting expert annotations, but particular types of wavelet features performed best. Local EEG normalization also enhanced best model performance. As the size of the group of EEGers used to train the models was increased, the performance of the models leveled off at a group size of around 11. The features that best predict inter-rater agreement among experts regarding the presence of IED are wavelet features, using locally standardized EEG. Our models for predicting expert opinion based on EEGer's scores perform best with a large group of EEGers (more than 10). By examining a large group of EEG signal analysis features we found that wavelet features with certain wavelet basis functions performed best to identify IEDs. Local normalization also improves predictability, suggesting the importance of IED morphology over amplitude-based features. Although most IED detection studies in the past have used opinion from three or fewer experts, our study suggests a "wisdom of the crowd" effect, such that pooling over a larger number of expert opinions produces a better correlation between expert opinion and objectively quantifiable features of the EEG. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
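The select-then-regress idea (elastic net to pick features, SVR to predict the pooled expert score) can be sketched briefly. The sketch below uses synthetic placeholder data, not EEG features, and does not reproduce the paper's wavelet features or local normalization; the dimensions and hyperparameters are assumptions.

```python
# Minimal sketch of elastic-net feature selection followed by support vector
# regression, as a stand-in for predicting mean expert IED scores from a large
# feature matrix. Data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_events, n_features = 300, 500                  # candidate IEDs x signal-analysis features
X = rng.normal(size=(n_events, n_features))
true_w = np.zeros(n_features); true_w[:10] = 1.0
y = X @ true_w + rng.normal(0, 0.5, n_events)    # stand-in for pooled expert score

model = make_pipeline(
    SelectFromModel(ElasticNetCV(l1_ratio=0.5, cv=5)),   # keep features with nonzero weight
    SVR(kernel="rbf", C=1.0),
)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```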
Feature inference with uncertain categorization: Re-assessing Anderson's rational model.
Konovalova, Elizaveta; Le Mens, Gaël
2017-09-18
A key function of categories is to help predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption-it assumes that the within category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
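The prediction rule at issue is usually written as a single mixture over candidate categories; the sketch below gives the generic textbook form (not notation taken from this paper) and makes the conditional independence assumption explicit.

```latex
% Prediction rule of Anderson's rational model (generic textbook form):
% the probability of an unobserved feature value j given observed features F
% is a mixture over candidate categories k.
\[
  P(j \mid F) \;=\; \sum_{k} P(k \mid F)\, P(j \mid k),
\]
% where conditional independence means that, within a category k, feature j is
% assumed independent of the observed features: P(j \mid k, F) = P(j \mid k).
```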
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-18
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.
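The convolution-plus-LSTM idea translates into a short PyTorch sketch: temporal convolutions extract local features from raw multichannel sensor windows, an LSTM models their dynamics, and a linear layer classifies. The layer sizes, channel count and window length below are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal PyTorch sketch of a convolutional + LSTM activity recogniser.
# Layer sizes are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class ConvLSTM_HAR(nn.Module):
    def __init__(self, n_channels=113, n_classes=18, conv_filters=64, lstm_units=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_filters, lstm_units, num_layers=2, batch_first=True)
        self.head = nn.Linear(lstm_units, n_classes)

    def forward(self, x):                      # x: (batch, time, channels) raw sensor window
        z = self.conv(x.transpose(1, 2))       # -> (batch, filters, time)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, time, lstm_units)
        return self.head(out[:, -1])           # classify from the last time step

# Toy forward pass: batch of 8 windows, 24 time steps, 113 sensor channels.
model = ConvLSTM_HAR()
logits = model(torch.randn(8, 24, 113))
print(logits.shape)                            # torch.Size([8, 18])
```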
Towards a taxonomy for integrated care: a mixed-methods study
Valentijn, Pim P.; Boesveld, Inge C.; van der Klauw, Denise M.; Ruwaard, Dirk; Struijs, Jeroen N.; Molema, Johanna J.W.; Bruijnzeels, Marc A.; Vrijhoef, Hubertus JM.
2015-01-01
Introduction Building integrated services in a primary care setting is considered an essential important strategy for establishing a high-quality and affordable health care system. The theoretical foundations of such integrated service models are described by the Rainbow Model of Integrated Care, which distinguishes six integration dimensions (clinical, professional, organisational, system, functional and normative integration). The aim of the present study is to refine the Rainbow Model of Integrated Care by developing a taxonomy that specifies the underlying key features of the six dimensions. Methods First, a literature review was conducted to identify features for achieving integrated service delivery. Second, a thematic analysis method was used to develop a taxonomy of key features organised into the dimensions of the Rainbow Model of Integrated Care. Finally, the appropriateness of the key features was tested in a Delphi study among Dutch experts. Results The taxonomy consists of 59 key features distributed across the six integration dimensions of the Rainbow Model of Integrated Care. Key features associated with the clinical, professional, organisational and normative dimensions were considered appropriate by the experts. Key features linked to the functional and system dimensions were considered less appropriate. Discussion This study contributes to the ongoing debate of defining the concept and typology of integrated care. This taxonomy provides a development agenda for establishing an accepted scientific framework of integrated care from an end-user, professional, managerial and policy perspective. PMID:25759607
Towards a taxonomy for integrated care: a mixed-methods study.
Valentijn, Pim P; Boesveld, Inge C; van der Klauw, Denise M; Ruwaard, Dirk; Struijs, Jeroen N; Molema, Johanna J W; Bruijnzeels, Marc A; Vrijhoef, Hubertus Jm
2015-01-01
Building integrated services in a primary care setting is considered an essential important strategy for establishing a high-quality and affordable health care system. The theoretical foundations of such integrated service models are described by the Rainbow Model of Integrated Care, which distinguishes six integration dimensions (clinical, professional, organisational, system, functional and normative integration). The aim of the present study is to refine the Rainbow Model of Integrated Care by developing a taxonomy that specifies the underlying key features of the six dimensions. First, a literature review was conducted to identify features for achieving integrated service delivery. Second, a thematic analysis method was used to develop a taxonomy of key features organised into the dimensions of the Rainbow Model of Integrated Care. Finally, the appropriateness of the key features was tested in a Delphi study among Dutch experts. The taxonomy consists of 59 key features distributed across the six integration dimensions of the Rainbow Model of Integrated Care. Key features associated with the clinical, professional, organisational and normative dimensions were considered appropriate by the experts. Key features linked to the functional and system dimensions were considered less appropriate. This study contributes to the ongoing debate of defining the concept and typology of integrated care. This taxonomy provides a development agenda for establishing an accepted scientific framework of integrated care from an end-user, professional, managerial and policy perspective.
Speech emotion recognition methods: A literature review
NASA Astrophysics Data System (ADS)
Basharirad, Babak; Moradhaseli, Mohammadreza
2017-10-01
Recently, research attention to emotional speech signals in human-machine interfaces has grown, driven by the availability of high computational capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selecting suitable feature sets, designing proper classification methods and preparing an appropriate dataset are the key issues in speech emotion recognition systems. This paper critically analyzes the currently available speech emotion recognition methods against three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper also evaluates the performance and limitations of available methods. Furthermore, it highlights promising directions for improving speech emotion recognition systems.
Web Services Security - Implementation and Evaluation Issues
NASA Astrophysics Data System (ADS)
Pimenidis, Elias; Georgiadis, Christos K.; Bako, Peter; Zorkadis, Vassilis
Web services development is a key theme in the utilization and commercial exploitation of the semantic web. Paramount to the development and offering of such services is the issue of security features and the way these are applied in instituting trust amongst participants and recipients of the service. Implementing such security features is a major challenge for developers, as they need to balance them against performance and interoperability requirements. Being able to evaluate the level of security offered is a desirable feature for any prospective participant. The authors address the issues of security requirements and evaluation criteria, and discuss the challenges of security implementation through a simple web service application case.
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications. PMID:27806075
Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue
2016-01-01
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.
Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu
2017-09-01
In computer vision systems, robustly reconstructing the complex 3D geometries of automobile castings is a challenging task: 3D scanning data are usually corrupted by noise and the scanning resolution is low, which normally leads to incomplete matching and drift. To solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be embedded directly into 3D space to learn the geometric feature representation. Finally, training labels for deep learning are generated automatically from an existing RGB-D reconstruction algorithm that uses the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse casting geometries with initial matching failures, the 3D object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
Object-based benefits without object-based representations.
Fougnie, Daryl; Cormiea, Sarah M; Alvarez, George A
2013-08-01
Influential theories of visual working memory have proposed that the basic units of memory are integrated object representations. Key support for this proposal is provided by the same object benefit: It is easier to remember multiple features of a single object than the same set of features distributed across multiple objects. Here, we replicate the object benefit but demonstrate that features are not stored as single, integrated representations. Specifically, participants could remember 10 features better when arranged in 5 objects compared to 10 objects, yet memory for one object feature was largely independent of memory for the other object feature. These results rule out the possibility that integrated representations drive the object benefit and require a revision of the concept of object-based memory representations. We propose that working memory is object-based in regard to the factors that enhance performance but feature based in regard to the level of representational failure. PsycINFO Database Record (c) 2013 APA, all rights reserved.
2013-01-01
Background Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists’ capacity to use these immunoassays to evaluate human clinical trials. Results The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose–response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Conclusions Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. PMID:23631706
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
2013-04-30
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists' capacity to use these immunoassays to evaluate human clinical trials. The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose-response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license.
Summary of the key features of seven biomathematical models of human fatigue and performance.
Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F
2004-03-01
Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
Summary of the key features of seven biomathematical models of human fatigue and performance
NASA Technical Reports Server (NTRS)
Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.
2004-01-01
BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. CONCLUSIONS: Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbely, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
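Since all seven surveyed tools trace back to the two-process model, a generic textbook form of that model is sketched below for orientation; the functional forms and parameters are a common simplification, not the equations of any specific model reviewed at the Workshop.

```latex
% Generic two-process model sketch (homeostatic process S and circadian process C);
% a common simplification, not the equations of any particular surveyed model.
\[
  S_{\text{wake}}(t) = \mu_u + \bigl(S_0 - \mu_u\bigr) e^{-t/\tau_r}, \qquad
  S_{\text{sleep}}(t) = \mu_l + \bigl(S_0 - \mu_l\bigr) e^{-t/\tau_d},
\]
\[
  C(t) = A \cos\!\left(\frac{2\pi (t - \phi)}{24}\right), \qquad
  \text{predicted alertness} \approx a\,S(t) + b\,C(t),
\]
% where S rises toward an upper asymptote \mu_u during wake and decays toward a
% lower asymptote \mu_l during sleep, with time constants \tau_r and \tau_d, and
% C is a 24-hour oscillation with amplitude A and phase \phi.
```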
Privacy protection schemes for fingerprint recognition systems
NASA Astrophysics Data System (ADS)
Marasco, Emanuela; Cukic, Bojan
2015-05-01
The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables low error rates to be preserved. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
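As a rough illustration of the transformation-based (cancelable-template) idea described above, the sketch below uses a keyed random projection as the non-invertible transform; the function names, dimensions and noise level are illustrative assumptions, not the specific schemes surveyed in the paper.

```python
import numpy as np

def cancelable_template(features, seed, out_dim=64):
    """Project a real-valued biometric feature vector with a keyed random matrix;
    a compromised template can be re-issued simply by changing the seed."""
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((out_dim, features.size))
    return np.sign(projection @ features)          # binarised, transformed template

def match(template_a, template_b):
    """Similarity in the transformed domain (fraction of agreeing bits)."""
    return float(np.mean(template_a == template_b))

# Toy usage: two noisy captures of the same finger, one application-specific seed.
rng = np.random.default_rng(0)
capture_1 = rng.standard_normal(256)
capture_2 = capture_1 + 0.1 * rng.standard_normal(256)   # small sensor noise
enrolled = cancelable_template(capture_1, seed=42)
probe = cancelable_template(capture_2, seed=42)
print(f"match score: {match(enrolled, probe):.2f}")
```

Because matching happens on the projected bits, a different application can use a different seed, and the stored template reveals nothing directly about the original fingerprint features.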
Efficient iris recognition by characterizing key local variations.
Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin
2004-06-01
Unlike other biometrics such as fingerprints and face, the distinct aspect of iris comes from randomly distributed features. This leads to its high reliability for personal identification, and at the same time, the difficulty in effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithm found in the current literature.
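A toy sketch of the exclusive-OR matching step described above, assuming the iris features have already been reduced to fixed-length binary codes; the code length, noise level and decision threshold are illustrative, and the wavelet-based detection of local sharp variations is not reproduced here.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes (XOR, then mean)."""
    return float(np.mean(np.bitwise_xor(code_a, code_b)))

# Hypothetical 2048-bit codes from two captures of the same iris.
rng = np.random.default_rng(1)
code_enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
noise = (rng.random(2048) < 0.05).astype(np.uint8)   # ~5% of bits flipped by noise
code_probe = np.bitwise_xor(code_enrolled, noise)

d = hamming_distance(code_enrolled, code_probe)
print("accept" if d < 0.32 else "reject", f"(distance = {d:.3f})")
```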
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowell, Larry Jonathan
Disclosed is a method and device for aligning at least two digital images. An embodiment may use frequency-domain transforms of small tiles created from each image to identify substantially similar, "distinguishing" features within each of the images, and then align the images together based on the location of the distinguishing features. To accomplish this, an embodiment may create equal-sized tile sub-images for each image. A "key" for each tile may be created by performing a frequency-domain transform calculation on each tile. An information-distance difference between each possible pair of tiles on each image may be calculated to identify distinguishing features. From analysis of the information-distance differences of the pairs of tiles, a subset of tiles with high discrimination metrics in relation to other tiles may be located for each image. The subset of distinguishing tiles for each image may then be compared to locate tiles with substantially similar keys and/or information-distance metrics to other tiles of other images. Once similar tiles are located for each image, the images may be aligned in relation to the identified similar tiles.
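A minimal sketch of the tiling-and-key idea summarised above, assuming FFT-magnitude vectors as tile keys and a plain Euclidean distance as a stand-in for the information-distance metric; the tile size, image size and function names are illustrative and do not come from the disclosure itself.

```python
import numpy as np

def tile_keys(image, tile=32):
    """Split an image into equal-sized tiles and compute a frequency-domain key per tile."""
    h, w = image.shape
    keys, positions = [], []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            keys.append(np.abs(np.fft.fft2(patch)).ravel())
            positions.append((r, c))
    return np.array(keys), positions

def most_distinctive(keys, positions, top=5):
    """Rank tiles by mean distance to all other tiles (a crude discrimination metric)."""
    d = np.linalg.norm(keys[:, None, :] - keys[None, :, :], axis=-1)
    order = np.argsort(d.mean(axis=1))[::-1][:top]
    return [positions[i] for i in order]

rng = np.random.default_rng(0)
img = rng.random((256, 256))                  # stand-in for a real digital image
print(most_distinctive(*tile_keys(img)))      # candidate "distinguishing" tile positions
```

Alignment would then proceed by pairing the most distinctive tiles of one image with the tiles of the other image whose keys are closest, and estimating the shift from their positions.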
CALiPER Report 23: Photometric Testing of White Tunable LED Luminaires
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-01-01
This report documents an initial investigation of photometric testing procedures for white-tunable LED luminaires and summarizes the key features of those products. Goals of the study include understanding the amount of testing required to characterize a white-tunable product, and documenting the performance of available color-tunable luminaires that are intended for architectural lighting.
K-12 Education in Germany: Curriculum and PISA 2015
ERIC Educational Resources Information Center
Atmacasoy, Abdullah
2017-01-01
Against the backdrop of PISA 2015 results, the aim of this study is to review basic structures of German education system by exploring curriculum development process, key features of each educational level and teacher education in order to grasp how Germany has amended her poor performance after PISA 2000 and persistently improved the quality of…
The Effects of Complexity, Accuracy, and Fluency on Communicative Adequacy in Oral Task Performance
ERIC Educational Resources Information Center
Révész, Andrea; Ekiert, Monika; Torgersen, Eivind Nessa
2016-01-01
Communicative adequacy is a key construct in second language research, as the primary goal of most language learners is to communicate successfully in real-world situations. Nevertheless, little is known about what linguistic features contribute to communicatively adequate speech. This study fills this gap by investigating the extent to which…
ERIC Educational Resources Information Center
National Alliance of Business, Inc., Washington, DC.
This booklet provides business leaders and coalitions with information and resources they can use to support charter schools in their own communities. Section 1 provides a brief overview of the charter school movement and discusses the key features of charter schools, which are self-managed public schools that operate through performance contracts…
ERIC Educational Resources Information Center
Education Trust, Washington, DC.
This annual report features national data on academic progress in U.S. public schools, showing student achievement and opportunity patterns from kindergarten through college, by race, ethnicity and family income. It focuses on academic achievement (reading performance on the most recent administration of the National Assessment of Educational…
NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.
Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan
2014-01-01
One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available.
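A compact sketch of the GENIE3-style regression decomposition with subsampled ensemble feature importances described above, showing only a random-forest importance rather than the full set of methods that NIMEFI rank-averages; the expression matrix, gene count and subsample settings are toy placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ensemble_importances(expr, n_subsamples=10, sample_frac=0.8, seed=0):
    """expr: (samples x genes) expression matrix. Returns a (genes x genes) matrix
    of rank-averaged importance scores for putative regulatory links."""
    rng = np.random.default_rng(seed)
    n_samples, n_genes = expr.shape
    scores = np.zeros((n_genes, n_genes))
    for target in range(n_genes):
        predictors = [g for g in range(n_genes) if g != target]
        ranks = np.zeros(len(predictors))
        for _ in range(n_subsamples):
            idx = rng.choice(n_samples, int(sample_frac * n_samples), replace=False)
            rf = RandomForestRegressor(n_estimators=100, random_state=0)
            rf.fit(expr[np.ix_(idx, predictors)], expr[idx, target])
            # convert importances to ranks so different subsample runs are comparable
            ranks += np.argsort(np.argsort(rf.feature_importances_))
        scores[predictors, target] = ranks / n_subsamples
    return scores

expr = np.random.default_rng(1).random((50, 10))   # toy 50-sample, 10-gene dataset
links = ensemble_importances(expr)
print(links.round(1))                              # entry (i, j): evidence for link i -> j
```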
Mammographic phenotypes of breast cancer risk driven by breast anatomy
NASA Astrophysics Data System (ADS)
Gastounioti, Aimilia; Oustimov, Andrew; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina
2017-03-01
Image-derived features of breast parenchymal texture patterns have emerged as promising risk factors for breast cancer, paving the way towards personalized recommendations regarding women's cancer risk evaluation and screening. The main steps to extract texture features of the breast parenchyma are the selection of regions of interest (ROIs) where texture analysis is performed, the texture feature calculation and the texture feature summarization in case of multiple ROIs. In this study, we incorporate breast anatomy in these three key steps by (a) introducing breast anatomical sampling for the definition of ROIs, (b) texture feature calculation aligned with the structure of the breast and (c) weighted texture feature summarization considering the spatial position and the underlying tissue composition of each ROI. We systematically optimize this novel framework for parenchymal tissue characterization in a case-control study with digital mammograms from 424 women. We also compare the proposed approach with a conventional methodology, not considering breast anatomy, recently shown to enhance the case-control discriminatory capacity of parenchymal texture analysis. The case-control classification performance is assessed using elastic-net regression with 5-fold cross validation, where the evaluation measure is the area under the curve (AUC) of the receiver operating characteristic. Upon optimization, the proposed breast-anatomy-driven approach demonstrated a promising case-control classification performance (AUC=0.87). In the same dataset, the performance of conventional texture characterization was found to be significantly lower (AUC=0.80, DeLong's test p-value<0.05). Our results suggest that breast anatomy may further leverage the associations of parenchymal texture features with breast cancer, and may therefore be a valuable addition in pipelines aiming to elucidate quantitative mammographic phenotypes of breast cancer risk.
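A brief sketch of the evaluation step described above (elastic-net regression with 5-fold cross-validation, scored by the area under the ROC curve), assuming the anatomy-weighted texture features have already been summarised into one vector per woman; the feature values and dimensionality here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((424, 30))   # 424 women, 30 summarised texture features (toy)
y = rng.integers(0, 2, 424)          # case/control labels (toy)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC over 5 folds: {auc.mean():.2f}")
```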
Utilizing feedback in adaptive SAR ATR systems
NASA Astrophysics Data System (ADS)
Horsfield, Owen; Blacknell, David
2009-05-01
Existing SAR ATR systems are usually trained off-line with samples of target imagery or CAD models, prior to conducting a mission. If the training data is not representative of mission conditions, then poor performance may result. In addition, it is difficult to acquire suitable training data for the many target types of interest. The Adaptive SAR ATR Problem Set (AdaptSAPS) program provides a MATLAB framework and image database for developing systems that adapt to mission conditions, meaning less reliance on accurate training data. A key function of an adaptive system is the ability to utilise truth feedback to improve performance, and it is this feature which AdaptSAPS is intended to exploit. This paper presents a new method for SAR ATR that requires no off-line training data and instead learns through supervised use of truth feedback during the mission. This is achieved by using feature-based classification, and several new shadow features have been developed for this purpose. These features allow discrimination of vehicles from clutter, and classification of vehicles into two classes: targets, comprising military combat types, and non-targets, comprising bulldozers and trucks. The performance of the system is assessed using three baseline missions provided with AdaptSAPS, as well as three additional missions. All performance metrics indicate a distinct learning trend over the course of a mission, with most third and fourth quartile performance levels exceeding 85% correct classification. It has been demonstrated that these performance levels can be maintained even when truth feedback rates are reduced by up to 55% over the course of a mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Juan; Liefer, Nathan C.; Busho, Colin R.
Here, the need for improved Critical Infrastructure and Key Resource (CIKR) security is unquestioned and there has been minimal emphasis on Level-0 (PHY Process) improvements. Wired Signal Distinct Native Attribute (WS-DNA) Fingerprinting is investigated here as a non-intrusive PHY-based security augmentation to support an envisioned layered security strategy. Results are based on experimental response collections from Highway Addressable Remote Transducer (HART) Differential Pressure Transmitter (DPT) devices from three manufacturers (Yokogawa, Honeywell, Endress+Hauser) installed in an automated process control system. Device discrimination is assessed using Time Domain (TD) and Slope-Based FSK (SB-FSK) fingerprints input to Multiple Discriminant Analysis, Maximum Likelihood (MDA/ML) and Random Forest (RndF) classifiers. For 12 different classes (two devices per manufacturer at two distinct set points), both classifiers performed reliably and achieved an arbitrary performance benchmark of average cross-class percent correct of %C > 90%. The least challenging cross-manufacturer results included near-perfect %C ≈ 100%, while the more challenging like-model (serial number) discrimination results included 90% < %C < 100%, with TD Fingerprinting marginally outperforming SB-FSK Fingerprinting; SB-FSK benefits from having less stringent response alignment and registration requirements. The RndF classifier was most beneficial and enabled reliable selection of dimensionally reduced fingerprint subsets that minimize data storage and computational requirements. The RndF-selected feature sets contained 15% of the full-dimensional feature sets and only suffered a worst-case %CΔ = 3% to 4% performance degradation.
Anomalous Cases of Astronaut Helmet Detection
NASA Technical Reports Server (NTRS)
Dolph, Chester; Moore, Andrew J.; Schubert, Matthew; Woodell, Glenn
2015-01-01
An astronaut's helmet is an invariant, rigid image element that is well suited for identification and tracking using current machine vision technology. Future space exploration will benefit from the development of astronaut detection software for search and rescue missions based on EVA helmet identification. However, helmets are solid white, except for metal brackets to attach accessories such as supplementary lights. We compared the performance of a widely used machine vision pipeline on a standard-issue NASA helmet with and without affixed experimental feature-rich patterns. Performance on the patterned helmet was far more robust. We found that four different feature-rich patterns are sufficient to identify a helmet and determine orientation as it is rotated about the yaw, pitch, and roll axes. During helmet rotation the field of view changes to frames containing parts of two or more feature-rich patterns. We took reference images in these locations to fill in detection gaps. These multiple feature-rich pattern references added substantial benefit to detection; however, they generated the majority of the anomalous cases. In these few instances, our algorithm keys in on one feature-rich pattern of the multiple feature-rich pattern reference and makes an incorrect prediction of the location of the other feature-rich patterns. We describe and make recommendations on ways to mitigate anomalous cases in which detection of one or more feature-rich patterns fails. While the number of cases is only a small percentage of the tested helmet orientations, they illustrate important design considerations for future spacesuits. In addition to our four successful feature-rich patterns, we present unsuccessful patterns and discuss the cause of their poor performance from a machine vision perspective. Future helmets designed with these considerations will enable automated astronaut detection and thereby enhance mission operations and extraterrestrial search and rescue.
Mixing console design for telematic applications in live performance and remote recording
NASA Astrophysics Data System (ADS)
Samson, David J.
The development of a telematic mixing console addresses audio engineers' need for a fully integrated system architecture that improves efficiency and control for applications such as distributed performance and remote recording. Current systems used in state of the art telematic performance rely on software-based interconnections with complex routing schemes that offer minimal flexibility or control over key parameters needed to achieve a professional workflow. The lack of hardware-based control in the current model limits the full potential of both the engineer and the system. The new architecture provides a full-featured platform that, alongside customary features, integrates (1) surround panning capability for motorized, binaural manikin heads, as well as all sources in the included auralization module, (2) self-labelling channel strips, responsive to change at all remote sites, (3) onboard roundtrip latency monitoring, (4) synchronized remote audio recording and monitoring, and (5) flexible routing. These features combined with robust parameter automation and precise analog control will raise the standard for telematic systems as well as advance the development of networked audio systems for both research and professional audio markets.
Multi-rate DPSK optical transceivers for free-space applications
NASA Astrophysics Data System (ADS)
Caplan, D. O.; Carney, J. J.; Fitzgerald, J. J.; Gaschits, I.; Kaminsky, R.; Lund, G.; Hamilton, S. A.; Magliocco, R. J.; Murphy, R. J.; Rao, H. G.; Spellmeyer, N. W.; Wang, J. P.
2014-03-01
We describe a flexible high-sensitivity laser communication transceiver design that can significantly benefit the performance and cost of NASA's satellite-based Laser Communications Relay Demonstration. Optical communications using differential phase shift keying, widely deployed for use in long-haul fiber-optic networks, is well known for its superior sensitivity and link performance over on-off keying, while maintaining a relatively straightforward design. However, unlike fiber-optic links, free-space applications often require operation over a wide dynamic range of power due to variations in link distance and channel conditions, which can include rapid kHz-class fading when operating through the turbulent atmosphere. Here we discuss the implementation of a robust, near-quantum-limited multi-rate DPSK transceiver, co-located transmitter and receiver subsystems that can operate efficiently over the highly-variable free-space channel. Key performance features will be presented on the master oscillator power amplifier (MOPA) based TX, including a wavelength-stabilized master laser, high-extinction-ratio burst-mode modulator, and 0.5 W single polarization power amplifier, as well as a low-noise optically preamplified DPSK receiver and built-in test capabilities.
Applications of artificial intelligence to digital photogrammetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kretsch, J.L.
1988-01-01
The aim of this research was to explore the application of expert systems to digital photogrammetry, specifically to photogrammetric triangulation, feature extraction, and photogrammetric problem solving. In 1987, prototype expert systems were developed for doing system startup, interior orientation, and relative orientation in the mensuration stage. The system explored means of performing diagnostics during the process. In the area of feature extraction, the relationship of metric uncertainty to symbolic uncertainty was the topic of research. Error propagation through the Dempster-Shafer formalism for representing evidence was performed in order to find the variance in the calculated belief values due to errors in the measurements made, together with the initial evidence needed to begin labeling observed image features with features in an object model. In photogrammetric problem solving, an expert system is under continuous development which seeks to solve photogrammetric problems using mathematical reasoning. The key to the approach used is the representation of knowledge directly in the form of equations, rather than in the form of if-then rules. Each variable in the equations is then treated as a goal to be solved.
Carpenter, Gail A; Gaddam, Sai Chaitanya
2010-04-01
Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.
Unconscious analyses of visual scenes based on feature conjunctions.
Tachibana, Ryosuke; Noguchi, Yasuki
2015-06-01
To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved).
A Deep Learning Approach for Fault Diagnosis of Induction Motors in Manufacturing
NASA Astrophysics Data System (ADS)
Shao, Si-Yu; Sun, Wen-Jun; Yan, Ru-Qiang; Wang, Peng; Gao, Robert X.
2017-11-01
Extracting features from original signals is a key procedure for traditional fault diagnosis of induction motors, as it directly influences the performance of fault recognition. However, high quality features need expert knowledge and human intervention. In this paper, a deep learning approach based on deep belief networks (DBN) is developed to learn features from frequency distribution of vibration signals with the purpose of characterizing working status of induction motors. It combines feature extraction procedure with classification task together to achieve automated and intelligent fault diagnosis. The DBN model is built by stacking multiple-units of restricted Boltzmann machine (RBM), and is trained using layer-by-layer pre-training algorithm. Compared with traditional diagnostic approaches where feature extraction is needed, the presented approach has the ability of learning hierarchical representations, which are suitable for fault classification, directly from frequency distribution of the measurement data. The structure of the DBN model is investigated as the scale and depth of the DBN architecture directly affect its classification performance. Experimental study conducted on a machine fault simulator verifies the effectiveness of the deep learning approach for fault diagnosis of induction motors. This research proposes an intelligent diagnosis method for induction motor which utilizes deep learning model to automatically learn features from sensor data and realize working status recognition.
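A simplified stand-in for the stacked-RBM idea described above, assuming scikit-learn's BernoulliRBM for layer-wise unsupervised feature learning followed by a logistic-regression read-out; the vibration spectra, layer sizes and class count are synthetic placeholders, and no joint fine-tuning of the stack is performed, so this is only a sketch of the DBN approach rather than the authors' model.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((600, 128))          # toy frequency spectra of vibration signals
y = rng.integers(0, 4, 600)         # four hypothetical motor health states

model = Pipeline([
    ("scale", MinMaxScaler()),      # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),   # supervised read-out layer
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```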
Key features of an EU health information system: a concept mapping study.
Rosenkötter, Nicole; Achterberg, Peter W; van Bon-Martens, Marja J H; Michelsen, Kai; van Oers, Hans A M; Brand, Helmut
2016-02-01
Despite the acknowledged value of an EU health information system (EU-HISys) and the many achievements in this field, the landscape is still heavily fragmented and incomplete. Through a systematic analysis of the opinions and valuations of public health stakeholders, this study aims to conceptualize key features of an EU-HISys. Public health professionals and policymakers were invited to participate in a concept mapping procedure. First, participants (N = 34) formulated statements that reflected their vision of an EU-HISys. Second, participants (N = 28) rated the relative importance of each statement and grouped conceptually similar ones. Principal Component and cluster analyses were used to condense these results to EU-HISys key features in a concept map. The number of key features and the labelling of the concept map were determined by expert consensus. The concept map contains 10 key features that summarize 93 statements. The map consists of a horizontal axis that represents the relevance of an 'organizational strategy', which deals with the 'efforts' to design and develop an EU-HISys and the 'achievements' gained by a functioning EU-HISys. The vertical axis represents the 'professional orientation' of the EU-HISys, ranging from the 'scientific' through to the 'policy' perspective. The top ranking statement expressed the need to establish a system that is permanent and sustainable. The top ranking key feature focuses on data and information quality. This study provides insights into key features of an EU-HISys. The results can be used to guide future planning and to support the development of a health information system for Europe. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
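A small sketch of the quantitative core of a concept-mapping analysis as outlined above (dimensionality reduction of participants' ratings followed by clustering into key features); the statement-by-participant matrix is random, and the cluster count of 10 simply mirrors the number of key features reported.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(93, 28))   # 93 statements rated 1-5 by 28 participants (toy)

coords = PCA(n_components=2).fit_transform(ratings)          # 2-D concept-map coordinates
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(coords)

for k in range(10):                                          # candidate "key features"
    members = np.where(clusters == k)[0]
    print(f"key feature {k}: statements {members[:5].tolist()} ...")
```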
Defining competency-based evaluation objectives in family medicine
Lawrence, Kathrine; Allen, Tim; Brailovsky, Carlos; Crichton, Tom; Bethune, Cheri; Donoff, Michel; Laughlin, Tom; Wetmore, Stephen; Carpentier, Marie-Pierre; Visser, Shaun
2011-01-01
Objective: To develop key features for priority topics previously identified by the College of Family Physicians of Canada that, together with skill dimensions and phases of the clinical encounter, broadly describe competence in family medicine. Design: Modified nominal group methodology, which was used to develop key features for each priority topic through an iterative process. Setting: The College of Family Physicians of Canada. Participants: An expert group of 7 family physicians and 1 educational consultant, all of whom had experience in assessing competence in family medicine. Group members represented the Canadian family medicine context with respect to region, sex, language, community type, and experience. Methods: The group used a modified Delphi process to derive a detailed operational definition of competence, using multiple iterations until consensus was achieved for the items under discussion. The group met 3 to 4 times a year from 2000 to 2007. Main findings: The group analyzed 99 topics and generated 773 key features. There were 2 to 20 (average 7.8) key features per topic; 63% of the key features focused on the diagnostic phase of the clinical encounter. Conclusion: This project expands previous descriptions of the process of generating key features for assessment, and removes this process from the context of written examinations. A key-features analysis of topics focuses on higher-order cognitive processes of clinical competence. The project did not define all the skill dimensions of competence to the same degree, but it clearly identified those requiring further definition. This work generates part of a discipline-specific, competency-based definition of family medicine for assessment purposes. It limits the domain for assessment purposes, which is an advantage for the teaching and assessment of learners. A validation study on the content of this work would ensure that it truly reflects competence in family medicine. PMID:21998245
NASA Technical Reports Server (NTRS)
1990-01-01
Evaluations directed towards defining optimal instrumentation for performing planetary polarization measurements from a spacecraft platform are summarized. An overview of the science rationale for polarimetric measurements is given to point out the importance of such measurements for future studies and exploration of the outer planets. The key instrument features required to perform the needed measurements are discussed and applied to the requirements for the Cassini mission to Saturn. The resultant conceptual design of a spectro-polarimeter photometer for Cassini is described in detail.
Human action recognition based on spatial-temporal descriptors using key poses
NASA Astrophysics Data System (ADS)
Hu, Shuo; Chen, Yuxin; Wang, Huaibao; Zuo, Yaqing
2014-11-01
Human action recognition is an important area of pattern recognition today due to its direct application in areas such as surveillance and virtual reality. In this paper, a simple and effective human action recognition method is presented based on key poses of the human silhouette and a spatio-temporal feature. Firstly, the contour points of the human silhouette are extracted, and the key poses are learned by means of K-means clustering based on the Euclidean distance between each contour point and the centre point of the human silhouette; the type of each action is then labeled for further matching. Secondly, we obtain the trajectory of the centre point across frames and create a spatio-temporal feature value, represented by W, to describe the motion direction and speed of each action. The value W contains the location and temporal order of each point on the trajectory. Finally, matching is performed by comparing the key poses and W between training and test sequences; the nearest neighbour sequence is found and its label supplies the final result. Experiments on the publicly available Weizmann dataset show the proposed method can improve accuracy by distinguishing amphibious poses and increase suitability for real-time applications by reducing the computational cost.
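A minimal sketch in the spirit of the key-pose step above: contour-to-centroid distance descriptors are clustered with K-means, and test sequences are matched to training sequences by nearest neighbour. The descriptor length, cluster count and the use of key-pose histograms (instead of the paper's W trajectory feature) are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def pose_descriptor(contour, centre, n_bins=36):
    """Distances from silhouette contour points to the centroid, resampled to a
    fixed length so poses from different frames are comparable."""
    d = np.linalg.norm(contour - centre, axis=1)
    d = d[np.linspace(0, len(d) - 1, n_bins).astype(int)]
    return d / (d.max() + 1e-9)

def learn_key_poses(descriptors, n_poses=8):
    """Cluster per-frame descriptors into key poses with K-means."""
    return KMeans(n_clusters=n_poses, n_init=10, random_state=0).fit(descriptors)

def classify(test_descriptors, km, train_histograms, train_labels):
    """Nearest-neighbour match on key-pose histograms of whole sequences."""
    hist = np.bincount(km.predict(test_descriptors), minlength=km.n_clusters)
    return train_labels[int(np.argmin(np.linalg.norm(train_histograms - hist, axis=1)))]

# Toy usage with random stand-in contours (one contour per frame).
rng = np.random.default_rng(0)
fake_contours = [rng.random((120, 2)) for _ in range(200)]
train_desc = np.array([pose_descriptor(c, c.mean(axis=0)) for c in fake_contours])
km = learn_key_poses(train_desc)
train_hist = np.array([np.bincount(km.predict(train_desc[i:i + 40]), minlength=8)
                       for i in range(0, 200, 40)])          # five 40-frame sequences
train_labels = np.array(["walk", "run", "jump", "wave", "bend"])
print(classify(train_desc[:40], km, train_hist, train_labels))
```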
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
The Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time, which constitutes a first approach to the automation of mapping processes such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, finding optimization values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
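A short sketch of the kind of explicit SIFT parameter tuning discussed above, assuming OpenCV's implementation; the parameter values, ratio-test threshold and file names are illustrative, not the optimised values reported in the paper.

```python
import cv2

def match_tie_points(img_a, img_b):
    """Detect SIFT features with explicitly chosen parameters and keep
    matches that pass Lowe's ratio test."""
    sift = cv2.SIFT_create(nfeatures=0,             # keep all detected keypoints
                           nOctaveLayers=3,
                           contrastThreshold=0.02,  # lower value retains more keypoints
                           edgeThreshold=10,
                           sigma=1.6)
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.75 * n.distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]

# Hypothetical multitemporal image files from different sources.
img_a = cv2.imread("epoch_2005.tif", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("epoch_2015.tif", cv2.IMREAD_GRAYSCALE)
print(len(match_tie_points(img_a, img_b)), "candidate tie-points")
```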
Kjellstrom, Tord; Briggs, David; Freyberg, Chris; Lemke, Bruno; Otto, Matthias; Hyatt, Olivia
2016-01-01
Ambient heat exposure is a well-known health hazard, which reduces human performance and work capacity at heat levels already common in tropical and subtropical areas. Various health problems have been reported. Increasing heat exposure during the hottest seasons of each year is a key feature of global climate change. Heat exhaustion and reduced human performance are often overlooked in climate change health impact analysis. Later this century, many among the four billion people who live in hot areas worldwide will experience significantly reduced work capacity owing to climate change. In some areas, 30-40% of annual daylight hours will become too hot for work to be carried out. The social and economic impacts will be considerable, with global gross domestic product (GDP) losses greater than 20% by 2100. The analysis to date is piecemeal. More analysis of climate change-related occupational health impact assessments is greatly needed.
Improving Diaper Performance for Extremely Low-Birth-Weight Infants.
Sanchez, Veronica; Maladen-Percy, Michelle; Gustin, Jennifer; Tally, Amy; Gibb, Roger; Ogle, Julie; Kenneally, Dianna C; Carr, Andrew N
2018-06-01
Extremely low-birth-weight (ELBW) infants face significant diapering challenges compared with their full-term peers, due to immature musculature, nervous system, and skin development. Advances in medical care have increased an ELBW infant's rate of survival, which creates a growing need for diapers to better serve these infants. Aim of research: The objective of this study was to identify and confirm the requirements for optimal diaper performance from the neonatal intensive care unit nurses' perspective, as well as to assess in-hospital performance to determine if new features improved key developmental care parameters. Two surveys were shared among nurses to address study objectives. Study 1 (N = 151) was designed for neonatal intensive care unit nurses to identify key requirements for ELBW diapers and rate the performance of existing ELBW diapers. Study 2 (N = 99) assessed in-hospital performance of the test diaper compared with the usual diaper, under normal usage conditions. Findings/results: The majority of nurses agreed that ELBW diapers must fit appropriately between the legs so that hips and legs are not spread apart and that ELBW diapers need to be flexible between the legs for positioning. Of the nurse-infant pair responses, 93% (P < .0001) preferred the test ELBW diaper over their usual diaper. Findings suggest that nurses should be included in the product design process to ensure both their needs and the needs of an infant are being met. Nurses are considering how diaper features may affect both acute and long-term medical outcomes, and this information provides necessary guidance to diaper manufacturers and designers when developing better-performing diapers.
Biometrics based key management of double random phase encoding scheme using error control codes
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2013-08-01
In this paper, an optical security system has been proposed in which key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to the biometric variation is corrected by encoding the key using the BCH code. A user specific shuffling key is used to increase the separation between genuine and impostor Hamming distance distribution. This shuffling key is then further secured using the RSA public key encryption to enhance the security of the system. XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA encoded shuffling key and the data obtained from the XOR operation are stored into a token. The main advantage of the present technique is that the key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
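A stripped-down illustration of the XOR binding step described above, with a trivial repetition code standing in for the BCH error-correcting code and the shuffling-key and RSA layers omitted; all sizes, names and error rates are illustrative.

```python
import numpy as np

def encode(key_bits, rep=3):
    """Toy repetition code standing in for BCH encoding."""
    return np.repeat(key_bits, rep)

def decode(bits, rep=3):
    """Majority vote per repeated group corrects sparse bit errors."""
    return (bits.reshape(-1, rep).sum(axis=1) > rep // 2).astype(np.uint8)

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 128, dtype=np.uint8)             # key bits to be protected
enrol_features = rng.integers(0, 2, 384, dtype=np.uint8)  # binarised biometric features

locked = np.bitwise_xor(encode(key), enrol_features)      # data stored in the token

# At verification, a fresh capture differs from enrolment in a few bits.
probe_features = enrol_features.copy()
flip = rng.choice(128, size=10, replace=False) * 3        # at most one error per code group
probe_features[flip] ^= 1

recovered = decode(np.bitwise_xor(locked, probe_features))
print("key recovered:", np.array_equal(recovered, key))
```

The key is retrievable only when both the token (holding the locked data) and a sufficiently close biometric capture are presented, which mirrors the simultaneous-presence property described in the abstract.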
Forsyth, J R; Riddiford-Harland, D L; Whitting, J W; Sheppard, J M; Steele, J R
2018-05-01
Although performing aerial maneuvers can increase wave score and winning potential in competitive surfing, the critical features underlying successful aerial performance have not been systematically investigated. This study aimed to analyze highly skilled aerial maneuver performance and to identify the critical features associated with successful or unsuccessful landing. Using video recordings of the World Surf League's Championship Tour, every aerial performed during the quarterfinal, semifinal, and final heats from the 11 events in the 2015 season was viewed. From this, 121 aerials were identified, with the Frontside Air (n = 15) and Frontside Air Reverse (n = 67) selected for qualitative assessment. Using chi-squared analyses, a series of key critical features, including landing over the center of the surfboard (FS Air χ2 = 14.00, FS Air Reverse χ2 = 26.61; P < .001) and landing with the lead ankle in dorsiflexion (FS Air χ2 = 3.90, FS Air Reverse χ2 = 13.64; P < .05), were found to be associated with successful landings. These critical features help surfers land in a stable position, while maintaining contact with the surfboard. The results of this study provide coaches with evidence to adjust the technique of their athletes to improve their winning potential. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Decoding visual object categories from temporal correlations of ECoG signals.
Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu
2014-04-15
How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone or combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
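A minimal sketch of decoding from between-electrode temporal correlations in the spirit of the study above; the ECoG trials are random placeholders and a linear SVM is assumed as the decoder, which is not necessarily the classifier used by the authors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def correlation_features(trials):
    """trials: (n_trials, n_electrodes, n_samples). Returns the upper-triangular
    part of each trial's electrode-by-electrode correlation matrix."""
    n_trials, n_elec, _ = trials.shape
    iu = np.triu_indices(n_elec, k=1)
    return np.array([np.corrcoef(t)[iu] for t in trials])

rng = np.random.default_rng(0)
trials = rng.standard_normal((120, 16, 500))   # 120 trials, 16 electrodes, 500 samples (toy)
labels = rng.integers(0, 2, 120)               # two object categories (toy)

X = correlation_features(trials)
print("cross-validated accuracy:",
      cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```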
Physical activity classification with dynamic discriminative methods.
Ray, Evan L; Sasaki, Jeffer E; Freedson, Patty S; Staudenmayer, John
2018-06-19
A person's physical activity has important health implications, so it is important to be able to measure aspects of physical activity objectively. One approach to doing that is to use data from an accelerometer to classify physical activity according to activity type (e.g., lying down, sitting, standing, or walking) or intensity (e.g., sedentary, light, moderate, or vigorous). This can be formulated as a labeled classification problem, where the model relates a feature vector summarizing the accelerometer signal in a window of time to the activity type or intensity in that window. These data exhibit two key characteristics: (1) the activity classes in different time windows are not independent, and (2) the accelerometer features have moderately high dimension and follow complex distributions. Through a simulation study and applications to three datasets, we demonstrate that a model's classification performance is related to how it addresses these aspects of the data. Dynamic methods that account for temporal dependence achieve better performance than static methods that do not. Generative methods that explicitly model the distribution of the accelerometer signal features do not perform as well as methods that take a discriminative approach to establishing the relationship between the accelerometer signal and the activity class. Specifically, Conditional Random Fields consistently have better performance than commonly employed methods that ignore temporal dependence or attempt to model the accelerometer features. © 2018, The International Biometric Society.
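A brief sketch of a Conditional Random Field applied to windowed accelerometer features, as favoured by the comparison above; it assumes the third-party sklearn-crfsuite package, and the window summaries, labels and hyperparameters are toy values rather than the authors' configuration.

```python
import numpy as np
import sklearn_crfsuite   # assumed third-party package (pip install sklearn-crfsuite)

def window_features(signal, width=100):
    """Summarise each window of a tri-axial accelerometer signal as a feature dict."""
    feats = []
    for start in range(0, len(signal) - width + 1, width):
        w = signal[start:start + width]
        feats.append({"mean_x": float(w[:, 0].mean()), "sd_x": float(w[:, 0].std()),
                      "mean_y": float(w[:, 1].mean()), "sd_y": float(w[:, 1].std()),
                      "mean_z": float(w[:, 2].mean()), "sd_z": float(w[:, 2].std())})
    return feats

rng = np.random.default_rng(0)
# Two toy recording sessions: each is a sequence of windows with activity labels.
X_train = [window_features(rng.standard_normal((1000, 3))) for _ in range(2)]
y_train = [["sitting"] * 5 + ["walking"] * 5 for _ in range(2)]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)          # learns both feature weights and label transitions
print(crf.predict(X_train)[0])
```

The transition weights learned by the CRF are what give the "dynamic" advantage described in the abstract: the predicted label for one window is informed by the labels of neighbouring windows.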
Is filtering difficulty the basis of attentional deficits in schizophrenia?
Ravizza, Susan M; Robertson, Lynn C; Carter, Cameron S; Nordahl, Thomas E; Salo, Ruth E
2007-06-30
The distractibility that schizophrenia patients display may be the result of a deficiency in filtering out irrelevant information. The aim of the current study was to assess whether patients with schizophrenia exhibit greater difficulty when task-irrelevant features change compared to healthy participants. Thirteen medicated outpatients with a diagnosis of schizophrenia and thirteen age- and parental education-matched controls performed a target selection task in which the task-relevant letter or the task-irrelevant features of color, and/or location repeated or switched. Participants were required to respond by pressing the appropriate key associated with the target letter. These patients with schizophrenia were slower when the task-relevant target letter switched than when it repeated. In contrast, schizophrenia patients performed similarly to controls when task-irrelevant information changed. Thus, we found no evidence that patients with schizophrenia were impaired in inhibiting irrelevant perceptual features. In contrast, changes in task-relevant features were problematic for patients relative to control participants. These results suggest that medicated outpatients who are mild to moderately symptomatic do not exhibit global impairments of feature processing. Instead, impairments are restricted to situations when task-relevant features vary. The current findings also suggest that when a course of action is not implied by an irrelevant feature, outpatients' behavior is not modulated by extraneous visual information any more than in healthy controls.
Nanoscale Morphology to Macroscopic Performance in Ultra High Molecular Weight Polyethylene Fibers
NASA Astrophysics Data System (ADS)
McDaniel, Preston B.
Ultra high molecular weight polyethylene (UHMWPE) fibers are increasingly used in high-performance applications where strength, stiffness, and the ability to dissipate energy are of critical importance. Despite their use in a variety of applications, the influence of morphological features at the meso/nanoscale on the macroscopic performance of the fibers has not been well understood. There is particular interest in gaining a better understanding of the nanoscale structure-property relationships in UHMWPE fibers used in ballistics applications. In order to accurately model and predict failure in the fiber, a more complete understanding of the complex load pathways that dictate the ways in which load is transferred through the fiber, across interfaces and length scales is required. The goal of the work discussed herein is to identify key meso/nanostructural features evolved in high-performance fibers and determine how these features influence the performance of the fiber through a variety of different loading mechanisms. The important structural features in high-performance UHMWPE fibers are first identified through examination of the meso/nanostructure of a series of fibers with different processing conditions. This is achieved primarily through the use of wide-angle x-ray diffraction (WAXD) and atomic force microscopy (AFM). Analysis of AFM images and WAXD data allows identification and quantification of important structural features at these length scales. Key meso/nanostructural features are then examined with respect to their influence on the transverse compression behavior of single fibers. Through post-mortem AFM analysis of samples at incremental compressive strains, the evolution of damage is examined and compared with macroscopic fiber mechanical response. It was found that collapse of mesoscale voids, followed by nanoscale fibrillation and reorganization of a fibrillar network, has a significant influence on the mechanical response of the fiber. Through this work, the importance of nanoscale fibril adhesive interactions is highlighted. However, very little information exists in the literature as to the nature and magnitude of these interactions. Examination of nanoscale fibrillar adhesive interactions is experimentally difficult, and necessitated the development of an AFM-based nanoscale splitting technique to quantify the interactions between fibrils. Through analysis of split geometry and careful partitioning of energies, the adhesive energy between fibrils in UHMWPE fibers is determined. The calculated average adhesive energies are significantly larger than the estimated energy due to van der Waals interactions, suggesting that there are physical connections (e.g., tie chains, tie fibrils, and lamellar crystalline bridges) that influence the interactions between fibrils. The interactions identified through this work are believed to be responsible for the creation of load pathways across fibril interfaces where load may be translated through the fiber in tension, compression, and shear. Finally, the nature of the mesoscale fibrillar network is explored through the development of a variable angle, single fiber peel test. This peel test enables the quantification of Mode I and Mode II peel energies. The modes of deformation observed in the peel test are representative of the mechanisms experienced during tensile and transverse compression loading.
The quantification of peel energies in both Mode I and Mode II failure highlights the importance of the fibrillar network as a key mechanism for the translation of load through the fiber. In both modes of failure, the fibril network acts as a framework for the orientation and subsequent failure of nanoscale fibrils.
ERIC Educational Resources Information Center
Rollock, Nicola
2007-01-01
The continued low academic attainment of Black pupils is now a well-established, familiar feature of the annual statistics of educational attainment. Black pupils tend to consistently perform below their white counterparts and below the national average. Key debates, examining how to address the difference in attainment gap, have tended to focus…
Motivation, Satisfaction, and Morale in Army Careers: A Review of Theory and Measurement
1976-12-01
subjective goals on performance. Their model of "task motivation" has the following key features (Locke, Cartledge, & Knerr, 1968, p. 135): I. The... pulling himself up in the world and should work hard with the hope of being promoted to a higher-level job. A man should choose the job which pays the
ERIC Educational Resources Information Center
Nasser, Ramzi; Carifio, James
The purpose of this study was to find out whether students perform differently on algebra word problems that have certain key context features and entail proportional reasoning, relative to their level of logical reasoning and their degree of field dependence/independence. Field-independent students tend to restructure and break stimuli into parts…
ERIC Educational Resources Information Center
Davies, Peter
The key features of student achievement were examined at twelve sixth form colleges (SFCs) in the United Kingdom with an emphasis on strategies to maintain and improve performance. It was found that students at SFCs have on average higher prior attainment and suffer less deprivation than their counterparts in General Further Education and Tertiary…
Enabling interspecies epigenomic comparison with CEpBrowser.
Cao, Xiaoyi; Zhong, Sheng
2013-05-01
We developed the Comparative Epigenome Browser (CEpBrowser) to allow the public to perform multi-species epigenomic analysis. The web-based CEpBrowser integrates, manages and visualizes sequencing-based epigenomic datasets. Five key features were developed to maximize the efficiency of interspecies epigenomic comparisons. CEpBrowser is a web application implemented with PHP, MySQL, C and Apache. URL: http://www.cepbrowser.org/.
Knowledge Discovery for Transonic Regional-Jet Wing through Multidisciplinary Design Exploration
NASA Astrophysics Data System (ADS)
Chiba, Kazuhisa; Obayashi, Shigeru; Morino, Hiroyuki
Data mining is an important facet of solving multi-objective optimization problems, because it is an effective way to discover design knowledge from the large volumes of data that such problems generate. In the present study, data mining has been performed for a large-scale and real-world multidisciplinary design optimization (MDO) to provide knowledge regarding the design space. The MDO among aerodynamics, structures, and aeroelasticity of the regional-jet wing was carried out using high-fidelity evaluation models on the adaptive range multi-objective genetic algorithm. As a result, nine non-dominated solutions were generated and used for tradeoff analysis among three objectives. All solutions evaluated during the evolution were analyzed for the tradeoffs and influence of design variables using a self-organizing map to extract key features of the design space. Although the MDO results showed the inverted gull-wings as non-dominated solutions, one of the key features found by data mining was the non-gull wing geometry. When this knowledge was applied to one optimum solution, the resulting design was found to have better performance compared with the original geometry designed in the conventional manner.
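A small sketch of using a self-organizing map to mine an archive of evaluated designs, in the spirit of the study above; it assumes the third-party MiniSom package, and the design-variable and objective columns are random placeholders rather than the actual MDO data.

```python
import numpy as np
from minisom import MiniSom   # assumed third-party package (pip install MiniSom)

rng = np.random.default_rng(0)
# Toy archive of evaluated designs: 300 designs, 8 design variables + 3 objectives.
designs = rng.random((300, 11))

som = MiniSom(x=10, y=10, input_len=11, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(designs, num_iteration=2000)

# Map each design to its best-matching unit; how an objective varies across the map
# is the kind of visual trade-off analysis described above.
drag = designs[:, 8]                          # pretend column 8 is the drag objective
sums = np.zeros((10, 10))
counts = np.zeros((10, 10))
for d, obj in zip(designs, drag):
    i, j = som.winner(d)
    sums[i, j] += obj
    counts[i, j] += 1
mean_drag = np.divide(sums, counts, out=np.full((10, 10), np.nan), where=counts > 0)
print(np.round(mean_drag, 2))
```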
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
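A compact sketch of a convolutional-plus-LSTM architecture in the spirit of the framework above, written with the Keras API; the layer sizes, window shape and synthetic data are illustrative and do not reproduce the authors' configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_channels, n_classes = 128, 9, 6   # e.g. windows over 9 wearable-sensor channels

model = keras.Sequential([
    layers.Conv1D(64, kernel_size=5, activation="relu",
                  input_shape=(n_timesteps, n_channels)),   # local feature extraction
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(128, return_sequences=True),                # temporal dynamics of features
    layers.LSTM(128),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(256, n_timesteps, n_channels).astype("float32")   # toy sensor windows
y = np.random.randint(0, n_classes, 256)                             # toy activity labels
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print("predicted class for first window:", int(model.predict(X[:1]).argmax()))
```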
Bou Chakra, Elie; Hannes, Benjamin; Vieillard, Julien; Mansfield, Colin D.; Mazurczyk, Radoslav; Bouchard, Aude; Potempa, Jan; Krawczyk, Stanislas; Cabrera, Michel
2009-01-01
A novel approach to integrating biochip and microfluidic devices is reported in which microcontact printing is a key fabrication technique. The process is performed using an automated microcontact printer that has been developed as an application-specific tool. As proof-of-concept the instrument is used to consecutively and selectively graft patterns of antibodies at the bottom of a glass channel for use in microfluidic immunoassays. Importantly, feature collapse due to over compression of the PDMS stamp is avoided by fine control of the stamp’s compression during contact. The precise alignment of biomolecules at the intersection of microfluidic channel and integrated optical waveguides has been achieved, with antigen detection performed via fluorescence excitation. Thus, it has been demonstrated that this technology permits sequential microcontact printing of isolated features consisting of functional biomolecules at any position along a microfluidic channel and also that it is possible to precisely align these features with existing components. PMID:20161128
CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.
White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B
2017-12-28
The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the number of newly identified BL enzymes increases daily, it is imperative to develop a computational tool to classify the newly identified BL enzymes into one of their classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification and the performance of these existing methods is unsatisfactory. We addressed the unsatisfactory performance of the existing methods by implementing a Deep Learning approach called Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. The CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on the rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other architectures of CNN, Recurrent Neural Network, and Random Forest, the simple CNN architecture with only one convolutional layer performs the best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as Convolutional Neural Network (CNN) to develop a classifier for BL classification. Combined with feature selection on an exhaustive feature set and using balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.
Quantum Tunneling Affects Engine Performance.
Som, Sibendu; Liu, Wei; Zhou, Dingyu D Y; Magnotti, Gina M; Sivaramakrishnan, Raghu; Longman, Douglas E; Skodje, Rex T; Davis, Michael J
2013-06-20
We study the role of individual reaction rates on engine performance, with an emphasis on the contribution of quantum tunneling. It is demonstrated that the effect of quantum tunneling corrections for the reaction HO2 + HO2 = H2O2 + O2 can have a noticeable impact on the performance of a high-fidelity model of a compression-ignition (e.g., diesel) engine, and that an accurate prediction of ignition delay time for the engine model requires an accurate estimation of the tunneling correction for this reaction. The three-dimensional model includes detailed descriptions of the chemistry of a surrogate for a biodiesel fuel, as well as all the features of the engine, such as the liquid fuel spray and turbulence. This study is part of a larger investigation of how the features of the dynamics and potential energy surfaces of key reactions, as well as their reaction rate uncertainties, affect engine performance, and results in these directions are also presented here.
Preliminary design for a reverse Brayton cycle cryogenic cooler
NASA Technical Reports Server (NTRS)
Swift, Walter L.
1993-01-01
A long life, single stage, reverse Brayton cycle cryogenic cooler is being developed for applications in space. The system is designed to provide 5 W of cooling at a temperature of 65 Kelvin with a total cycle input power of less than 200 watts. Key features of the approach include high speed, miniature turbomachines; an all metal, high performance, compact heat exchanger; and a simple, high frequency, three phase motor drive. In Phase 1, a preliminary design of the system was performed. Analyses and trade studies were used to establish the thermodynamic performance of the system and the performance specifications for individual components. Key mechanical features for components were defined and assembly layouts for the components and the system were prepared. Critical materials and processes were identified. Component and brassboard system level tests were conducted at cryogenic temperatures. The system met the cooling requirement of 5 W at 65 K. The system was also operated over a range of cooling loads from 0.5 W at 37 K to 10 W at 65 K. Input power to the system was higher than target values. The heat exchanger and inverter met or exceeded their respective performance targets. The compressor/motor assembly was marginally below its performance target. The turboexpander met its aerodynamic efficiency target, but overall performance was below target because of excessive heat leak. The heat leak will be reduced to an acceptable value in the engineering model. The results of Phase 1 indicate that the 200 watt input power requirement can be met with state-of-the-art technology in a system which has very flexible integration requirements and negligible vibration levels.
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This key point detection approach requires little computation and offers high positioning accuracy and strong anti-noise ability; (2) Utilizing PCA-SIFT to describe key points. A 128-dimensional vector is formed with the SIFT method for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each eigenvector was projected onto this space to form a low-dimensional eigenvector. The key points were then re-described by these dimension-reduced eigenvectors. After reducing the dimension by PCA, the descriptor was reduced from the original 128 dimensions to 20. This reduces the dimensionality of the approximate nearest-neighbour search, thereby increasing overall speed; (3) Using the distance ratio between the nearest and second-nearest neighbours as the criterion for initial matching, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard falsely matched point pairs; and (4) Introducing an affine transformation model to correct the coordinate difference between the real-time image and the reference image, which completes the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and the ability to overcome rotation. Results show the effectiveness of the approach.
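A compact OpenCV/scikit-learn sketch of a similar FAST + PCA-SIFT + ratio-test pipeline. The fixed FAST threshold, the 20 PCA components and the 0.7 ratio are stand-in assumptions for the paper's self-adapting threshold and tuned values, and the heuristic geometric check and affine correction are omitted.

```python
# Sketch of FAST key points, PCA-reduced SIFT descriptors and the distance-ratio test.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def match_keypoints(img1, img2, ratio=0.7, n_components=20):
    fast = cv2.FastFeatureDetector_create(threshold=20)   # a self-adapting threshold could replace 20
    sift = cv2.SIFT_create()
    kp1, des1 = sift.compute(img1, fast.detect(img1, None))
    kp2, des2 = sift.compute(img2, fast.detect(img2, None))
    pca = PCA(n_components=n_components).fit(np.vstack([des1, des2]))
    d1, d2 = pca.transform(des1), pca.transform(des2)      # 128-D SIFT -> 20-D descriptors
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1.astype(np.float32), d2.astype(np.float32), k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]  # keep reliable pairs only
```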
Poly(A) code analyses reveal key determinants for tissue-specific mRNA alternative polyadenylation
Weng, Lingjie; Li, Yi; Xie, Xiaohui; Shi, Yongsheng
2016-01-01
mRNA alternative polyadenylation (APA) is a critical mechanism for post-transcriptional gene regulation and is often regulated in a tissue- and/or developmental stage-specific manner. An ultimate goal for the APA field has been to be able to computationally predict APA profiles under different physiological or pathological conditions. As a first step toward this goal, we have assembled a poly(A) code for predicting tissue-specific poly(A) sites (PASs). Based on a compendium of over 600 features that have known or potential roles in PAS selection, we have generated and refined a machine-learning algorithm using multiple high-throughput sequencing-based data sets of tissue-specific and constitutive PASs. This code can predict tissue-specific PASs with >85% accuracy. Importantly, by analyzing the prediction performance based on different RNA features, we found that PAS context, including the distance between alternative PASs and the relative position of a PAS within the gene, is a key feature for determining the susceptibility of a PAS to tissue-specific regulation. Our poly(A) code provides a useful tool for not only predicting tissue-specific APA regulation, but also for studying its underlying molecular mechanisms. PMID:27095026
A Probabilistic Palimpsest Model of Visual Short-term Memory
Matthey, Loic; Bays, Paul M.; Dayan, Peter
2015-01-01
Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204
Experimental quantum key distribution with source flaws
NASA Astrophysics Data System (ADS)
Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong
2015-09-01
Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments cannot be guaranteed to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
Simulating the Past, Present and Future of the Upper Troposphere and Lower Stratosphere
NASA Astrophysics Data System (ADS)
Gettelman, Andrew; Hegglin, Michaela
2010-05-01
A comprehensive assessment of coupled chemistry climate model (CCM) performance in the upper troposphere and lower stratosphere has been conducted with 18 models. Both qualitative and quantitative comparisons of model representation of UTLS dynamical, radiative and chemical structure have been conducted, using a collection of quantitative grading techniques. The models are able to reproduce the observed climatology of dynamical, radiative and chemical structure in the tropical and extratropical UTLS, despite relatively coarse vertical and horizontal resolution. Diagnostics of the Tropical Tropopause Layer (TTL), Tropopause Inversion Layer (TIL) and Extra-tropical Transition Layer (ExTL) are analyzed. The results provide new insight into the key processes that govern the dynamics and transport in the tropics and extra-tropics. The presentation will explain how models are able to reproduce key features of the UTLS, what features they do not reproduce, and why. Model trends over the historical period are also assessed and interannual variability is included in the metrics. Finally, key trends in the UTLS for the future with a given halogen and greenhouse gas scenario are presented, indicating significant changes in tropopause height and temperature, as well as UTLS ozone concentrations in the 21st century due to climate change and ozone recovery.
Shared periodic performer movements coordinate interactions in duo improvisations.
Eerola, Tuomas; Jakubowski, Kelly; Moran, Nikki; Keller, Peter E; Clayton, Martin
2018-02-01
Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets-(i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvisations-to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers' movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers' movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions.
EMDS 3.0: A modeling framework for coping with complexity in environmental assessment and planning.
K.M. Reynolds
2006-01-01
EMDS 3.0 is implemented as an ArcMap® extension and integrates the logic engine of NetWeaver® to perform landscape evaluations, and the decision modeling engine of Criterium DecisionPlus® for evaluating management priorities. Key features of the system's evaluation component include abilities to (1) reason about large, abstract, multifaceted ecosystem management...
High temperature arc-track resistant aerospace insulation
NASA Technical Reports Server (NTRS)
Dorogy, William
1994-01-01
The topics are presented in viewgraph form and include the following: high temperature aerospace insulation; Foster-Miller approach to develop a 300 C rated, arc-track resistant aerospace insulation; advantages and disadvantages of key structural features; summary goals and achievements of the phase 1 program; performance goals for selected materials; materials under evaluation; molecular structures of candidate polymers; candidate polymer properties; film properties; and a detailed program plan.
AE (Acoustic Emission) for Flip-Chip CGA/FCBGA Defect Detection
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
C-mode scanning acoustic microscopy (C-SAM) is a nondestructive inspection technique that uses ultrasound to show the internal features of a specimen. A very high or ultra-high-frequency ultrasound passes through a specimen to produce a visible acoustic microimage (AMI) of its inner features. As ultrasound travels into a specimen, the wave is absorbed, scattered or reflected. The response is highly sensitive to the elastic properties of the materials and is especially sensitive to air gaps. This specific characteristic makes AMI the preferred method for finding "air gaps" such as delamination, cracks, voids, and porosity. C-SAM analysis, which is a type of AMI, was widely used in the past for evaluation of plastic microelectronic circuits, especially for detecting delamination of direct die bonding. With the introduction of the flip-chip die attachment in a package, its use has been expanded to nondestructive characterization of the flip-chip solder bumps and underfill. Figure 1.1 compares visual and C-SAM inspection approaches for defect detection, especially for solder joint interconnections and hidden defects. C-SAM is specifically useful for package features like internal cracks and delamination. C-SAM not only allows for the visualization of the interior features, it has the ability to produce images on a layer-by-layer basis. Visual inspection, however, is only superior to C-SAM for the exposed features including solder dewetting, microcracks, and contamination. Ideally, a combination of various inspection techniques - visual, optical and SEM microscopy, C-SAM, and X-ray - needs to be performed in order to assure quality at part, package, and system levels. This report presents evaluations performed on various advanced packages/assemblies, especially the flip-chip die version of ball grid array/column grid array (BGA/CGA), using C-SAM equipment. Both external and internal equipment was used for evaluation. The outside facility provided images of the key features that could be detected using the most advanced C-SAM equipment with a skilled operator. Investigation continued using in-house equipment with its limitations. For comparison, representative X-rays of the assemblies were also gathered to show key defect detection features of these non-destructive techniques. Key images gathered and compared are as follows. The images of 2D X-ray and C-SAM for a plastic LGA assembly were compared, showing features that could be detected by either NDE technique; for this specific case, X-ray was a clear winner. Flip-chip CGA and FCBGA assemblies with and without heat sinks were evaluated by C-SAM; only the FCCGA package that had no heat sink could be fully analyzed for underfill and bump quality, and cross-sectional microscopy did not reveal the peripheral delamination features detected by C-SAM. A number of fine pitch PBGA assemblies were analyzed by C-SAM; even though the internal features of the package assemblies could be detected, C-SAM was unable to detect solder joint failure at either the package or board level. Twenty touch-ups with a soldering iron at a 700°F tip temperature, each lasting about 5 seconds, did not induce defects detectable in C-SAM images. Other techniques need to be considered to induce known defects for characterization.
Given NASA's emphasis on the use of microelectronic packages and assemblies and quality assurance on workmanship defect detection, understanding key features of various inspection systems that detect defects in the early stages of package and assembly is critical to developing approaches that will minimize future failures. Additional specific, tailored non-destructive inspection approaches could enable low-risk insertion of these advanced electronic packages having hidden and fine features.
A Novel Multi-Class Ensemble Model for Classifying Imbalanced Biomedical Datasets
NASA Astrophysics Data System (ADS)
Bikku, Thulasi; Sambasiva Rao, N., Dr; Rao, Akepogu Ananda, Dr
2017-08-01
This paper mainly focuses on developing a Hadoop-based framework with feature selection and classification models to classify high-dimensional data in heterogeneous biomedical databases. Extensive research has been performed in the fields of machine learning, big data and data mining for identifying patterns. The main challenge is extracting useful features generated from diverse biological systems. The proposed model can be used for predicting diseases in various applications and identifying the features relevant to particular diseases. With the exponential growth of biomedical repositories such as PubMed and Medline, an accurate predictive model is essential for knowledge discovery in a Hadoop environment. Extracting key features from unstructured documents often leads to uncertain results due to outliers and missing values. In this paper, we propose a two-phase map-reduce framework with a text preprocessor and a classification model. In the first phase, a mapper-based preprocessing method was designed to eliminate irrelevant features, missing values and outliers from the biomedical data. In the second phase, a map-reduce based multi-class ensemble decision tree model was designed and applied to the preprocessed mapper data to improve the true positive rate and computational time. The experimental results on complex biomedical datasets show that the performance of our proposed Hadoop-based multi-class ensemble model significantly outperforms state-of-the-art baselines.
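A toy, single-process sketch of the two-phase flow outlined above: a mapper-style cleaning step that drops records with missing values or gross outliers, followed by a multi-class tree ensemble. Scikit-learn's RandomForestClassifier stands in for the paper's Map-Reduce ensemble, the Hadoop plumbing is omitted, and the outlier threshold is an assumption.

```python
# Phase 1: mapper-style cleaning; Phase 2: multi-class decision-tree ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def map_clean(record):
    # drop records with missing values or gross outliers (threshold is illustrative)
    x, label = record
    if np.any(np.isnan(x)) or np.any(np.abs(x) > 1e6):
        return None
    return x, label

def reduce_train(records):
    # fit an ensemble of decision trees on the cleaned output of the mappers
    cleaned = [r for r in map(map_clean, records) if r is not None]
    X = np.array([x for x, _ in cleaned])
    y = np.array([label for _, label in cleaned])
    return RandomForestClassifier(n_estimators=200).fit(X, y)
```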
NASA Astrophysics Data System (ADS)
Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz
2014-03-01
Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
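A minimal sketch, under assumed data shapes, of the classification scheme described above: per-voxel local features, an SVM trained on expert-segmented cases, and leave-one-subject-out evaluation scored with the Dice coefficient. Feature extraction itself is stubbed out.

```python
# Leave-one-subject-out evaluation of a voxel-wise SVM lesion classifier.
import numpy as np
from sklearn.svm import SVC

def leave_one_out_segmentation(cases):
    """cases: list of (features, labels) per subject; features are per-voxel rows, labels 0/1."""
    dice_scores = []
    for i, (X_test, y_test) in enumerate(cases):
        X_train = np.vstack([X for j, (X, _) in enumerate(cases) if j != i])
        y_train = np.concatenate([y for j, (_, y) in enumerate(cases) if j != i])
        pred = SVC(kernel="rbf").fit(X_train, y_train).predict(X_test)
        overlap = 2 * np.sum((pred == 1) & (y_test == 1))        # Dice numerator
        dice_scores.append(overlap / (np.sum(pred == 1) + np.sum(y_test == 1)))
    return dice_scores
```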
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
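A short scikit-learn sketch of the core idea: fit multiclass LDA on labeled training spectra, then use distances in the resulting discriminant subspace as the task-specific metric for a new scene. Function and variable names are illustrative.

```python
# Learn an LDA transform from labeled spectra and return a distance in the discriminant space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def learn_metric(X_train, y_train):
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    def distance(a, b):
        # Euclidean distance between spectra after projection acts as the learned metric
        pa = lda.transform(np.atleast_2d(a))
        pb = lda.transform(np.atleast_2d(b))
        return float(np.linalg.norm(pa - pb))
    return distance
```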
A biometric method to secure telemedicine systems.
Zhang, G H; Poon, Carmen C Y; Li, Ye; Zhang, Y T
2009-01-01
Security and privacy are among the most crucial issues for data transmission in telemedicine systems. This paper proposes a solution for securing wireless data transmission in telemedicine systems, i.e. within a body sensor network (BSN), between the BSN and server as well as between the server and professionals who have access to the server. A unique feature of this solution is the generation of random keys by physiological data (i.e. a biometric approach) for securing communication at all 3 levels. In the performance analysis, inter-pulse interval of photoplethysmogram is used as an example to generate these biometric keys to protect wireless data transmission. The results of statistical analysis and computational complexity suggest that this type of key is random enough to make telemedicine systems resistant to attacks.
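An illustrative sketch only, not the authors' protocol: one common way to turn inter-pulse intervals (IPIs) of a photoplethysmogram into key material is to quantize each IPI and keep a few of its least-significant bits, which carry most of the randomness. The number of bits per IPI is an assumption.

```python
# Derive a key bit stream from photoplethysmogram inter-pulse intervals (sketch only).
import numpy as np

def ipi_key_bits(pulse_times_ms, bits_per_ipi=4):
    ipis = np.diff(np.asarray(pulse_times_ms))               # inter-pulse intervals in milliseconds
    quantized = np.round(ipis).astype(int)
    bits = [(q >> b) & 1 for q in quantized for b in range(bits_per_ipi)]
    return np.array(bits, dtype=np.uint8)                    # concatenated least-significant bits
```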
NASA Astrophysics Data System (ADS)
Cong, Chao; Liu, Dingsheng; Zhao, Lingjun
2008-12-01
This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key to this method is to automatically extract tie point pairs according to geographic characteristics from such heterogeneous images. Since there are large differences between such heterogeneous images with respect to texture and corner features, a more detailed analysis is performed to find similarities and differences between high-resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in remote sensing images. Based on these linear features, crossings and corners extracted from them are chosen as GCPs. On the other hand, a similar method was used to find the same features in DRGs. Finally, the Hausdorff distance was adopted to pick matching GCPs from the above two GCP groups. Experiments showed that the method can extract GCPs from such images with a reasonable RMS error.
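A small sketch of the final matching step: SciPy's directed Hausdorff distance is symmetrized and used to decide whether the corner/crossing patterns around two candidate GCPs agree. The pixel tolerance is an assumed parameter.

```python
# Symmetric Hausdorff distance between two 2-D point sets, used to accept a candidate GCP pair.
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(a, b):
    """a, b: arrays of shape (N, 2) and (M, 2) holding corner/crossing coordinates."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def accept_gcp_pair(corners_image, corners_drg, tol=5.0):
    # accept the pair if the local corner patterns agree to within `tol` pixels
    return symmetric_hausdorff(corners_image, corners_drg) <= tol
```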
Do the kinematics of a baulked take-off in springboard diving differ from those of a completed dive.
Barris, Sian; Farrow, Damian; Davids, Keith
2013-01-01
Consistency and invariance in movements are traditionally viewed as essential features of skill acquisition and elite sports performance. This emphasis on the stabilization of action has resulted in important processes of adaptation in movement coordination during performance being overlooked in investigations of elite sport performance. Here we investigate whether differences exist between the movement kinematics displayed by five elite springboard divers (age 17 ± 2.4 years) in the preparation phases of baulked and completed take-offs. The two-dimensional kinematic characteristics of the reverse somersault take-off phases (approach and hurdle) were recorded during normal training sessions and used for intra-individual analysis. All participants displayed observable differences in movement patterns at key events during the approach phase; however, the presence of similar global topological characteristics suggested that, overall, participants did not perform distinctly different movement patterns during completed and baulked dives. These findings provide a powerful rationale for coaches to consider assessing functional variability or adaptability of motor behaviour as a key criterion of successful performance in sports such as diving.
An Extended Chaotic Maps-Based Three-Party Password-Authenticated Key Agreement with User Anonymity
Lu, Yanrong; Li, Lixiang; Zhang, Hao; Yang, Yixian
2016-01-01
User anonymity is one of the key security features of an authenticated key agreement, especially for communicating messages via an insecure network. Owing to the better properties and higher performance of chaotic theory, chaotic maps have been introduced into security schemes, and hence numerous key agreement schemes have been put forward under chaotic maps. Recently, Xie et al. released an enhanced scheme based on Farash et al.'s scheme and claimed their improvements could withstand the security loopholes pointed out in the scheme of Farash et al., i.e., resistance to the off-line password guessing and user impersonation attacks. Nevertheless, through our careful analysis, the improvements released by Xie et al. still could not resolve the problems that troubled Farash et al.'s scheme. Besides, Xie et al.'s improvements failed to achieve user anonymity and session key security. With the purpose of eliminating the security risks of the scheme of Xie et al., we design an anonymous password-based three-party authenticated key agreement under chaotic maps. Both the formal analysis and the formal security verification using AVISPA are presented. Also, BAN logic is used to show the correctness of the enhancements. Furthermore, we also demonstrate that the design thwarts most of the common attacks. We also make a comparison between the recent chaotic-maps-based schemes and our enhancements in terms of performance. PMID:27101305
Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.
Youji Feng; Lixin Fan; Yihong Wu
2016-01-01
The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and the descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough in indexing binary features and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses non-binary pixel intensity differences available in descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude as compared with state-of-the-art methods, while comparable registration rate and localization accuracy are still maintained.
Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.
Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin
2016-01-01
Image registration is a key component of computer assistance in image guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present a method for image registration named Homographic Patch Feature Transform (HPFT) to match gastroscopic images. HPFT can be used for tracking lesions and for augmented reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejection of false matching pairs from corresponding results. Finally, HPFT is applied for processing in vivo gastroscopic data. The experimental results show that HPFT has stable performance in gastroscopic applications.
A flexible continuous-variable QKD system using off-the-shelf components
NASA Astrophysics Data System (ADS)
Comandar, Lucian C.; Brunner, Hans H.; Bettelli, Stefano; Fung, Fred; Karinou, Fotini; Hillerkuss, David; Mikroulis, Spiros; Wang, Dawei; Kuschnerov, Maxim; Xie, Changsong; Poppe, Andreas; Peev, Momtchil
2017-10-01
We present the development of a robust and versatile CV-QKD architecture based on commercially available optical and electronic components. The system uses a pilot tone for phase synchronization with a local oscillator, as well as local feedback loops to mitigate frequency and polarization drifts. Transmit and receive-side digital signal processing is performed fully in software, allowing for rapid protocol reconfiguration. The quantum link is complemented with a software stack for secure-key processing, key storage and encrypted communication. All these features allow for the system to be at the same time a prototype for a future commercial product and a research platform.
Advanced reactors and associated fuel cycle facilities: safety and environmental impacts.
Hill, R N; Nutt, W M; Laidler, J J
2011-01-01
The safety and environmental impacts of new technology and fuel cycle approaches being considered in current U.S. nuclear research programs are contrasted to conventional technology options in this paper. Two advanced reactor technologies, the sodium-cooled fast reactor (SFR) and the very high temperature gas-cooled reactor (VHTR), are being developed. In general, the new reactor technologies exploit inherent features for enhanced safety performance. A key distinction of advanced fuel cycles is spent fuel recycle facilities and new waste forms. In this paper, the performance of existing fuel cycle facilities and applicable regulatory limits are reviewed. Technology options to improve recycle efficiency, restrict emissions, and/or improve safety are identified. For a closed fuel cycle, potential benefits in waste management are significant, and key waste form technology alternatives are described.
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived perform analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output which was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as the engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
The change in critical technologies for computational physics
NASA Technical Reports Server (NTRS)
Watson, Val
1990-01-01
It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer by high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamics Simulation Facility. The next technology which this field requires is one that would eliminate visual clutter by extracting key features of simulations of physics and technology in order to create displays that clearly portray these key features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.
Modeling sports highlights using a time-series clustering framework and model interpretation
NASA Astrophysics Data System (ADS)
Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay
2005-01-01
In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events in a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach which uses only audience cheering as the key highlight class.
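A hedged sketch of the modeling step: Gaussian mixtures of increasing size are fitted to the "highlight" class features and the lowest-BIC model is kept, with BIC serving here as a stand-in for the minimum-description-length criterion used in the paper.

```python
# Select the mixture size by minimizing BIC, a common proxy for description length.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mdl_gmm(features, max_components=8):
    best, best_score = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="diag").fit(features)
        score = gmm.bic(features)          # lower BIC ~ shorter description of the data
        if score < best_score:
            best, best_score = gmm, score
    return best
```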
NIMEFI: Gene Regulatory Network Inference using Multiple Ensemble Feature Importance Algorithms
Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan
2014-01-01
One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available. PMID:24667482
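An illustrative sketch (not the published NIMEFI code) of the subsampling idea: a feature ranker, here an elastic net regressing the target gene's expression on all other genes, is run on random subsamples, the per-run rankings are averaged rank-wise, and the averaged ranks score candidate regulators. The subsample fraction and penalty are assumptions.

```python
# Cast a feature ranker into an ensemble feature importance method via subsampling.
import numpy as np
from sklearn.linear_model import ElasticNet

def ensemble_importance(X, y, n_runs=50, sample_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    rank_sum = np.zeros(p)
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(sample_frac * n), replace=False)
        coefs = np.abs(ElasticNet(alpha=0.01).fit(X[idx], y[idx]).coef_)
        ranks = np.argsort(np.argsort(-coefs))   # rank 0 = most important predictor in this run
        rank_sum += ranks
    return np.argsort(rank_sum)                   # predictors ordered by rank-wise averaged importance
```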
Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs
NASA Astrophysics Data System (ADS)
Harvey, David Benjamin Paul
A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo
2017-12-01
Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. The existing approaches to fold recognition mainly exploit the features derived from alignments of query protein against templates. These approaches have been shown to be successful for fold recognition at family level, but usually failed at superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated similarity between query protein and templates, and then assigned query protein with fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNN for fold recognition is that contact likelihood maps are essentially analogous to images, as they both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When further combining these features with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at fold level, 6% higher at superfamily level and 1% higher at family level. An independent assessment on SCOP_TEST dataset showed consistent performance improvement, indicating robustness of our approach. Furthermore, bi-clustering results of the extracted features are compatible with fold hierarchy of proteins, implying that these features are fold-specific. Together, these results suggest that the features extracted from predicted contacts are orthogonal to alignment-related features, and the combination of them could greatly facilitate fold recognition at superfamily/fold levels and template-based prediction of protein structures. Source code of DeepFR is freely available through https://github.com/zhujianwei31415/deepfr, and a web server is available through http://protein.ict.ac.cn/deepfr. zheng@itp.ac.cn or dbu@ict.ac.cn. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Guo, Dongwei; Wang, Zhe
2018-05-01
Convolutional neural networks (CNNs) achieve great success in computer vision; they can learn hierarchical representations from raw pixels and have outstanding performance in various image recognition tasks [1]. However, CNNs are easy to fool: it is possible to produce images totally unrecognizable to human eyes that CNNs believe with near certainty are familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by CNN, t-SNE and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the digital features extracted by t-SNE are associated by the EBAM. Thus we ensure recognition robustness through a comprehensive assessment of the two features. With our model, we obtain only an 8% error rate on fraudulent data. In systems that require a high safety factor, or in other key areas, strong robustness is extremely important; if image recognition robustness can be ensured, network security and production efficiency will be greatly improved.
ERIC Educational Resources Information Center
Dusenbury, Linda; Yoder, Nick
2017-01-01
The current document serves two purposes. First, it provides an overview of six key features of a high-quality, comprehensive package of policies and guidance to support student social and emotional learning (SEL). These features are based on Collaborative for Academic Social, and Emotional Learning's (CASEL's) review of the research literature on…
Loveday, Thomas; Wiggins, Mark W; Searle, Ben J; Festa, Marino; Schell, David
2013-02-01
The authors describe the development of a new, more objective method of distinguishing experienced, competent non-expert practitioners from expert practitioners within pediatric intensive care. Expert performance involves the acquisition and use of refined feature-event associations (cues) in the operational environment. Competent non-experts, although experienced, possess rudimentary cue associations in memory. Thus, they cannot respond as efficiently or as reliably as their expert counterparts, particularly when key diagnostic information is unavailable, such as that provided by dynamic cues. This study involved the application of four distinct tasks in which the use of relevant cues could be expected to increase both the accuracy and the efficiency of diagnostic performance. These tasks included both static and dynamic stimuli that were varied systematically. A total of 50 experienced pediatric intensive care staff took part in the study. The sample clustered into two levels across the tasks: participants who performed at a consistently high level throughout the four tasks were labeled experts, and participants who performed at a lower level throughout the tasks were labeled competent non-experts. The groups differed in their responses to the diagnostic scenarios presented in two of the tasks and in their ability to maintain performance in the absence of dynamic features. Experienced pediatricians can be decomposed into two groups on the basis of their capacity to acquire and use cues; these groups differ in their diagnostic accuracy and in their ability to maintain performance in the absence of dynamic features. The tasks may be used to identify practitioners who are failing to acquire expertise at a rate consistent with their experience, position, or training. This information may be used to guide targeted training efforts.
ERIC Educational Resources Information Center
Mirzeoglu, Ayse Dilsad
2014-01-01
This study is related to one of the teaching models, peer teaching which is used in physical education courses. The fundamental feature of peer teaching is defined "to structure a learning environment in which some students assume and carry out many of the key operations of instruction to assist other students in the learning process".…
Overview of AMS (CCSDS Asynchronous Message Service)
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2006-01-01
This viewgraph presentation gives an overview of the Consultative Committee for Space Data Systems (CCSDS) Asynchronous Message Service (AMS). The topics include: 1) Key Features; 2) A single AMS continuum; 3) The AMS Protocol Suite; 4) A multi-continuum venture; 5) Constraining transmissions; 6) Security; 7) Fault Tolerance; 8) Performance of Reference Implementation; 9) AMS vs Multicast (1); 10) AMS vs Multicast (2); 11) RAMS testing exercise; and 12) Results.
Resolving Phase Ambiguities In OQPSK
NASA Technical Reports Server (NTRS)
Nguyen, Tien M.
1991-01-01
Improved design for modulator and demodulator in offset quaternary phase-shift keying (OQPSK) communication system enables receiver to resolve ambiguity in estimated phase of received signal. Features include unique-code-word modulation and detection and digital implementation of Costas loop in carrier-recovery subsystem. Enhances performance of carrier-recovery subsystem, reduces complexity of receiver by removing redundant circuits from previous design, and eliminates dependence of timing in receiver upon parallel-to-serial-conversion clock.
A high performance linear equation solver on the VPP500 parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi
1994-12-31
This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of VPP500--(1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
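For context, a compact single-node NumPy/SciPy sketch of right-looking blocked LU decomposition without pivoting; the VPP500 solver distributes the same block updates across processors and overlaps them with crossbar data transfer, which is not shown here. Block size and function names are illustrative.

```python
# Right-looking blocked LU without pivoting; returns L (unit lower) and U packed in one matrix.
import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A, nb=64):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        for j in range(k, e):                                    # unblocked LU of the diagonal block
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U11 = np.triu(A[k:e, k:e])
            A[e:, k:e] = solve_triangular(U11, A[e:, k:e].T, trans='T', lower=False).T   # L21
            A[k:e, e:] = solve_triangular(L11, A[k:e, e:], lower=True, unit_diagonal=True)  # U12
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]                 # trailing update (the parallel part)
    return A  # np.tril(result, -1) + I gives L, np.triu(result) gives U
```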
Ion conducting membranes for aqueous flow battery systems.
Yuan, Zhizhang; Zhang, Huamin; Li, Xianfeng
2018-06-07
Flow batteries, aqueous flow batteries in particular, are the most promising candidates for stationary energy storage to realize the wide utilization of renewable energy sources. To meet the requirement of large-scale energy storage, there has been a growing interest in aqueous flow batteries, especially in novel redox couples and flow-type systems. However, the development of aqueous flow battery technologies is at an early stage and their performance can be further improved. As a key component of a flow battery, the membrane has a significant effect on battery performance. Currently, the membranes used in aqueous flow battery technologies are very limited. In this feature article, we first cover the application of porous membranes in vanadium flow battery technology, and then the membranes in most recently reported aqueous flow battery systems. Meanwhile, we hope that this feature article will inspire more efforts to design and prepare membranes with outstanding performance and stability, and then accelerate the development of flow batteries for large scale energy storage applications.
Using distances between Top-n-gram and residue pairs for protein remote homology detection.
Liu, Bin; Xu, Jinghao; Zou, Quan; Xu, Ruifeng; Wang, Xiaolong; Chen, Qingcai
2014-01-01
Protein remote homology detection is one of the central problems in bioinformatics, which is important for both basic research and practical application. Currently, discriminative methods based on Support Vector Machines (SVMs) achieve the state-of-the-art performance. Exploring feature vectors incorporating the position information of amino acids or other protein building blocks is a key step to improve the performance of the SVM-based methods. Two new methods for protein remote homology detection were proposed, called SVM-DR and SVM-DT. SVM-DR is a sequence-based method, in which the feature vector representation for a protein is based on the distances between residue pairs. SVM-DT is a profile-based method, which considers the distances between Top-n-gram pairs. Top-n-gram can be viewed as a profile-based building block of proteins, which is calculated from the frequency profiles. These two methods are position-dependent approaches incorporating the sequence-order information of protein sequences. Various experiments were conducted on a benchmark dataset containing 54 families and 23 superfamilies. Experimental results showed that these two new methods are very promising. Compared with the position-independent methods, the performance improvement is obvious. Furthermore, the proposed methods can also provide useful insights for studying the features of protein families. The better performance of the proposed methods demonstrates that position-dependent approaches are efficient for protein remote homology detection. Another advantage of our methods arises from the explicit feature space representation, which can be used to analyze the characteristic features of protein families. The source code of SVM-DT and SVM-DR is available at http://bioinformatics.hitsz.edu.cn/DistanceSVM/index.jsp.
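As a hedged illustration of the distance-based representation described above (not the authors' released code, which is available at the URL given), the following Python sketch builds an SVM-ready feature vector by counting residue pairs separated by each sequence distance up to an assumed cutoff; the alphabet, cutoff and normalization are illustrative choices.

    # Hypothetical sketch of a distance-based residue-pair feature vector in the
    # spirit of SVM-DR; max_distance and the normalization are assumptions.
    from itertools import product
    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def residue_pair_distance_features(sequence, max_distance=10):
        """Count residue pairs (a, b) separated by each distance 1..max_distance."""
        pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
        index = {p: i for i, p in enumerate(pairs)}
        features = np.zeros(len(pairs) * max_distance)
        for d in range(1, max_distance + 1):
            for i in range(len(sequence) - d):
                pair = sequence[i] + sequence[i + d]
                if pair in index:                      # skip non-standard residues
                    features[(d - 1) * len(pairs) + index[pair]] += 1
        return features / max(len(sequence), 1)        # simple length normalization

    # Example: vector for a toy sequence, ready to feed into an off-the-shelf SVM.
    vec = residue_pair_distance_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")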
Bone age assessment meets SIFT
NASA Astrophysics Data System (ADS)
Kashif, Muhammad; Jonas, Stephan; Haak, Daniel; Deserno, Thomas M.
2015-03-01
Bone age assessment (BAA) is a method of determining skeletal maturity and finding growth disorders in the skeleton of a person. BAA is frequently used in pediatric medicine but is also a time-consuming and cumbersome task for a radiologist. Conventionally, the Greulich and Pyle and the Tanner and Whitehouse methods are used for bone age assessment, which are based on visual comparison of left hand radiographs with a standard atlas. We present a novel approach for automated bone age assessment, combining scale invariant feature transform (SIFT) features and support vector machine (SVM) classification. In this approach, (i) data is grouped into 30 classes to represent the age range of 0-18 years, (ii) 14 epiphyseal ROIs (eROIs) are extracted from left hand radiographs, (iii) multi-level image thresholding, using the Otsu method, is applied to specify key points on bone and osseous tissues of the eROIs, (iv) SIFT features are extracted for the specified key points of each eROI of the hand radiograph, and (v) classification is performed using a multi-class extension of SVM. A total of 1101 radiographs from the University of Southern California are used in training and testing phases using 5-fold cross-validation. Evaluation is performed for two age ranges (0-18 years and 2-17 years) for comparison with previous work and the commercial product BoneXpert, respectively. Results improved significantly, with mean errors of 0.67 years and 0.68 years for the age ranges 0-18 years and 2-17 years, respectively. An accuracy of 98.09%, within a range of two years, was achieved.
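A minimal sketch of the described SIFT-plus-SVM idea for a single eROI is given below, assuming OpenCV and scikit-learn; the key-point selection rule, the descriptor pooling and the function names are simplifying assumptions, not the authors' implementation.

    # Sketch only: Otsu thresholding to pick key points, SIFT descriptors at those
    # points, and a multi-class SVM over pooled descriptors.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def eroi_descriptor(eroi_gray):
        # eroi_gray: uint8 grayscale eROI; single-level Otsu used here for brevity
        _, mask = cv2.threshold(eroi_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        ys, xs = np.nonzero(mask)
        keypoints = [cv2.KeyPoint(float(x), float(y), 8)
                     for x, y in zip(xs[::50], ys[::50])]   # subsample candidate points
        sift = cv2.SIFT_create()
        _, desc = sift.compute(eroi_gray, keypoints)
        if desc is None:
            return np.zeros(128)
        return desc.mean(axis=0)        # pool key-point descriptors into one vector

    # train_rois: list of grayscale eROIs, train_ages: age-class labels (0..29)
    # clf = SVC(kernel="rbf").fit([eroi_descriptor(r) for r in train_rois], train_ages)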
Junior doctors' extended work hours and the effects on their performance: the Irish case.
Flinn, Fiona; Armstrong, Claire
2011-04-01
To explore the relationship between junior doctors' long working hours and their performance in a variety of cognitive and clinical decision-making tests, and to consider the implications of performance decrements in such tests for healthcare quality. A within-subject design was used to eliminate variation related to individual differences. Each participant was tested twice, once post call and once rested. At each session, participants were tested on cognitive functioning and clinical decision-making. The study was based on six acute Irish hospitals during 2008. Thirty junior hospital doctors, aged 23 to 30 years, took part; 17 were female and 13 were male. Cognitive functioning was measured by the MindStreams Global Assessment Battery (NeuroTrax Corp., NY, USA). This is a set of computerized tests, designed for use in medical settings, that assesses performance in memory, executive function, visual spatial perception, verbal function, attention, information processing speed and motor skills. Clinical decision-making was tested using Key Features Problems. Each Key Features Problem consists of a case scenario followed by three to four questions about this scenario. In an effort to make it more realistic, the speed with which participants completed the three problems was also recorded. Participants' global cognitive scores, attention, information processing speed and motor skills were significantly worse post call than when rested. They also took longer to complete clinical decision-making questions in the post-call condition and obtained lower scores than when rested. There are significant negative changes in doctors' cognitive functioning and clinical decision-making performance that appear to be attributable to long working hours. This therefore raises the important question of whether working long hours decreases healthcare quality and compromises patient safety.
The MetOp second generation 3MI instrument
NASA Astrophysics Data System (ADS)
Manolis, Ilias; Grabarnik, Semen; Caron, Jérôme; Bézy, Jean-Loup; Loiselet, Marc; Betto, Maurizio; Barré, Hubert; Mason, Graeme; Meynart, Roland
2013-10-01
The MetOp-SG programme is a joint programme of EUMETSAT and ESA. ESA develops the prototype MetOp-SG satellites (including associated instruments) and procures, on behalf of EUMETSAT, the recurrent satellites (and associated instruments). Two parallel, competitive phase A/B1 studies for MetOp Second Generation (MetOp-SG) were concluded in May 2013. The implementation phases (B2/C/D/E) are planned to start in the first quarter of 2014. ESA is responsible for the instrument design of six missions, namely the Microwave Sounding mission (MWS), Scatterometer mission (SCA), Radio Occultation mission (RO), Microwave Imaging mission (MWI), Ice Cloud Imager (ICI) and Multi-viewing, Multi-channel, Multi-polarisation imaging mission (3MI). The paper will present the main performance characteristics of the 3MI instrument and will highlight the performance improvements with respect to its heritage, the POLDER instrument, such as the number of spectral channels and spectral range coverage, swath and ground spatial resolution. The engineering of some key performance requirements (multi-viewing, polarisation sensitivity, straylight etc.) will also be discussed. The results of the feasibility studies will be presented together with the programmatics for the instrument development. Several pre-development activities have been initiated to retire the highest risks and to demonstrate the ultimate performance of the 3MI optics. The scope, objectives and current status of those activities will be presented. Key technologies involved in the 3MI instrument design and implementation are considered to be: the optical design featuring aspheric optics, the implementation of broadband anti-reflection coatings featuring low polarisation and low de-phasing properties, and the development and qualification of polarisers with acceptable performance as well as spectral filters with good uniformity over a large clear aperture.
Missouri Program Highlights How Standards Make a Difference
ERIC Educational Resources Information Center
Killion, Joellen
2017-01-01
Professional development designed to integrate key features of research-based professional learning has positive and significant effects on teacher practice and student achievement in mathematics when implemented in schools that meet specified technology-readiness criteria. Key features of research-based professional learning include intensive…
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
The advantages of high-resolution CT scanners have allowed improved detection of lung cancers. Recently released positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study does show the efficacy of CT based screening, physicians often face the problems of deciding appropriate management strategies for maximizing patient survival and for preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance in predicting the likelihood of RFS for patients with NSCLC.
Defect generation in electronic devices under plasma exposure: Plasma-induced damage
NASA Astrophysics Data System (ADS)
Eriguchi, Koji
2017-06-01
The increasing demand for higher performance of ULSI circuits requires aggressive shrinkage of device feature sizes in accordance with Moore’s law. Plasma processing plays an important role in achieving fine patterns with anisotropic features in metal-oxide-semiconductor field-effect transistors (MOSFETs). This article comprehensively addresses the negative aspect of plasma processing — plasma-induced damage (PID). PID naturally not only modifies the surface morphology of materials but also degrades the performance and reliability of MOSFETs as a result of defect generation in the materials. Three key mechanisms of PID, i.e., physical, electrical, and photon-irradiation interactions, are overviewed in terms of modeling, characterization techniques, and experimental evidence reported so far. In addition, some of the emerging topics — control of parameter variability in ULSI circuits caused by PID and recovery of PID — are discussed as future perspectives.
What makes an automated teller machine usable by blind users?
Manzke, J M; Egan, D H; Felix, D; Krueger, H
1998-07-01
Fifteen blind and sighted subjects were asked for their requirements for automated teller machines (ATMs); the sighted subjects served as a control group for acceptance. Both groups also tested the usability of a partially operational ATM mock-up. This machine was based on an existing cash dispenser, providing natural speech output, different function menus and different key arrangements. Performance and subjective evaluation data of blind and sighted subjects were collected. All blind subjects were able to operate the ATM successfully. The implemented speech output was the main usability factor for them. The different interface designs did not significantly affect performance and subjective evaluation. Nevertheless, design recommendations can be derived from the requirement assessment. The sighted subjects were rather open to design modifications, especially the implementation of speech output. However, there was also a mismatch between the requirements of the two subject groups, mainly concerning the key arrangement.
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant opportunities for cost savings in both equipment protection and waste minimization.
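The downhill simplex template-fitting step can be illustrated with SciPy's Nelder-Mead implementation; the parabolic indent template and the squared-error residual below are illustrative assumptions, not the production template set used in the system.

    # Illustrative sketch: fit a simple geometric template (a parabolic indent
    # profile) to a measured profile slice using downhill simplex.
    import numpy as np
    from scipy.optimize import minimize

    def indent_template(x, center, depth, radius):
        """Idealized indent cross-section: a dip of given depth and radius."""
        dip = depth * np.clip(1 - ((x - center) / radius) ** 2, 0, None)
        return -dip

    def fit_indent(x, z):
        """Find indent parameters minimizing the squared residual via Nelder-Mead."""
        def cost(p):
            center, depth, radius = p
            return np.sum((z - indent_template(x, center, depth, radius)) ** 2)
        x0 = np.array([x.mean(), z.max() - z.min(), (x.max() - x.min()) / 4])
        return minimize(cost, x0, method="Nelder-Mead").x

    # x = positions along the wire (mm); z = measured height slice (mm)
    # center, depth, radius = fit_indent(x, z)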
Dynamic deformable models for 3D MRI heart segmentation
NASA Astrophysics Data System (ADS)
Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.
2002-05-01
Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.
NASA Technical Reports Server (NTRS)
Darcy, Eric; Davies, Frank
2009-01-01
A charger design that is 2-fault tolerant to catastrophic failures has been achieved for the Spacesuit Li-ion Battery, with the following key features. A power supply control circuit and 2 microprocessors independently control against overcharge. 3 microprocessors control against undercharge (false positive: Go for EVA) conditions. 2 independent channels provide functional redundancy. The charger is capable of charge balancing cell banks in series. Cell manufacturing and performance uniformity is excellent with both designs. Once a few outliers are removed, LV cells are slightly more uniform than MoliJ cells. If the cell balance feature of the charger is ever invoked, it will be an indication of a significant degradation issue, not a nominal condition.
Features and selection of vascular access devices.
Sansivero, Gail Egan
2010-05-01
To review venous anatomy and physiology, discuss assessment parameters before vascular access device (VAD) placement, and review VAD options. Journal articles, personal experience. A number of VAD options are available in clinical practice. Access planning should include comprehensive assessment, with attention to patient participation in the planning and selection process. Careful consideration should be given to long-term access needs and preservation of access sites. Oncology nurses are uniquely suited to perform a key role in VAD planning and placement. With knowledge of infusion therapy, anatomy and physiology, device options, and community resources, nurses can be key leaders in preserving vascular access and improving the safety and comfort of infusion therapy. Copyright 2010 Elsevier Inc. All rights reserved.
Evaluation of the effectiveness of color attributes for video indexing
NASA Astrophysics Data System (ADS)
Chupeau, Bertrand; Forest, Ronan
2001-10-01
Color features are reviewed and their effectiveness assessed in the application framework of key-frame clustering for abstracting unconstrained video. Existing color spaces and associated quantization schemes are first studied. Description of global color distribution by means of histograms is then detailed. In our work, 12 combinations of color space and quantization were selected, together with 12 histogram metrics. Their respective effectiveness with respect to picture similarity measurement was evaluated through a query-by-example scenario. For that purpose, a set of still-picture databases was built by extracting key frames from several video clips, including news, documentaries, sports and cartoons. Classical retrieval performance evaluation criteria were adapted to the specificity of our testing methodology.
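One of the evaluated combinations can be sketched with OpenCV as follows, assuming an HSV histogram compared with four standard histogram metrics; the bin counts are illustrative, not the settings used in the paper.

    # Sketch: HSV histogram per key frame and several similarity metrics.
    import cv2

    def hsv_histogram(bgr_frame, bins=(16, 4, 4)):
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                            [0, 180, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def frame_similarity(frame_a, frame_b):
        ha, hb = hsv_histogram(frame_a), hsv_histogram(frame_b)
        return {
            "correlation":   cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL),
            "chi_square":    cv2.compareHist(ha, hb, cv2.HISTCMP_CHISQR),
            "intersection":  cv2.compareHist(ha, hb, cv2.HISTCMP_INTERSECT),
            "bhattacharyya": cv2.compareHist(ha, hb, cv2.HISTCMP_BHATTACHARYYA),
        }

In a query-by-example test, each query key frame would be ranked against the database using one of these metrics.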
Maximising safety in the boiler house.
Derry, Carr
2013-03-01
Last month's HEJ featured an article, the second in our new series of guidance pieces aimed principally at Technician-level engineers, highlighting some of the key steps that boiler operators can take to maximise system performance and efficiency, and thus reduce both running costs and carbon footprint. In the third such article, Derry Carr, C.Env, I.Eng, BSc (Hons), M.I.Plant.E., M.S.O.E., technical manager & group gas manager at Dalkia, who is vice-chairman of the Combustion Engineering Association, examines the key regulatory and safety obligations for hospital energy managers and boiler technicians, a number of which have seen changes in recent years with revisions to guidance and other documentation.
Satellite Imagery Assisted Road-Based Visual Navigation System
NASA Astrophysics Data System (ADS)
Volkova, A.; Gibbens, P. W.
2016-06-01
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* to build a feature database. The same algorithm then detects features in an on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates them with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used.
Moreno, Andrew; Froehlig, John R; Bachas, Sharrol; Gunio, Drew; Alexander, Teressa; Vanya, Aaron; Wade, Herschel
2016-08-30
Multidrug resistance (MDR) refers to the acquired ability of cells to tolerate a broad range of toxic compounds. One mechanism cells employ is to increase the level of expression of efflux pumps for the expulsion of xenobiotics. A key feature uniting efflux-related mechanisms is multidrug (MD) recognition, either by efflux pumps themselves or by their transcriptional regulators. However, models describing MD binding by MDR effectors are incomplete, underscoring the importance of studies focused on the recognition elements and key motifs that dictate polyspecific binding. One such motif is the GyrI-like domain, which is found in several MDR proteins and is postulated to have been adapted for small-molecule binding and signaling. Here we report the solution binding properties and crystal structures of two proteins containing GyrI-like domains, SAV2435 and CTR107, bound to various ligands. Furthermore, we provide a comparison with deposited crystal structures of GyrI-like proteins, revealing key features of GyrI-like domains that not only support polyspecific binding but also are conserved among GyrI-like domains. Together, our studies suggest that GyrI-like domains perform evolutionarily conserved functions connected to multidrug binding and highlight the utility of these types of studies for elucidating mechanisms of MDR.
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust in terms of degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
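For context, a toy version of the classical RELIEF weight update that LHR extends is sketched below; LHR's local hyperlinear approximation itself is not reproduced, and the sampling scheme and distance choice are simplifications.

    # Toy RELIEF-style feature weighting: reward features that separate a sample
    # from its nearest miss and penalize those that separate it from its nearest hit.
    import numpy as np

    def relief_weights(X, y, n_iter=100, rng=None):
        rng = np.random.default_rng(rng)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            i = rng.integers(n)
            same = np.where(y == y[i])[0]
            same = same[same != i]
            diff = np.where(y != y[i])[0]
            if len(same) == 0 or len(diff) == 0:
                continue
            dist = lambda idx: np.abs(X[idx] - X[i]).sum(axis=1)
            hit = X[same[np.argmin(dist(same))]]      # nearest hit
            miss = X[diff[np.argmin(dist(diff))]]     # nearest miss
            w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
        return w / n_iter

    # Larger weights mark genes whose values separate the classes locally.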
Krishna, B. Suresh; Treue, Stefan
2016-01-01
Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
Element-topology-independent preconditioners for parallel finite element computations
NASA Technical Reports Server (NTRS)
Park, K. C.; Alexander, Scott
1992-01-01
A family of preconditioners for the solution of finite element equations are presented, which are element-topology independent and thus can be applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.
Experiment-scale molecular simulation study of liquid crystal thin films
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael
2014-03-01
Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.
Public Reporting of Hospital Patient Satisfaction: The Rhode Island Experience
Barr, Judith K.; Boni, Cathy E.; Kochurka, Kimberly A.; Nolan, Patricia; Petrillo, Marcia; Sofaer, Shoshanna; Waters, William
2002-01-01
This article describes a collaborative process for legislatively mandated public reporting of health care performance in Rhode Island that began with hospital patient satisfaction. The goals of the report were both quality improvement and public accountability. Key features addressed include: the legislative context for public reporting; widespread participation of stakeholders; the structure for decisionmaking; and the use of formative testing with cognitive interviews to get responses of consumers and others about the report's readability and comprehensibility. This experience and the lessons learned can guide other States considering public reporting on health care performance. PMID:12500470
Teede, Helena; Gibson-Helm, Melanie; Norman, Robert J; Boyle, Jacqueline
2014-01-01
Polycystic ovary syndrome (PCOS) is an under-recognized, common, and complex endocrinopathy. The name PCOS is a misnomer, and there have been calls for a change to reflect the broader clinical syndrome. The aim of the study was to determine perceptions held by women and primary health care physicians around key clinical features of PCOS and attitudes toward current and alternative names for the syndrome. We conducted a cross-sectional study utilizing a devised questionnaire. Participants were recruited throughout Australia via professional associations, women's health organizations, and a PCOS support group. Fifty-seven women with PCOS and 105 primary care physicians participated in the study. Perceptions of key clinical PCOS features and attitudes toward current and alternative syndrome names were investigated. Irregular periods were identified as a key clinical feature of PCOS by 86% of the women with PCOS and 90% of the primary care physicians. In both groups, 60% also identified hormone imbalance as a key feature. Among women with PCOS, 47% incorrectly identified ovarian cysts as key, 48% felt the current name is confusing, and 51% supported a change. Most primary care physicians agreed that the name is confusing (74%) and needs changing (81%); however, opinions on specific alternative names were divided. The name "polycystic ovary syndrome" is perceived as confusing, and there is general support for a change to reflect the broader clinical syndrome. Engagement of primary health care physicians and consumers is strongly recommended to ensure that an alternative name enhances understanding and recognition of the syndrome and its complex features.
Predicting Key Events in the Popularity Evolution of Online Information.
Hu, Ying; Hu, Changjun; Fu, Shushen; Fang, Mingzhe; Xu, Wenwen
2017-01-01
The popularity of online information generally experiences a rising and falling evolution. This paper considers the "burst", "peak", and "fade" key events together as a representative summary of popularity evolution. We propose a novel prediction task-predicting when popularity undergoes these key events. It is of great importance to know when these three key events occur, because doing so helps recommendation systems, online marketing, and containment of rumors. However, it is very challenging to solve this new prediction task due to two issues. First, popularity evolution has high variation and can follow various patterns, so how can we identify "burst", "peak", and "fade" in different patterns of popularity evolution? Second, these events usually occur in a very short time, so how can we accurately yet promptly predict them? In this paper we address these two issues. To handle the first one, we use a simple moving average to smooth variation, and then a universal method is presented for different patterns to identify the key events in popularity evolution. To deal with the second one, we extract different types of features that may have an impact on the key events, and then a correlation analysis is conducted in the feature selection step to remove irrelevant and redundant features. The remaining features are used to train a machine learning model. The feature selection step improves prediction accuracy, and in order to emphasize prediction promptness, we design a new evaluation metric which considers both accuracy and promptness to evaluate our prediction task. Experimental and comparative results show the superiority of our prediction solution.
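A rough sketch of the smoothing-and-thresholding idea for locating "burst", "peak" and "fade" is shown below; the window size and the burst/fade ratios are illustrative assumptions, not the thresholds used in the paper.

    # Sketch: smooth a popularity time series with a simple moving average and
    # locate the burst, peak and fade indices by thresholding.
    import numpy as np

    def key_events(popularity, window=5, burst_ratio=2.0, fade_ratio=0.2):
        """popularity: per-interval counts. Returns (burst, peak, fade) indices."""
        kernel = np.ones(window) / window
        smooth = np.convolve(popularity, kernel, mode="same")   # moving average
        peak = int(np.argmax(smooth))
        baseline = smooth[:peak].mean() if peak > 0 else smooth[0]
        burst = next((i for i in range(peak + 1)
                      if smooth[i] > burst_ratio * baseline), 0)
        fade = next((i for i in range(peak, len(smooth))
                     if smooth[i] < fade_ratio * smooth[peak]), len(smooth) - 1)
        return burst, peak, fade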
Communication target object recognition for D2D connection with feature size limit
NASA Astrophysics Data System (ADS)
Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee
2015-03-01
Recently, a new concept of device-to-device (D2D) communication, called "point-and-link communication", has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information such as SSIDs or MAC addresses by selecting the target image displayed on the user's own device. In this paper, we present an efficient object matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that considers the descriptor size and processing time. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale Invariant Feature Transform) features and object aspect ratios. To reduce the descriptor size to under 300 bytes, a limited number of SIFT key points were chosen as feature points and histograms were binarized while maintaining the required performance. Experimental results show the robustness and the efficiency of the proposed algorithm.
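The compact descriptor could be assembled along the following lines with OpenCV; the split of the 300-byte budget (a binarized 128-bin HSV histogram, the two strongest SIFT descriptors and a half-precision aspect ratio) is an assumption made for illustration, not the paper's exact layout.

    # Sketch: pack a binarized color histogram, a few SIFT descriptors and the
    # aspect ratio into a descriptor under 300 bytes (16 + 256 + 2 bytes here).
    import cv2
    import numpy as np

    def compact_descriptor(bgr_roi, n_keypoints=2):
        hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256]).flatten()
        hist_bits = (hist > hist.mean()).astype(np.uint8)       # binarized: 128 bits
        gray = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2GRAY)
        sift = cv2.SIFT_create()
        kps, desc = sift.detectAndCompute(gray, None)
        order = np.argsort([-kp.response for kp in kps])[:n_keypoints]
        sift_part = (np.clip(desc[order], 0, 255).astype(np.uint8).tobytes()
                     if desc is not None else b"")
        aspect = np.float16(bgr_roi.shape[1] / bgr_roi.shape[0])
        return np.packbits(hist_bits).tobytes() + sift_part + aspect.tobytes()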
A feature selection approach towards progressive vector transmission over the Internet
NASA Astrophysics Data System (ADS)
Miao, Ru; Song, Jia; Feng, Min
2017-09-01
WebGIS has been widely applied for visualizing and sharing geospatial information over the Internet. In order to improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed. Important features should be selected and transferred first, so the methods for measuring the importance of features should be further considered in the progressive transmission. However, studies on progressive transmission for large-volume vector data have mostly focused on map generalization in the field of cartography, and have rarely discussed the selection of geographic features quantitatively. This paper applies information theory to measure the importance of features in vector maps. A measurement model for the amount of information carried by vector features is defined to deal with feature selection issues. The measurement model involves a geometry factor, a spatial distribution factor and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is then presented to improve the transmission of vector data. To clearly demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
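A hedged sketch of a per-feature importance score combining the three factors named above is given below; the individual factor formulas, inputs and weights are assumptions rather than the paper's measurement model.

    # Sketch: combine geometric complexity, spatial distribution and thematic
    # informativeness into a single importance score per vector feature.
    import math

    def feature_importance(vertex_count, area, local_density, attribute_entropy,
                           w_geom=0.4, w_spatial=0.3, w_theme=0.3):
        geometry = math.log1p(vertex_count) + math.log1p(area)   # geometric complexity
        spatial = 1.0 / (1.0 + local_density)                    # rarer features rank higher
        thematic = attribute_entropy                             # informativeness of attributes
        return w_geom * geometry + w_spatial * spatial + w_theme * thematic

    # Features sorted by this score would be streamed first over the RTP channel.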
Feature highlighting enhances learning of a complex natural-science category.
Miyatsu, Toshiya; Gouravajhala, Reshma; Nosofsky, Robert M; McDaniel, Mark A
2018-04-26
Learning naturalistic categories, which tend to have fuzzy boundaries and vary on many dimensions, can often be harder than learning well defined categories. One method for facilitating the category learning of naturalistic stimuli may be to provide explicit feature descriptions that highlight the characteristic features of each category. Although this method is commonly used in textbooks and classrooms, theoretically it remains uncertain whether feature descriptions should advantage learning complex natural-science categories. In three experiments, participants were trained on 12 categories of rocks, either without or with a brief description highlighting key features of each category. After training, they were tested on their ability to categorize both old and new rocks from each of the categories. Providing feature descriptions as a caption under a rock image failed to improve category learning relative to providing only the rock image with its category label (Experiment 1). However, when these same feature descriptions were presented such that they were explicitly linked to the relevant parts of the rock image (feature highlighting), participants showed significantly higher performance on both immediate generalization to new rocks (Experiment 2) and generalization after a 2-day delay (Experiment 3). Theoretical and practical implications are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.
Chandrasekaran, Jeyamala; Thiruvengadam, S J
2015-01-01
Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.
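The S-box generation step can be illustrated with the 2D Henon map as follows; the quantization rule, the warm-up length and the classical parameters a = 1.4, b = 0.3 are simplifying assumptions, not the exact construction used in the paper.

    # Toy illustration: derive a key-dependent 8-bit S-box from the 2D Henon map
    # x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n.
    def henon_sbox(x0, y0, a=1.4, b=0.3, warmup=1000):
        x, y = x0, y0
        for _ in range(warmup):                     # discard transient iterations
            x, y = 1 - a * x * x + y, b * x
        sbox, seen = [], set()
        while len(sbox) < 256:
            x, y = 1 - a * x * x + y, b * x
            v = int(abs(x) * 1e6) % 256             # quantize the chaotic orbit to a byte
            if v not in seen:
                seen.add(v)
                sbox.append(v)
        return sbox                                  # a key-dependent permutation of 0..255

    # The (x0, y0) seed plays the role of the secret key, e.g. henon_sbox(0.1, 0.3);
    # the resulting S-box can replace the fixed S-boxes of Blowfish or DES.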
Kim, Jonghoon
2014-06-01
Information gathering ability has mainly been evaluated via checklists in clinical performance examinations (CPX). However, it has not yet been shown whether students record the information correctly in the postencounter note (PN), even when they asked questions or performed physical examinations (PE) about that information while interacting with standardized patients in CPX. This study addressed the necessity of introducing the PN to evaluate this ability in CPX. After patient encounters, students were instructed to write in the PN the findings of history taking and physical examination that they considered important information in approaching the patient's problems. PNs were scored using answer keys selected from checklist items that CPX experts considered should be recorded in the PN. PNs of six CPX cases from 54 students were analyzed. Correlation coefficients between the key-checklist scores and PN scores of the six cases were moderate to high (0.52 to 0.79). However, students frequently neglected some cardinal features of chief complaints, pertinent findings of past/social history and PE, and pertinent negative findings of associated symptoms in their PNs, even though these were checked as 'done' in the keys of the checklists. It is necessary to introduce the PN in CPX to evaluate students' ability to synthesize and integrate patient information.
Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing
2013-09-01
generation of the features from the key points. OpenCV uses Euclidean distance to match the key points and has the option to use Manhattan distance instead; the feature vector includes polarity and intensity information. The final step is matching the key points. In OpenCV, key points can be matched using Euclidean or Manhattan distance, and OpenCV also offers the function radiusMatch, in which a pair must have a distance less than a given maximum distance.
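A self-contained, hedged example of the matching options referred to above (brute-force matching under Euclidean or Manhattan distance, and radiusMatch with a maximum-distance cutoff) is shown below; the image file names and the cutoff value are placeholders.

    # Sketch: SIFT key points matched with BFMatcher under L2 (Euclidean) or
    # L1 (Manhattan) norms, plus radiusMatch with a maximum allowed distance.
    import cv2

    img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
    img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    bf_l2 = cv2.BFMatcher(cv2.NORM_L2)    # Euclidean distance
    bf_l1 = cv2.BFMatcher(cv2.NORM_L1)    # Manhattan distance

    best = bf_l2.match(des1, des2)                    # one best match per query descriptor
    within = bf_l2.radiusMatch(des1, des2, 200.0)     # all matches under the 200.0 cutoff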
Impact and Crashworthiness Characteristics of Venera Type Landers for Future Venus Missions
NASA Technical Reports Server (NTRS)
Schroeder, Kevin; Bayandor, Javid; Samareh, Jamshid
2016-01-01
In this paper an in-depth investigation of the structural design of the Venera 9-14 landers is presented. A complete reverse engineering of the Venera lander was required. The lander was broken down into its fundamental components and analyzed. This provided insights into the hidden features of the design. A trade study was performed to find the sensitivity of the lander's overall mass to the variation of several key parameters. For the lander's legs, the location, length, configuration, and number are all parameterized. The size of the impact ring, the radius of the drag plate, and other design features are also parameterized, and all of these features were correlated to the change of mass of the lander. A multi-fidelity design tool for further investigation of the parameterized lander was developed. As a design was passed down from one level to the next, the fidelity, complexity, accuracy, and run time of the model increased. The low-fidelity model was a highly nonlinear analytical model developed to rapidly predict the mass of each design. The medium- and high-fidelity models utilized an explicit finite element framework to investigate the performance of various landers upon impact with the surface under a range of landing conditions. This methodology allowed for a large variety of designs to be investigated by the analytical model, which identified designs with the optimum structural mass to payload ratio. As promising designs emerged, investigations in the higher fidelity models focused on establishing their reliability and crashworthiness. The developed design tool efficiently modelled and tested the best concepts for any scenario based on critical Venusian mission requirements and constraints. Through this program, the strengths and weaknesses inherent in the Venera-type landers were thoroughly investigated. Key features identified for the design of robust landers will be used as foundations for the development of the next generation of landers for future exploration missions to Venus.
The Light Microscopy Module Design and Performance Demonstrations
NASA Technical Reports Server (NTRS)
Motil, Susan M.; Snead, John H.; Griffin, DeVon W.; Hovenac, Edward A.
2003-01-01
The Light Microscopy Module (LMM) is a state-of-the-art space station payload to provide investigations in the fields of fluids, condensed matter physics, and biological sciences. The LMM hardware will reside inside the Fluids Integrated Rack (FIR), a multi-user facility class payload that will provide fundamental services for the LMM and future payloads. LMM and FIR will be launched in 2005 and both will reside in the Destiny module of the International Space Station (ISS). There are five experiments to be performed within the LMM. This paper will provide a description of the initial five experiments: the supporting FIR subsystems; LMM design; capabilities and key features; and a summary of performance demonstrations.
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images with islands removed. Firstly, the coastline data is extracted and all of the land area is labeled by using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background near the edge of the coastline. Based on this multi-Gaussian sea background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results show that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
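The multi-Gaussian sea-background step could be realized with a Gaussian mixture model, for example as in the following scikit-learn sketch; the number of components and the log-likelihood threshold are assumptions, not the paper's settings.

    # Sketch: fit a Gaussian mixture to 3D feature vectors (local entropy, local
    # texture, local gradient mean) sampled from known sea pixels, then score
    # border pixels against the learned sea background model.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_sea_model(sea_features, n_components=3):
        """sea_features: (N, 3) array of [entropy, texture, gradient_mean] samples."""
        return GaussianMixture(n_components=n_components,
                               covariance_type="full").fit(sea_features)

    def classify_border_pixels(model, border_features, log_lik_threshold=-10.0):
        """Return True for pixels whose features are well explained by the sea model."""
        return model.score_samples(border_features) > log_lik_threshold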
ICAN: Integrated composites analyzer
NASA Technical Reports Server (NTRS)
Murthy, P. L. N.; Chamis, C. C.
1984-01-01
The ICAN computer program performs all the essential aspects of mechanics/analysis/design of multilayered fiber composites. Modular, open-ended and user friendly, the program can handle a variety of composite systems having one type of fiber and one matrix as constituents as well as intraply and interply hybrid composite systems. It can also simulate isotropic layers by considering a primary composite system with negligible fiber volume content. This feature is specifically useful in modeling thin interply matrix layers. Hygrothermal conditions and various combinations of in-plane and bending loads can also be considered. Usage of this code is illustrated with a sample input and the generated output. Some key features of output are stress concentration factors around a circular hole, locations of probable delamination, a summary of the laminate failure stress analysis, free edge stresses, microstresses and ply stress/strain influence coefficients. These features make ICAN a powerful, cost-effective tool to analyze/design fiber composite structures and components.
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
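A rough bag-of-lines-style descriptor can be sketched with OpenCV by detecting line segments and histogramming them by orientation and length; the binning scheme below is an assumption and not the authors' exact BoL construction.

    # Sketch: detect line segments and build a normalized 2D histogram over
    # segment orientation and length as a compact scene descriptor.
    import cv2
    import numpy as np

    def bag_of_lines(gray, angle_bins=8, length_bins=4, max_len=200):
        edges = cv2.Canny(gray, 50, 150)
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                   minLineLength=15, maxLineGap=3)
        hist = np.zeros((angle_bins, length_bins))
        if segments is None:
            return hist.flatten()
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = (np.arctan2(y2 - y1, x2 - x1) % np.pi) / np.pi   # orientation in [0, 1)
            length = min(np.hypot(x2 - x1, y2 - y1), max_len - 1) / max_len
            hist[int(angle * angle_bins), int(length * length_bins)] += 1
        return hist.flatten() / max(hist.sum(), 1)   # normalized scene descriptor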
Automating the generation of finite element dynamical cores with Firedrake
NASA Astrophysics Data System (ADS)
Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas
2017-04-01
The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics which are executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: A vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows. High aspect ratio layered meshes suitable for ocean and atmosphere domains. Curved elements for high accuracy representations of the sphere. Support for non-finite element operators, such as parametrisations. Access to PETSc, a world-leading library of programmable linear and nonlinear solvers. High performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
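To illustrate the high-level notation, the following is a minimal Firedrake-style example that solves a Helmholtz-type problem by stating its weak form directly; it is a generic sketch modelled on Firedrake's introductory examples, not code from Gusto or Thetis.

    # Sketch: weak form of -div(grad u) + u = f on the unit square, solved with
    # piecewise-linear continuous elements and a PETSc Krylov solver.
    from firedrake import *

    mesh = UnitSquareMesh(32, 32)                    # structured unit-square mesh
    V = FunctionSpace(mesh, "CG", 1)                 # continuous piecewise-linear space
    u, v = TrialFunction(V), TestFunction(V)

    x, y = SpatialCoordinate(mesh)
    f = Function(V)
    f.interpolate((1 + 8 * pi * pi) * cos(2 * pi * x) * cos(2 * pi * y))

    a = (dot(grad(u), grad(v)) + u * v) * dx         # bilinear form
    L = f * v * dx                                   # linear form

    uh = Function(V)
    solve(a == L, uh, solver_parameters={"ksp_type": "cg"})   # conjugate gradients via PETSc

Firedrake generates and executes the low-level parallel C code for the assembly behind this notation, which is the property the abstract highlights.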
Secure and Privacy Enhanced Gait Authentication on Smart Phone
Choi, Deokjai
2014-01-01
Smart environments established by the development of mobile technology have brought vast benefits to human beings. However, authentication mechanisms on portable smart devices, particularly conventional biometric based approaches, still raise security and privacy concerns. These traditional systems are mostly based on pattern recognition and machine learning algorithms, wherein original biometric templates or extracted features are stored in unconcealed form for matching against a new biometric sample in the authentication phase. In this paper, we propose a novel gait based authentication using a biometric cryptosystem to enhance system security and user privacy on the smart phone. Extracted gait features are merely used to biometrically encrypt a cryptographic key which acts as the authentication factor. Gait signals are acquired by using an inertial sensor, namely an accelerometer, in the mobile device, and error correcting codes are adopted to deal with the natural variation of gait measurements. We evaluate our proposed system on a dataset consisting of gait samples of 34 volunteers. We achieved the lowest false acceptance rate (FAR) and false rejection rate (FRR) of 3.92% and 11.76%, respectively, for a key length of 50 bits. PMID:24955403
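A highly simplified sketch of the key-binding idea (a fuzzy-commitment-style scheme) is given below, with a trivial repetition code standing in for a proper error-correcting code such as BCH; the code rate and bit lengths are assumptions made for illustration, not the paper's construction.

    # Sketch: bind a secret key to a binarized gait feature vector. Small gait
    # variations flip a few bits and are absorbed by the (toy) repetition code.
    import numpy as np

    def repeat_encode(key_bits, r=5):
        return np.repeat(key_bits, r)

    def repeat_decode(code_bits, r=5):
        return (code_bits.reshape(-1, r).sum(axis=1) > r // 2).astype(np.uint8)

    def enroll(key_bits, gait_bits, r=5):
        # stored helper data; reveals neither the key nor the gait template directly
        return np.bitwise_xor(repeat_encode(key_bits, r), gait_bits)

    def authenticate(helper, fresh_gait_bits, r=5):
        return repeat_decode(np.bitwise_xor(helper, fresh_gait_bits), r)  # recovered key

    # With a 50-bit key and r=5, gait_bits must be a 250-bit binarized feature vector.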
NASA Astrophysics Data System (ADS)
Huschauer, A.; Blas, A.; Borburgh, J.; Damjanovic, S.; Gilardoni, S.; Giovannozzi, M.; Hourican, M.; Kahle, K.; Le Godec, G.; Michels, O.; Sterbini, G.; Hernalsteens, C.
2017-06-01
Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.
Robust efficient video fingerprinting
NASA Astrophysics Data System (ADS)
Puri, Manika; Lubin, Jeffrey
2009-02-01
We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
How Physician Perspectives on E-Prescribing Evolve over Time
Patel, Vaishali; Pfoh, Elizabeth R.; Kaushal, Rainu
2016-01-01
Background Physicians are expending tremendous resources transitioning to new electronic health records (EHRs), with electronic prescribing as a key functionality of most systems. Physician dissatisfaction post-transition can be quite marked, especially initially. However, little is known about how physicians' experiences using new EHRs for e-prescribing evolve over time. We previously published a qualitative case study about the early physician experience transitioning from an older to a newer, more robust EHR, in the outpatient setting, focusing on their perceptions of the electronic prescribing functionality. Objective Our current objective was to examine how perceptions about using the new EHR evolved over time, again with a focus on electronic prescribing. Methods We interviewed thirteen internists at an academic medical center-affiliated ambulatory care clinic who had transitioned to the new EHR two years prior. We used a grounded theory approach to analyze semi-structured interviews and generate key themes. Results We identified five themes: efficiency and usability, effects on safety, ongoing training requirements, customization, and competing priorities for the EHR. We found that even for experienced e-prescribers, achieving prior levels of perceived prescribing efficiency took nearly two years. Despite the fact that speed in performing prescribing-related tasks was highly important, most were still not utilizing system shortcuts or customization features designed to maximize efficiency. Alert fatigue remained common. However, direct transmission of prescriptions to pharmacies was highly valued and its benefits generally outweighed the other features considered poorly designed for physician workflow. Conclusions Ensuring that physicians are able to perform key prescribing tasks efficiently is critical to the perceived value of e-prescribing applications. However, successful transitions may take longer than expected, and e-prescribing system features that do not support workflow or require constant upgrades may further prolong the process. Additionally, as system features continually evolve, physicians may need ongoing training and support to maintain efficiency. PMID:27786335
Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.
Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein
2017-12-13
As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is specifically an issue during continuous EEG recording sessions, and identifying such artifacts among useful EEG components is therefore a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfaces. In this study, we aim to design a new generic framework to process and characterize an EEG recording as a multi-component and non-stationary signal, with the aim of localizing and identifying its components (e.g., artifacts). In the proposed method, we bring three complementary algorithms together to enhance the efficiency of the system: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We apply a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space to various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion to TF space, analysis using MRA, and extracting a set of suitable features and applying a proper predictive model is effective in enhancing the EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely, the 1D wavelet transform. Our experimental results reveal that the proposed method outperforms the 1D wavelet approach.
Common Bolted Joint Analysis Tool
NASA Technical Reports Server (NTRS)
Imtiaz, Kauser
2011-01-01
Common Bolted Joint Analysis Tool (comBAT) is an Excel/VB-based bolted joint analysis/optimization program that lays out a systematic foundation for an inexperienced or seasoned analyst to determine fastener size, material, and assembly torque for a given design. Analysts are able to perform numerous what-if scenarios within minutes to arrive at an optimal solution. The program evaluates input design parameters, performs joint assembly checks, and steps through numerous calculations to arrive at several key margins of safety for each member in a joint. It also checks for joint gapping, provides fatigue calculations, and generates joint diagrams for a visual reference. Optimum fastener size and material, as well as correct torque, can then be provided. Analysis methodology, equations, and guidelines are provided throughout the solution sequence so that this program does not become a "black box" for the analyst. There are built-in databases that reduce the legwork required by the analyst. Each step is clearly identified and results are provided in number format, as well as color-coded spelled-out words to draw user attention. The three key features of the software are robust technical content, innovative and user-friendly I/O, and a large database. The program addresses every aspect of bolted joint analysis and proves to be an instructional tool at the same time. It saves analysis time, has intelligent messaging features, and catches operator errors in real time.
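As a flavour of the kind of calculation such a tool automates, here is a back-of-envelope sketch of the short-form torque-preload relation and a simple tensile margin of safety; the nut factor, torque and allowable load are assumed example values, not outputs of comBAT.

```python
# Illustrative bolted-joint arithmetic (not comBAT's methodology or data).
def preload_from_torque(torque_nm, nut_factor, diameter_m):
    """Short-form relation T = K * D * F, solved for the preload F."""
    return torque_nm / (nut_factor * diameter_m)

def margin_of_safety(allowable_load_n, applied_load_n, factor_of_safety):
    """MS = allowable / (FS * applied) - 1; a positive value passes."""
    return allowable_load_n / (factor_of_safety * applied_load_n) - 1.0

preload = preload_from_torque(torque_nm=25.0, nut_factor=0.2, diameter_m=0.006)
ms = margin_of_safety(allowable_load_n=30_000.0, applied_load_n=preload,
                      factor_of_safety=1.25)
print(f"preload = {preload:.0f} N, tensile margin of safety = {ms:+.2f}")
```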
Dissociation between melodic and rhythmic processing during piano performance from musical scores.
Bengtsson, Sara L; Ullén, Fredrik
2006-03-01
When performing or perceiving music, we experience the melodic (spatial) and rhythmic aspects as a unified whole. Moreover, the motor program theory stipulates that the relative timing and the serial order of the movement are invariant features of a motor program. Still, clinical and psychophysical observations suggest independent processing of these two aspects, in both production and perception. Here, we used functional magnetic resonance imaging to dissociate between brain areas processing the melodic and the rhythmic aspects during piano playing from musical scores. This behavior requires that the pianist decodes two types of information from the score in order to produce the desired piece of music. The spatial location of a note head determines which piano key to strike, and the various features of the note, such as the stem and flags, determine the timing of each key stroke. We found that the medial occipital lobe, the superior temporal lobe, the rostral cingulate cortex, the putamen and the cerebellum process the melodic information, whereas the lateral occipital and the inferior temporal cortex, the left supramarginal gyrus, the left inferior and ventral frontal gyri, the caudate nucleus, and the cerebellum process the rhythmic information. Thus, we suggest a dissociated involvement of the dorsal visual stream in spatial pitch processing and the ventral visual stream in temporal movement preparation. We propose that this dissociated organization may be important for fast learning and flexibility in motor control.
NASA Astrophysics Data System (ADS)
Mendel, Kayla R.; Li, Hui; Sheth, Deepa; Giger, Maryellen L.
2018-02-01
With growing adoption of digital breast tomosynthesis (DBT) in breast cancer screening protocols, it is important to compare the performance of computer-aided diagnosis (CAD) in the diagnosis of breast lesions on DBT images with that on conventional full-field digital mammography (FFDM). In this study, we retrospectively collected FFDM and DBT images of 78 lesions from 76 patients, each containing lesions that were biopsy-proven as either malignant or benign. A square region of interest (ROI) was placed to fully cover the lesion on each FFDM image, DBT synthesized 2D image, and DBT key slice image in the cranial-caudal (CC) and mediolateral-oblique (MLO) views. Features were extracted from each ROI using a pre-trained convolutional neural network (CNN). These features were then input to a support vector machine (SVM) classifier, and the area under the ROC curve (AUC) was used as the figure of merit. We found that in both the CC view and the MLO view, the synthesized 2D image performed best (AUC = 0.814 and AUC = 0.881, respectively) in the task of lesion characterization. Small database size was a key limitation in this study and could lead to overfitting in the application of the SVM classifier. In future work, we plan to expand this dataset and to explore more robust deep learning methodology such as fine-tuning.
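A minimal sketch of this transfer-learning pipeline is given below: features from a pretrained CNN feed a linear SVM and performance is summarised by ROC AUC. The VGG-16 backbone, the ImageNet preprocessing and the cross-validation scheme are assumptions for illustration, not details reported in the abstract.

```python
# Pretrained-CNN features -> SVM classifier -> AUC (sketch with assumed components).
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = backbone.classifier[:-1]   # keep the 4096-d penultimate features
backbone.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def cnn_features(roi_rgb_uint8):
    with torch.no_grad():
        return backbone(preprocess(roi_rgb_uint8).unsqueeze(0)).squeeze(0).numpy()

def lesion_auc(rois, labels):
    """rois: list of HxWx3 uint8 ROIs; labels: 1 = malignant, 0 = benign."""
    X = np.stack([cnn_features(r) for r in rois])
    scores = cross_val_predict(SVC(kernel="linear", probability=True), X, labels,
                               cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(labels, scores)
```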
Scaling of Performance in Liquid Propellant Rocket Engine Combustors
NASA Technical Reports Server (NTRS)
Hulka, James R.
2007-01-01
This paper discusses scaling of combustion and combustion performance in liquid propellant rocket engine combustion devices. In development of new combustors, comparisons are often made between predicted performance in a new combustor and measured performance in another combustor with different geometric and thermodynamic characteristics. Without careful interpretation of some key features, the comparison can be misinterpreted and erroneous information used in the design of the new device. This paper provides a review of this performance comparison, including a brief review of the initial liquid rocket scaling research conducted during the 1950s and 1960s, a review of the typical performance losses encountered and how they scale, a description of the typical scaling procedures used in development programs today, and finally a review of several historical development programs to see what insight they can bring to the questions at hand.
Scaling of Performance in Liquid Propellant Rocket Engine Combustion Devices
NASA Technical Reports Server (NTRS)
Hulka, James R.
2008-01-01
This paper discusses scaling of combustion and combustion performance in liquid propellant rocket engine combustion devices. In development of new combustors, comparisons are often made between predicted performance in a new combustor and measured performance in another combustor with different geometric and thermodynamic characteristics. Without careful interpretation of some key features, the comparison can be misinterpreted and erroneous information used in the design of the new device. This paper provides a review of this performance comparison, including a brief review of the initial liquid rocket scaling research conducted during the 1950s and 1960s, a review of the typical performance losses encountered and how they scale, a description of the typical scaling procedures used in development programs today, and finally a review of several historical development programs to see what insight they can bring to the questions at hand.
Shared periodic performer movements coordinate interactions in duo improvisations
Jakubowski, Kelly; Moran, Nikki; Keller, Peter E.
2018-01-01
Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets—(i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvizations—to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers’ movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers’ movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions. PMID:29515867
Building intelligent communication systems for handicapped aphasiacs.
Fu, Yu-Fen; Ho, Cheng-Seen
2010-01-01
This paper presents an intelligent system allowing handicapped aphasiacs to perform basic communication tasks. It has the following three key features: (1) A 6-sensor data glove measures the finger gestures of a patient in terms of the bending degrees of his fingers. (2) A finger language recognition subsystem recognizes language components from the finger gestures. It employs multiple regression analysis to automatically extract proper finger features so that the recognition model can be fast and correctly constructed by a radial basis function neural network. (3) A coordinate-indexed virtual keyboard allows the users to directly access the letters on the keyboard at a practical speed. The system serves as a viable tool for natural and affordable communication for handicapped aphasiacs through continuous finger language input.
Subatomic Features on the Silicon (111)-(7x7) Surface Observed by Atomic Force Microscopy.
Giessibl; Hembacher; Bielefeldt; Mannhart
2000-07-21
The atomic force microscope images surfaces by sensing the forces between a sharp tip and a sample. If the tip-sample interaction is dominated by short-range forces due to the formation of covalent bonds, the image of an individual atom should reflect the angular symmetry of the interaction. Here, we report on a distinct substructure in the images of individual adatoms on silicon (111)-(7x7), two crescents with a spherical envelope. The crescents are interpreted as images of two atomic orbitals of the front atom of the tip. Key for the observation of these subatomic features is a force-detection scheme with superior noise performance and enhanced sensitivity to short-range forces.
Support vector machine for automatic pain recognition
NASA Astrophysics Data System (ADS)
Monwar, Md Maruf; Rezaei, Siamak
2009-02-01
Facial expressions are a key index of emotion, and the interpretation of such expressions is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces in the stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
Brynsvold, Glen V.; Snyder, Jr., Harold J.
1976-06-22
An internal core tightener which is a linear actuated (vertical actuation motion) expanding device utilizing a minimum of moving parts to perform the lateral tightening function. The key features are: (1) large contact areas to transmit loads during reactor operation; (2) actuation cam surfaces loaded only during clamping and unclamping operation; (3) separation of the parts and internal operation involved in the holding function from those involved in the actuation function; and (4) preloaded pads with compliant travel at each face of the hexagonal assembly at the two clamping planes to accommodate thermal expansion and irradiation induced swelling. The latter feature enables use of a "fixed" outer core boundary, and thus eliminates the uncertainty in gross core dimensions, and potential for rapid core reactivity changes as a result of core dimensional change.
Exploring the capabilities of support vector machines in detecting silent data corruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors – by leveraging epsilon-insensitive support vector machine regression – to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study on the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) up to 99% yet suffer a less than 1% false positive rate for most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
Exploring the capabilities of support vector machines in detecting silent data corruptions
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo; ...
2018-02-01
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors – by leveraging epsilon-insensitive support vector machine regression – to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study on the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) up to 99% yet suffer a less than 1% false positive rate for most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
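One way to realise the temporal flavour of such a detector is sketched below: an epsilon-insensitive SVR predicts each variable's next value from its recent history, and an observation is flagged when it falls outside a calibrated residual bound. The lag, kernel and threshold policy are illustrative assumptions rather than the parameters tuned in the paper.

```python
# Temporal SDC detector sketch: SVR one-step prediction plus a residual bound.
import numpy as np
from sklearn.svm import SVR

LAG = 3  # number of previous time steps used to predict the current value

def train_temporal_detector(series, epsilon=1e-3):
    series = np.asarray(series, dtype=float)
    X = np.array([series[i - LAG:i] for i in range(LAG, len(series))])
    y = series[LAG:]
    model = SVR(kernel="rbf", C=10.0, epsilon=epsilon).fit(X, y)
    bound = np.abs(model.predict(X) - y).max()   # largest in-sample residual
    return model, bound

def is_sdc(model, bound, recent_values, observed, slack=3.0):
    """Flag `observed` if it deviates from the prediction by more than slack * bound."""
    window = np.asarray(recent_values[-LAG:], dtype=float).reshape(1, -1)
    return abs(observed - model.predict(window)[0]) > slack * bound
```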
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, as well as recommendations on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Improving the performance of univariate control charts for abnormal detection and classification
NASA Astrophysics Data System (ADS)
Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis
2017-03-01
Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken in time. Therefore, it is of prime importance to detect accurately the presence of faults, especially at their early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to obtaining an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, a number of novelty detection methods, able to work when only normal data are available, have been developed over the last decade. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, focusing on abnormal change detection and classification under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.
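The control-chart side of the approach can be sketched as follows: individuals (Shewhart) limits are estimated from a feature computed on healthy-condition signals, and new feature values are flagged when they fall outside those limits. The morphological/Morlet feature extraction itself is not reproduced here, and the simulated feature values are placeholders.

```python
# Individuals control chart on a condition-monitoring feature (sketch).
import numpy as np

def individuals_limits(healthy_feature):
    x = np.asarray(healthy_feature, dtype=float)
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128   # moving-range estimate (d2 for n = 2)
    centre = x.mean()
    return centre - 3 * sigma_hat, centre, centre + 3 * sigma_hat

def out_of_control(feature_values, limits):
    lcl, _, ucl = limits
    x = np.asarray(feature_values, dtype=float)
    return (x < lcl) | (x > ucl)                    # boolean mask of alarms

rng = np.random.default_rng(1)
healthy = rng.normal(1.0, 0.05, 200)                # e.g. a wavelet-energy feature
test = np.concatenate([rng.normal(1.0, 0.05, 50), rng.normal(1.4, 0.05, 50)])
print(out_of_control(test, individuals_limits(healthy)).sum(), "samples flagged")
```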
ERIC Educational Resources Information Center
Ermeling, Bradley Alan
2012-01-01
Past and contemporary scholars have emphasized the importance of job-embedded, systematic instructional inquiry for educators. A recent review of the literature highlights four key features shared by several well documented inquiry approaches for classroom teachers. Interestingly, another line of research suggests that these key features also…
ERIC Educational Resources Information Center
Dunst, Carl J.
2015-01-01
A model for designing and implementing evidence-based in-service professional development in early childhood intervention as well as the key features of the model are described. The key features include professional development specialist (PDS) description and demonstration of an intervention practice, active and authentic job-embedded…
Salient Key Features of Actual English Instructional Practices in Saudi Arabia
ERIC Educational Resources Information Center
Al-Seghayer, Khalid
2015-01-01
This is a comprehensive review of the salient key features of the actual English instructional practices in Saudi Arabia. The goal of this work is to gain insights into the practices and pedagogic approaches to English as a foreign language (EFL) teaching currently employed in this country. In particular, we identify the following central features…
ERIC Educational Resources Information Center
Jung, Youngok; Zuniga, Stephen; Howes, Carollee; Jeon, Hyun-Joo; Parrish, Deborah; Quick, Heather; Manship, Karen; Hauser, Alison
2016-01-01
Noting the lack of research on how early childhood education (ECE) programmes within family literacy programmes influence Latino children's early language and literacy development, this study examined key features of ECE programmes, specifically teacher-child interactions and child engagement in language and literacy activities and how these…
Detection of pesticide (Cyantraniliprole) residue on grapes using hyperspectral sensing
NASA Astrophysics Data System (ADS)
Mohite, Jayantrao; Karale, Yogita; Pappula, Srinivasu; Shabeer, Ahammed T. P.; Sawant, S. D.; Hingmire, Sandip
2017-05-01
Pesticide residues in fruits, vegetables and agricultural commodities are harmful to humans and are becoming a health concern nowadays. Detection of pesticide residues on various commodities in an open environment is a challenging task. Hyperspectral sensing is one of the recent technologies used to detect pesticide residues. This paper addresses the problem of detection of residues of the pesticide Cyantraniliprole on grapes in open fields using multi-temporal hyperspectral remote sensing data. The reflectance data of 686 samples of grapes with no, single and double dose application of Cyantraniliprole were collected by a handheld spectroradiometer (MS-720) with a wavelength range of 350 nm to 1052 nm. The data collection was carried out over a large feature set of 213 spectral bands during the period of March to May 2015. This large feature set may cause model over-fitting as well as increase the computational time, so in order to obtain the most relevant features, various feature selection techniques, viz. Principal Component Analysis (PCA), LASSO and Elastic Net regularization, have been used. Using these selected features, we evaluate the performance of various classifiers such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), Random Forest (RF) and Extreme Gradient Boosting (XGBoost) to classify the grape samples with no, single or double application of Cyantraniliprole. The key finding of this paper is that most of the features selected by LASSO lie between 350-373 nm and 940-990 nm consistently for all days. Experimental results also show that, using the relevant features selected by LASSO, SVM performs best among all classifiers, with an average prediction accuracy of 91.98% across all days.
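The band-selection and classification chain described above might be prototyped as below, with LASSO picking informative wavelengths and an SVM trained on the surviving bands; the regularisation strength and SVM hyperparameters are placeholders, not the values used in the study.

```python
# LASSO band selection followed by an SVM classifier (sketch with assumed settings).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dose_classification_accuracy(X, y):
    """X: (n_samples, 213) reflectance spectra; y: 0 = none, 1 = single, 2 = double dose."""
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectFromModel(Lasso(alpha=0.01, max_iter=10_000))),
        ("clf", SVC(kernel="rbf", C=10.0, gamma="scale")),
    ])
    return cross_val_score(pipeline, X, y, cv=5).mean()
```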
NASA Astrophysics Data System (ADS)
Wang, Z.; Li, T.; Pan, L.; Kang, Z.
2017-09-01
With increasing attention on the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. Depth information can help to distinguish regions which are difficult to segment from RGB images with similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an Encoder-Decoder Fully Convolutional Network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and specific features of the RGB and depth images in the network to enhance classification performance automatically. To explore better methods of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD over the whole batch of features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method can achieve a good performance on indoor RGB-D image semantic segmentation.
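The MMD term at the heart of the approach can be written down compactly; the sketch below computes a biased RBF-kernel estimate of MMD^2 between RGB and depth feature batches and averages it over a few bandwidths as a simple multi-kernel variant. The bandwidth set is an assumption, not the kernels used in the paper.

```python
# RBF-kernel MMD^2 between two feature batches, plus a simple multi-kernel average.
import torch

def rbf_kernel(a, b, sigma):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of MMD^2 between (batch, dim) feature matrices x and y."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

def mk_mmd2(x, y, sigmas=(0.5, 1.0, 2.0, 4.0)):
    return sum(mmd2(x, y, s) for s in sigmas) / len(sigmas)

rgb_feats = torch.randn(16, 64)    # stand-ins for flattened feature maps
depth_feats = torch.randn(16, 64)
print(mk_mmd2(rgb_feats, depth_feats).item())
```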
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is a key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of data. The method processes the data from two aspects: the attribute features and the sample dimension. After processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we extract the statistical characteristics of the original bearing signals effectively. Then, the Decision Tree algorithm is used to select the important features of the time domain signal, and the low correlation features are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low correlation data. The experimental results show that the independence enhancement of the data is effective for bearing fault diagnosis. PMID:29401730
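The statistical-feature, tree-based selection and naive Bayes chain can be sketched as follows; the SSVM pruning step is omitted, and the feature list, tree depth and cross-validation are assumptions rather than the authors' settings.

```python
# Time-domain statistics -> decision-tree feature selection -> Gaussian naive Bayes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

def time_domain_features(window):
    x = np.asarray(window, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([x.mean(), x.std(), rms,
                     np.abs(x).max() / rms,                        # crest factor
                     ((x - x.mean()) ** 3).mean() / x.std() ** 3,  # skewness
                     ((x - x.mean()) ** 4).mean() / x.std() ** 4]) # kurtosis

def bearing_diagnosis_accuracy(windows, labels):
    """windows: list of vibration segments; labels: bearing condition per segment."""
    X = np.stack([time_domain_features(w) for w in windows])
    model = Pipeline([
        ("select", SelectFromModel(DecisionTreeClassifier(max_depth=5, random_state=0))),
        ("nb", GaussianNB()),
    ])
    return cross_val_score(model, X, labels, cv=5).mean()
```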
NASA Technical Reports Server (NTRS)
1992-01-01
A major innovation of the Civil Service Reform Act of 1978 was the creation of a Senior Executive Service (SES). The purpose of the SES is both simple and bold: to attract executives of the highest quality into Federal service and to retain them by providing outstanding opportunities for career growth and reward. The SES is intended to: provide greater authority in managing executive resources; attract and retain highly competent executives, and assign them where they will effectively accomplish their missions and best use their talents; provide for systematic development of executives; hold executives accountable for individual and organizational performance; reward outstanding performers and remove poor performers; and provide for an executive merit system free of inappropriate personnel practices and arbitrary actions. This Handbook summarizes the key features of the SES at NASA. It is intended as a special welcome to new appointees and also as a general reference document. It contains an overview of SES management at NASA, including the Executive Resources Board and the Performance Review Board, which are mandated by law to carry out key SES functions. In addition, assistance is provided by a Senior Executive Committee in certain reviews and decisions and by Executive Position Managers in day-to-day administration and oversight.
The role of health informatics in clinical audit: part of the problem or key to the solution?
Georgiou, Andrew; Pearson, Michael
2002-05-01
The concepts of quality assurance (for which clinical audit is an essential part), evaluation and clinical governance each depend on the ability to derive and record measurements that describe clinical performance. Rapid IT developments have raised many new possibilities for managing health care. They have allowed for easier collection and processing of data in greater quantities. These developments have encouraged the growth of quality assurance as a key feature of health care delivery. In the past most of the emphasis has been on hospital information systems designed predominantly for the administration of patients and the management of financial performance. Large, hi-tech information system capacity does not guarantee quality information. The task of producing information that can be confidently used to monitor the quality of clinical care requires attention to key aspects of the design and operation of the audit. The Myocardial Infarction National Audit Project (MINAP) utilizes an IT-based system to collect and process data on large numbers of patients and make them readily available to contributing hospitals. The project shows that IT systems that employ rigorous health informatics methodologies can do much to improve the monitoring and provision of health care.
Simmering, Vanessa R; Wood, Chelsey M
2017-08-01
Working memory is a basic cognitive process that predicts higher-level skills. A central question in theories of working memory development is the generality of the mechanisms proposed to explain improvements in performance. Prior theories have been closely tied to particular tasks and/or age groups, limiting their generalizability. The cognitive dynamics theory of visual working memory development has been proposed to overcome this limitation. From this perspective, developmental improvements arise through the coordination of cognitive processes to meet demands of different behavioral tasks. This notion is described as real-time stability, and can be probed through experiments that assess how changing task demands impact children's performance. The current studies test this account by probing visual working memory for colors and shapes in a change detection task that compares detection of changes to new features versus swaps in color-shape binding. In Experiment 1, 3- to 4-year-old children showed impairments specific to binding swaps, as predicted by decreased real-time stability early in development; 5- to 6-year-old children showed a slight advantage on binding swaps, but 7- to 8-year-old children and adults showed no difference across trial types. Experiment 2 tested the proposed explanation of young children's binding impairment through added perceptual structure, which supported the stability and precision of feature localization in memory-a process key to detecting binding swaps. This additional structure improved young children's binding swap detection, but not new-feature detection or adults' performance. These results provide further evidence for the cognitive dynamics and real-time stability explanation of visual working memory development. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Reversed stereo depth and motion direction with anti-correlated stimuli.
Read, J C; Eagle, R A
2000-01-01
We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1d band-pass noise, we found weak reversed depth and motion. With 2d anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientation. We suggest that a key difference between the solution of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.
Space Shuttle propulsion performance reconstruction from flight data
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
This paper describes the application of extended Kalman filtering to estimating Space Shuttle Solid Rocket Booster (SRB) performance (specific impulse) from flight data in a post-flight processing computer program. The flight data used include inertial platform acceleration, SRB head pressure, and ground based radar tracking data. The key feature in this application is the model used for the SRBs, which represents a reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are included.
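The filtering machinery behind such a reconstruction follows the standard extended Kalman predict/update cycle sketched below; the process and measurement models f and h (and their Jacobians) stand in for the quasi-static SRB ballistics and sensor models, which are not reproduced here.

```python
# Generic extended Kalman filter step (sketch; f, h and Jacobians are user-supplied).
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    # predict through the (possibly nonlinear) process model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # update with the measurement z through the measurement model
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```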
Livermore Big Artificial Neural Network Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Essen, Brian Van; Jacobs, Sam; Kim, Hyojin
2016-07-01
LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.
By-Pass Diode Temperature Tests of a Solar Array Coupon under Space Thermal Environment Conditions
NASA Technical Reports Server (NTRS)
Wright, Kenneth H.; Schneider, Todd A.; Vaughn, Jason A.; Hoang, Bao; Wong, Frankie; Wu, Gordon
2016-01-01
By-Pass diodes are a key design feature of solar arrays and system design must be robust against local heating, especially with implementation of larger solar cells. By-Pass diode testing was performed to aid thermal model development for use in future array designs that utilize larger cell sizes that result in higher string currents. Testing was performed on a 56-cell Advanced Triple Junction solar array coupon provided by SSL. Test conditions were vacuum with cold array backside using discrete by-pass diode current steps of 0.25 A ranging from 0 A to 2.0 A.
CWRF performance at downscaling China climate characteristics
NASA Astrophysics Data System (ADS)
Liang, Xin-Zhong; Sun, Chao; Zheng, Xiaohui; Dai, Yongjiu; Xu, Min; Choi, Hyun I.; Ling, Tiejun; Qiao, Fengxue; Kong, Xianghui; Bi, Xunqiang; Song, Lianchun; Wang, Fang
2018-05-01
The performance of the regional Climate-Weather Research and Forecasting model (CWRF) for downscaling China climate characteristics is evaluated using a 1980-2015 simulation at 30 km grid spacing driven by the ECMWF Interim reanalysis (ERI). It is shown that CWRF outperforms the popular Regional Climate Modeling system (RegCM4.6) in key features including monsoon rain bands, diurnal temperature ranges, surface winds, interannual precipitation and temperature anomalies, humidity couplings, and 95th percentile daily precipitation. Even compared with ERI, which assimilates surface observations, CWRF better represents the geographic distributions of seasonal mean climate and extreme precipitation. These results indicate that CWRF may significantly enhance China climate modeling capabilities.
A data-driven multiplicative fault diagnosis approach for automation processes.
Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo
2014-09-01
This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.
Prototypes for Content-Based Image Retrieval in Clinical Practice
Depeursinge, Adrien; Fischer, Benedikt; Müller, Henning; Deserno, Thomas M
2011-01-01
Content-based image retrieval (CBIR) has been proposed as a key technology for computer-aided diagnostics (CAD). This paper reviews the state of the art and future challenges in CBIR for CAD applied to clinical practice. We define applicability to clinical practice as having recently demonstrated the CBIR system at one of the CAD demonstration workshops held at international conferences, such as SPIE Medical Imaging, CARS, SIIM, RSNA, and IEEE ISBI. From 2009 to 2011, the programs of CADdemo@CARS and the CAD Demonstration Workshop at SPIE Medical Imaging were searched for the keyword "retrieval" in the title. The systems identified were analyzed and compared according to the hierarchy of gaps for CBIR systems. In total, 70 software demonstrations were analyzed, and 5 systems were identified as meeting the criteria. The fields of application are (i) bone age assessment, (ii) bone fractures, (iii) interstitial lung diseases, and (iv) mammography. Bridging the particular gaps of semantics, feature extraction, feature structure, and evaluation has been addressed most frequently. In specific application domains, CBIR technology is available for clinical practice. While system development has mainly focused on bridging content and feature gaps, performance and usability have become increasingly important. The evaluation must be based on a larger set of reference data, and workflow integration must be achieved before CBIR-CAD is really established in clinical practice. PMID:21892374
The Porifera Ontology (PORO): enhancing sponge systematics with an anatomy ontology.
Thacker, Robert W; Díaz, Maria Cristina; Kerner, Adeline; Vignes-Lebbe, Régine; Segerdell, Erik; Haendel, Melissa A; Mungall, Christopher J
2014-01-01
Porifera (sponges) are ancient basal metazoans that lack organs. They provide insight into key evolutionary transitions, such as the emergence of multicellularity and the nervous system. In addition, their ability to synthesize unusual compounds offers potential biotechnical applications. However, much of the knowledge of these organisms has not previously been codified in a machine-readable way using modern web standards. The Porifera Ontology is intended as a standardized coding system for sponge anatomical features currently used in systematics. The ontology is available from http://purl.obolibrary.org/obo/poro.owl, or from the project homepage http://porifera-ontology.googlecode.com/. The version referred to in this manuscript is permanently available from http://purl.obolibrary.org/obo/poro/releases/2014-03-06/. By standardizing character representations, we hope to facilitate more rapid description and identification of sponge taxa, to allow integration with other evolutionary database systems, and to perform character mapping across the major clades of sponges to better understand the evolution of morphological features. Future applications of the ontology will focus on creating (1) ontology-based species descriptions; (2) taxonomic keys that use the nested terms of the ontology to more quickly facilitate species identifications; and (3) methods to map anatomical characters onto molecular phylogenies of sponges. In addition to modern taxa, the ontology is being extended to include features of fossil taxa.
Dzialak, Matthew R.; Olson, Chad V.; Harju, Seth M.; Webb, Stephen L.; Mudd, James P.; Winstead, Jeffrey B.; Hayden-Wing, L.D.
2011-01-01
Background Balancing animal conservation and human use of the landscape is an ongoing scientific and practical challenge throughout the world. We investigated reproductive success in female greater sage-grouse (Centrocercus urophasianus) relative to seasonal patterns of resource selection, with the larger goal of developing a spatially-explicit framework for managing human activity and sage-grouse conservation at the landscape level. Methodology/Principal Findings We integrated field-observation, Global Positioning Systems telemetry, and statistical modeling to quantify the spatial pattern of occurrence and risk during nesting and brood-rearing. We linked occurrence and risk models to provide spatially-explicit indices of habitat-performance relationships. As part of the analysis, we offer novel biological information on resource selection during egg-laying, incubation, and night. The spatial pattern of occurrence during all reproductive phases was driven largely by selection or avoidance of terrain features and vegetation, with little variation explained by anthropogenic features. Specifically, sage-grouse consistently avoided rough terrain, selected for moderate shrub cover at the patch level (within 90 m2), and selected for mesic habitat in mid and late brood-rearing phases. In contrast, risk of nest and brood failure was structured by proximity to anthropogenic features including natural gas wells and human-created mesic areas, as well as vegetation features such as shrub cover. Conclusions/Significance Risk in this and perhaps other human-modified landscapes is a top-down (i.e., human-mediated) process that would most effectively be minimized by developing a better understanding of specific mechanisms (e.g., predator subsidization) driving observed patterns, and using habitat-performance indices such as those developed herein for spatially-explicit guidance of conservation intervention. Working under the hypothesis that industrial activity structures risk by enhancing predator abundance or effectiveness, we offer specific recommendations for maintaining high-performance habitat and reducing low-performance habitat, particularly relative to the nesting phase, by managing key high-risk anthropogenic features such as industrial infrastructure and water developments. PMID:22022587
Guided Search for Triple Conjunctions
Nordfang, Maria; Wolfe, Jeremy M
2017-01-01
A key tenet of Feature Integration Theory and related theories such as Guided Search (GS) is that the binding of basic features requires attention. This would seem to predict that conjunctions of features of objects that have not been attended should not influence search. However, Found (1998) reported that an irrelevant feature (size) improved the efficiency of search for a color × orientation conjunction if it was correlated with the other two features across the display compared to the case where size was not correlated with color and orientation features. We examine this issue with somewhat different stimuli. We use triple conjunctions of color, orientation and shape (e.g. search for a red, vertical, oval-shaped item). This allows us to manipulate the number of features that each distractor shares with the target (Sharing) and it allows us to vary the total number of distractor types (and, thus, the number of groups of identical items; Grouping). We find these triple conjunction searches are generally very efficient – producing very shallow reaction time (RT) × set size slopes, consistent with strong guidance by basic features. Nevertheless, both of these variables, Sharing and Grouping modulate performance. These influences are not predicted by previous accounts of GS. However, both can be accommodated in a GS framework. Alternatively, it is possible, if not necessary, to see these effects as evidence for “preattentive binding” of conjunctions. PMID:25005070
Guided search for triple conjunctions.
Nordfang, Maria; Wolfe, Jeremy M
2014-08-01
A key tenet of feature integration theory and of related theories such as guided search (GS) is that the binding of basic features requires attention. This would seem to predict that conjunctions of features of objects that have not been attended should not influence search. However, Found (1998) reported that an irrelevant feature (size) improved the efficiency of search for a Color × Orientation conjunction if it was correlated with the other two features across the display, as compared to the case in which size was not correlated with color and orientation features. We examined this issue with somewhat different stimuli. We used triple conjunctions of color, orientation, and shape (e.g., search for a red, vertical, oval-shaped item). This allowed us to manipulate the number of features that each distractor shared with the target (sharing) and it allowed us to vary the total number of distractor types (and, thus, the number of groups of identical items: grouping). We found that these triple conjunction searches were generally very efficient--producing very shallow Reaction Time × Set Size slopes, consistent with strong guidance by basic features. Nevertheless, both of the variables, sharing and grouping, modulated performance. These influences were not predicted by previous accounts of GS; however, both can be accommodated in a GS framework. Alternatively, it is possible, though not necessary, to see these effects as evidence for "preattentive binding" of conjunctions.
Diagnostic methodology for incipient system disturbance based on a neural wavelet approach
NASA Astrophysics Data System (ADS)
Won, In-Ho
Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from a system disturbance can be neglected or identified as noise. Because the available knowledge and information are obtained incompletely or inexactly from the measurements, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergy of wavelets and neural networks presents more strengths and fewer weaknesses than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features that apply the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Also, through comparisons among the three proposed feature vectors and with statistical techniques, it is shown that the variance feature extractor provides the best approach in the applications performed.
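The wavelet feature extractor idea can be sketched as below: each signal is decomposed with a discrete wavelet transform and one variance statistic per subband forms the concise feature vector fed to a neural network. The wavelet family, decomposition depth and the MLP standing in for the dissertation's network are assumptions.

```python
# Wavelet variance feature vectors feeding a small neural-network classifier (sketch).
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_variance_features(signal, wavelet="db4", levels=5):
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=levels)
    return np.array([np.var(c) for c in coeffs])     # one feature per subband

def train_disturbance_classifier(signals, labels):
    """signals: iterable of 1-D sensor traces; labels: e.g. normal vs cavitation."""
    X = np.stack([wavelet_variance_features(s) for s in signals])
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, labels)
```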
Bidirectional RNN for Medical Event Detection in Electronic Health Records.
Jagannatha, Abhyuday N; Yu, Hong
2016-06-01
Sequence labeling for extraction of medical events and their attributes from unstructured text in Electronic Health Record (EHR) notes is a key step towards semantic understanding of EHRs. It has important applications in health informatics, including pharmacovigilance and drug surveillance. The state-of-the-art supervised machine learning models in this domain are based on Conditional Random Fields (CRFs) with features calculated from fixed context windows. In this application, we explored recurrent neural network frameworks and show that they significantly outperformed the CRF models.
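A minimal bidirectional LSTM tagger of the kind the abstract compares against CRFs is sketched below; the vocabulary size, tag set and dimensions are toy values, not the authors' architecture.

```python
# Toy bidirectional LSTM sequence labeller for per-token medical event tags.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)   # forward + backward states

    def forward(self, token_ids):                        # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))          # (batch, seq_len, 2*hidden)
        return self.out(h)                               # per-token tag scores

model = BiLSTMTagger(vocab_size=20_000, num_tags=9)      # e.g. BIO tags over EHR tokens
tokens = torch.randint(0, 20_000, (2, 30))               # dummy batch of token ids
print(model(tokens).shape)                               # torch.Size([2, 30, 9])
```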
NASA Astrophysics Data System (ADS)
Ward, M. C. L.; McNie, Mark E.; Bunyan, Robert J.; King, David O.; Carline, Roger T.; Wilson, Rebecca; Gillham, J. P.
1998-09-01
We review some of the attractive attributes of microengineering and relate them to features of the highly successful silicon microelectronics industry. We highlight the need for cost effective functionality rather than ultimate performance as a driver for success and review key examples of polysilicon devices from this point of view. The effective exploitation of the data generated by cost effective polysilicon sensors is also considered, and we conclude that 'non-traditional' data analysis will need to be employed if full use is to be made of polysilicon devices.
ERIC Educational Resources Information Center
Packard, Richard D.; Dereshiwsky, Mary I.
This paper presents research findings concerning the Career Ladder pilot test program in Arizona. The program is designed to reward and motivate teachers based on performance. One of the program's key features is the flexibility and innovation allowed to participating districts in their individual development of program designs and structures. An…
NASA Astrophysics Data System (ADS)
Waltham, Chris
1999-07-01
A simple analysis is performed on the flight of a small balsa toy glider. All the basic features of flight have to be included in the calculation. Key differences between the flight of small objects like the glider, and full-sized aircraft, are examined. Good agreement with experimental data is obtained when only one parameter, the drag coefficient, is allowed to vary. The experimental drag coefficient is found to be within a factor of 2 of that obtained using the theory of ideal flat plates.
2013-06-01
high-performance contact adhesive (baseline) can be used to bond most rubber, cloth, metal, wood, foamed glass, paper honeycomb, decorative plastic, and gasket adhesive (baseline) may be used to bond metal, wood, most plastics, neoprene, SBR, and butyl rubber (11). Key features are high immediate...nitrile rubber, most plastics and gasketing materials to a variety of substrates (13). This product contains 0% HAPs (14) and has been added to the
Realtime Decision Making on EO-1 Using Onboard Science Analysis
NASA Technical Reports Server (NTRS)
Sherwood, Robert; Chien, Steve; Davies, Ashley; Mandl, Dan; Frye, Stu
2004-01-01
Recent autonomy experiments conducted on Earth Observing 1 (EO-1) have used the Autonomous Sciencecraft Experiment (ASE) flight software to classify key features in hyperspectral images captured by EO-1. Furthermore, this analysis is performed by the software onboard EO-1 and is then used to modify the operational plan without interaction from the ground. This paper will outline the overall operations concept and provide some details and examples of the onboard science processing, science analysis, and replanning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamaguchi, A.
To obtain much higher performance than that of alternative power transmission systems, hydraulic systems have been continuously evolving to use higher pressures. The adoption of positive displacement pumps and motors is based on this reason. Tribology is therefore a key consideration for hydraulic pumps and motors in achieving excellent performance and durability. In this paper the following topics are investigated: (1) the special features of tribology in hydraulic pumps and motors; (2) identification of the important bearing/sealing parts in piston pumps and the effects of frictional force and leakage flow on performance; (3) methods to break through the tribological limitations of hydraulic equipment; and (4) optimum design of the bearing/sealing parts used in the fluid to mixed lubrication regions.
How important is vehicle safety in the new vehicle purchase process?
Koppel, Sjaanie; Charlton, Judith; Fildes, Brian; Fitzharris, Michael
2008-05-01
Whilst there has been a significant increase in the amount of consumer interest in the safety performance of privately owned vehicles, the role that it plays in consumers' purchase decisions is poorly understood. The aims of the current study were to determine: how important vehicle safety is in the new vehicle purchase process; what importance consumers place on safety options/features relative to other convenience and comfort features, and how consumers conceptualise vehicle safety. In addition, the study aimed to investigate the key parameters associated with ranking 'vehicle safety' as the most important consideration in the new vehicle purchase. Participants recruited in Sweden and Spain completed a questionnaire about their new vehicle purchase. The findings from the questionnaire indicated that participants ranked safety-related factors (e.g., EuroNCAP (or other) safety ratings) as more important in the new vehicle purchase process than other vehicle factors (e.g., price, reliability etc.). Similarly, participants ranked safety-related features (e.g., advanced braking systems, front passenger airbags etc.) as more important than non-safety-related features (e.g., route navigation systems, air-conditioning etc.). Consistent with previous research, most participants equated vehicle safety with the presence of specific vehicle safety features or technologies rather than vehicle crash safety/test results or crashworthiness. The key parameters associated with ranking 'vehicle safety' as the most important consideration in the new vehicle purchase were: use of EuroNCAP, gender and education level, age, drivers' concern about crash involvement, first vehicle purchase, annual driving distance, person for whom the vehicle was purchased, and traffic infringement history. The findings from this study are important for policy makers, manufacturers and other stakeholders to assist in setting priorities with regard to the promotion and publicity of vehicle safety features for particular consumer groups (such as younger consumers) in order to increase their knowledge regarding vehicle safety and to encourage them to place highest priority on safety in the new vehicle purchase process.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-15
... (EPO) as the lead, to propose a revised standard for the filing of nucleotide and/or amino acid.... ST.25 uses a controlled vocabulary of feature keys to describe nucleic acid and amino acid sequences... patent data purposes. The XML standard also includes four qualifiers for amino acids. These feature keys...
Crafting your Elevator Pitch: Key Features of an Elevator Speech to Help You Reach the Top Floor
You never know when you will end up talking to someone who will end up helping to shape your career. Many of these chance meetings are brief and when you only get 2-3 minutes to make your case everything that you say has to count. This presentation will cover the key features o...
A group filter algorithm for sea mine detection
NASA Astrophysics Data System (ADS)
Cobb, J. Tory; An, Myoung; Tolimieri, Richard
2005-06-01
Automatic detection of sea mines in coastal regions is a difficult task due to the highly variable sea bottom conditions present in the underwater environment. Detection systems must be able to discriminate objects which vary in size, shape, and orientation from naturally occurring and man-made clutter. Additionally, these automated systems must be computationally efficient to be incorporated into unmanned underwater vehicle (UUV) sensor systems characterized by high sensor data rates and limited processing abilities. Using noncommutative group harmonic analysis, a fast, robust sea mine detection system is created. A family of unitary image transforms associated to noncommutative groups is generated and applied to side scan sonar image files supplied by Naval Surface Warfare Center Panama City (NSWC PC). These transforms project key image features, geometrically defined structures with orientations, and localized spectral information into distinct orthogonal components or feature subspaces of the image. The performance of the detection system is compared against the performance of an independent detection system in terms of probability of detection (Pd) and probability of false alarm (Pfa).
Actual vs perceived performance debriefing in surgery: practice far from perfect.
Ahmed, Maria; Sevdalis, Nick; Vincent, Charles; Arora, Sonal
2013-04-01
Performance feedback or debriefing in surgery is increasingly recognized as an essential means to optimize learning in the operating room (OR). However, there is a lack of evidence regarding the current practice and barriers to debriefing in the OR. Phase 1 consisted of semistructured interviews with surgical trainers and trainees to identify features of an effective debriefing and perceived barriers to debriefing. Phase 2 consisted of ethnographic observations of surgical cases to identify current practice and observed barriers to debriefing. Surgical trainers and trainees identified key features of effective debriefing with regard to the approach and content; however, these were not commonly identified in practice. Culture was recognized as a significant barrier to debriefing across both phases of the study. There is a disparity between what the surgical community views as effective debriefing and actual debriefing practices in the OR. Improvements to the current debriefing culture and practice within the field of surgery should be considered to facilitate learning from clinical practice. Copyright © 2013. Published by Elsevier Inc.
Ares-I-X Stability and Control Flight Test: Analysis and Plans
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Derry, Stephen D.; Heim, Eugene H.; Hueschen, Richard M.; Bacon, Barton J.
2008-01-01
The flight test of the Ares I-X vehicle provides a unique opportunity to reduce risk of the design of the Ares I vehicle and test out design, math modeling, and analysis methods. One of the key features of the Ares I design is the significant static aerodynamic instability coupled with the relatively flexible vehicle - potentially resulting in a challenging controls problem to provide adequate flight path performance while also providing adequate structural mode damping and preventing adverse control coupling to the flexible structural modes. Another challenge is to obtain enough data from the single flight to be able to conduct analysis showing the effectiveness of the controls solutions and have data to inform design decisions for Ares I. This paper will outline the modeling approaches and control system design to conduct this flight test, and also the system identification techniques developed to extract key information such as control system performance (gain/phase margins, for example), structural dynamics responses, and aerodynamic model estimations.
Chaisangmongkon, Warasinee; Swaminathan, Sruthi K.; Freedman, David J.; Wang, Xiao-Jing
2017-01-01
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks elucidates that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of trajectories are governed by network self-organized connectivity, defining a ‘neural landscape’, consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally relevant circuit motifs and generalize the framework to solve other categorization tasks. PMID:28334612
Investigation of HV/HR-CMOS technology for the ATLAS Phase-II Strip Tracker Upgrade
NASA Astrophysics Data System (ADS)
Fadeyev, V.; Galloway, Z.; Grabas, H.; Grillo, A. A.; Liang, Z.; Martinez-Mckinney, F.; Seiden, A.; Volk, J.; Affolder, A.; Buckland, M.; Meng, L.; Arndt, K.; Bortoletto, D.; Huffman, T.; John, J.; McMahon, S.; Nickerson, R.; Phillips, P.; Plackett, R.; Shipsey, I.; Vigani, L.; Bates, R.; Blue, A.; Buttar, C.; Kanisauskas, K.; Maneuski, D.; Benoit, M.; Di Bello, F.; Caragiulo, P.; Dragone, A.; Grenier, P.; Kenney, C.; Rubbo, F.; Segal, J.; Su, D.; Tamma, C.; Das, D.; Dopke, J.; Turchetta, R.; Wilson, F.; Worm, S.; Ehrler, F.; Peric, I.; Gregor, I. M.; Stanitzki, M.; Hoeferkamp, M.; Seidel, S.; Hommels, L. B. A.; Kramberger, G.; Mandić, I.; Mikuž, M.; Muenstermann, D.; Wang, R.; Zhang, J.; Warren, M.; Song, W.; Xiu, Q.; Zhu, H.
2016-09-01
ATLAS has formed a strip CMOS project to study the use of CMOS MAPS devices as silicon strip sensors for the Phase-II Strip Tracker Upgrade. This choice of sensors promises several advantages over the conventional baseline design, such as better resolution, less material in the tracking volume, and faster construction speed. At the same time, many design features of the sensors are driven by the requirement of minimizing the impact on the rest of the detector. Hence the target devices feature long pixels which are grouped to form a virtual strip with a binary-encoded z position. The key performance aspects are radiation hardness compatibility with the HL-LHC environment, as well as extraction of the full hit position with a full-reticle readout architecture. To date, several test chips have been submitted using two different CMOS technologies. The AMS 350 nm process is a high-voltage CMOS (HV-CMOS) technology that supports a sensor bias of up to 120 V. The TowerJazz 180 nm high-resistivity CMOS (HR-CMOS) process uses a high-resistivity epitaxial layer to provide the depletion region on top of the substrate. We have evaluated passive pixel performance and charge collection projections. The results strongly support the tolerance of these devices to the radiation dose of the HL-LHC in the strip tracker region. We also describe design features for the next chip submission that are motivated by our technology evaluation.
Robertson, Sam; Gupta, Ritu; McIntosh, Sam
2016-10-01
This study developed a method to determine whether the distribution of individual player performances can be modelled to explain match outcome in team sports, using Australian Rules football as an example. Player-recorded values (converted to a percentage of team total) in 11 commonly reported performance indicators were obtained for all regular season matches played during the 2014 Australian Football League season, with team totals also recorded. Multiple features relating to heuristically determined percentiles for each performance indicator were then extracted for each team and match, along with the outcome (win/loss). A generalised estimating equation model comprising eight key features was developed, explaining match outcome at a median accuracy of 63.9% under 10-fold cross-validation. Lower 75th, 90th and 95th percentile values for team goals and higher 25th and 50th percentile values for disposals were linked with winning. Lower 95th and higher 25th percentile values for Inside 50s and Marks, respectively, were also important contributors. These results provide evidence supporting team strategies which aim to obtain an even spread of goal scorers in Australian Rules football. The method developed in this investigation could be used to quantify the importance of individual contributions to overall team performance in team sports.
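As an illustration of the percentile-feature idea described above, the sketch below builds toy team-share percentiles for a single indicator and fits a generalised estimating equation with statsmodels; the data, column names, and indicator are synthetic stand-ins for the 11 indicators and cross-validation used in the study:

```python
# Sketch: convert each player's value to a share of the team total, take percentiles
# of that per-team-per-match distribution, and relate them to win/loss with a GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
rows = []
for match in range(50):
    for team in (0, 1):
        goals = rng.poisson(1.5, size=22)          # 22 players, toy "goals"
        share = goals / max(goals.sum(), 1)        # percentage of team total
        rows.append({
            "match": match,
            "win": int(team == 0),                 # toy outcome label
            "goals_p75": np.percentile(share, 75),
            "goals_p95": np.percentile(share, 95),
        })
df = pd.DataFrame(rows)

# GEE with matches as the grouping structure (both teams of a match are correlated)
gee = sm.GEE(df["win"], sm.add_constant(df[["goals_p75", "goals_p95"]]),
             groups=df["match"], family=sm.families.Binomial())
print(gee.fit().summary())
```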
Crandall, William; Bentzen, Billie Louise; Myers, Linda; Brabyn, John
2001-05-01
BACKGROUND: For a blind or visually impaired person, a vital prerequisite to accessing any feature of the built environment is being able to find that feature. Braille signs, even where available, do not replace the functions of print signage because they cannot be read from a distance. Remotely readable infrared signs utilise spoken infrared message transmissions to label key environmental features, so that a blind person with a suitable receiver can locate and identify them from a distance. METHODS: Three problems that are among the most challenging and dangerous faced by blind travellers are negotiating complex transit stations, locating bus stops and safely and efficiently crossing light-controlled intersections. We report the results of human factors studies using a remote infrared audible sign system (RIAS), Talking Signs(R), in these critical tasks, examining issues such as the amount of training needed to use the system, its impact on performance and safety, benefits for different population subgroups and user opinions of its value. RESULTS: Results are presented in the form of both objective performance measures and subjects' ratings of the usefulness of the system in performing these tasks. Findings are that blind people can quickly and easily learn to use remote infrared audible signage effectively and that its use improves travel safety, efficiency and independence. CONCLUSIONS: The technology provides equal access to a wide variety of public facilities.
Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe
2016-01-01
Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state of the art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data with a mean Se of 96.6%, Sp of 98.8% and BER 2.2% compared to a mean Se of 94.7%, Sp of 96.5% and BER 4.4% for OHCA data. OHCA data required two times more features than the data from public databases for an accurate detection (6 vs 3). No significant differences in performance were found for different segment lengths, the BER differences were below 0.5-points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4-s. PMID:27441719
Figuera, Carlos; Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe
2016-01-01
Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state of the art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data with a mean Se of 96.6%, Sp of 98.8% and BER 2.2% compared to a mean Se of 94.7%, Sp of 96.5% and BER 4.4% for OHCA data. OHCA data required two times more features than the data from public databases for an accurate detection (6 vs 3). No significant differences in performance were found for different segment lengths, the BER differences were below 0.5-points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4-s.
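The two records above evaluate VF detectors with patient-wise bootstrapping and report sensitivity, specificity, and balanced error rate. A minimal sketch of that evaluation loop on synthetic labels and predictions (the 30 ECG features and the ML models themselves are outside this snippet) could be:

```python
# Sketch of patient-wise bootstrap evaluation: resample patients, not segments,
# and compute Se, Sp, and BER per replicate. Labels/predictions here are random.
import numpy as np

rng = np.random.default_rng(1)
patients = {p: (rng.integers(0, 2, 20), rng.integers(0, 2, 20)) for p in range(40)}
# each patient: (true VF labels, predicted labels) for 20 ECG segments

def se_sp_ber(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    se = tp / max(tp + fn, 1)
    sp = tn / max(tn + fp, 1)
    return se, sp, 1 - 0.5 * (se + sp)       # BER = 1 - (Se + Sp) / 2

stats = []
for _ in range(200):                          # bootstrap replicates
    sample = rng.choice(list(patients), size=len(patients), replace=True)
    y_t = np.concatenate([patients[p][0] for p in sample])
    y_p = np.concatenate([patients[p][1] for p in sample])
    stats.append(se_sp_ber(y_t, y_p))
print(np.mean(stats, axis=0))                 # mean Se, Sp, BER over replicates
```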
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
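A quick back-of-the-envelope check of the bandwidth-expansion claim can be made by comparing the rate loss of an outer RS code with that of a rate-1/2 convolutional inner code; the specific code parameters below are illustrative, not necessarily those used in the report:

```python
# TCM absorbs its redundancy in the signal constellation, so only the outer RS code
# expands bandwidth, whereas a convolutional inner code expands it by 1/rate.
def expansion(n, k):
    """Bandwidth expansion factor of an (n, k) block code."""
    return n / k

rs_255_223 = expansion(255, 223)           # ~1.14 -> ~14% extra bandwidth
rs_255_191 = expansion(255, 191)           # ~1.34 -> ~34% extra bandwidth
conv_half_rate = 1 / 0.5                   # rate-1/2 inner code -> 100% expansion

print(f"RS(255,223): {100 * (rs_255_223 - 1):.0f}% extra bandwidth")
print(f"RS(255,191): {100 * (rs_255_191 - 1):.0f}% extra bandwidth")
print(f"rate-1/2 convolutional: {100 * (conv_half_rate - 1):.0f}% extra bandwidth")
```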
Feature Extraction for Track Section Status Classification Based on UGW Signals
Yang, Yuan; Shi, Lin
2018-01-01
Track status classification is essential for the stability and safety of railway operations nowadays, when railway networks are becoming more and more complex and broad. In this situation, monitoring systems are already a key element in applications dedicated to evaluating the status of a certain track section, often determining whether it is free or occupied by a train. Different technologies have already been involved in the design of monitoring systems, including ultrasonic guided waves (UGW). This work proposes the use of the UGW signals captured by a track monitoring system to extract the features that are relevant for determining the corresponding track section status. For that purpose, three features of UGW signals have been considered: the root mean square value, the energy, and the main frequency components. Experimental results successfully validated how these features can be used to classify the track section status into free, occupied and broken. Furthermore, spatial and temporal dependencies among these features were analysed in order to show how they can improve the final classification performance. Finally, a preliminary high-level classification system based on deep learning networks has been envisaged for future works. PMID:29673156
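The three signal features named in the abstract (RMS value, energy, and main frequency components) are straightforward to compute; the sketch below does so on a synthetic tone-plus-noise segment, with the sampling rate chosen arbitrarily:

```python
# Minimal sketch of the three UGW features on a synthetic signal.
import numpy as np

fs = 100_000                                    # assumed sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
signal = np.sin(2 * np.pi * 30_000 * t) + 0.1 * np.random.randn(t.size)

rms = np.sqrt(np.mean(signal ** 2))             # root mean square value
energy = np.sum(signal ** 2)                    # signal energy
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
main_freqs = freqs[np.argsort(spectrum)[-3:]]   # three strongest frequency components

print(rms, energy, sorted(main_freqs))
```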
Decoding natural images from evoked brain activities using encoding models with invertible mapping.
Li, Chao; Xu, Junhai; Liu, Baolin
2018-05-21
Recent studies have built encoding models in the early visual cortex, and reliable mappings have been made between the low-level visual features of stimuli and brain activities. However, these mappings are irreversible, so that the features cannot be directly decoded. To solve this problem, we designed a sparse framework-based encoding model that predicted brain activities from a complete feature representation. Moreover, according to the distribution and activation rules of neurons in the primary visual cortex (V1), three key transformations were introduced into the basic feature to improve the model performance. In this setting, the mapping was simple enough that it could be inverted using a closed-form formula. Using this mapping, we designed a hybrid identification method based on the support vector machine (SVM) and tested it on a published functional magnetic resonance imaging (fMRI) dataset. The experiments confirmed the rationality of our encoding model, and the identification accuracies for 2 subjects increased from 92% and 72% to 98% and 92%, with a chance level of only 0.8%. Copyright © 2018 Elsevier Ltd. All rights reserved.
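The core idea above, an encoding model simple enough to invert in closed form and then use for stimulus identification, can be sketched with a plain linear mapping and a pseudo-inverse on synthetic data; the sparse framework, the three V1-motivated transformations, and the SVM step of the paper are not reproduced here:

```python
# Sketch: fit a linear encoding model (features -> voxels), invert it, recover
# features from new responses, and identify the stimulus with the closest features.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 200, 20, 50, 80
F_train = rng.normal(size=(n_train, n_feat))              # stimulus features
W = rng.normal(size=(n_feat, n_vox))                      # "true" encoding weights
V_train = F_train @ W + 0.1 * rng.normal(size=(n_train, n_vox))

W_hat = np.linalg.lstsq(F_train, V_train, rcond=None)[0]  # fitted encoding model
W_inv = np.linalg.pinv(W_hat)                             # closed-form inverse

F_test = rng.normal(size=(n_test, n_feat))
V_test = F_test @ W + 0.1 * rng.normal(size=(n_test, n_vox))
F_rec = V_test @ W_inv                                    # decoded features

# identification: pick the candidate whose true features best match the decoding
ids = np.argmax(F_rec @ F_test.T /
                (np.linalg.norm(F_rec, axis=1, keepdims=True) *
                 np.linalg.norm(F_test, axis=1)), axis=1)
print("identification accuracy:", np.mean(ids == np.arange(n_test)))
```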
High Precision Prediction of Functional Sites in Protein Structures
Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin
2014-01-01
We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, J.H.; Ellis, J.R.; Montague, S.
1997-03-01
One of the principal applications of monolithically integrated micromechanical/microelectronic systems has been accelerometers for automotive applications. As integrated MEMS/CMOS technologies such as those developed by U.C. Berkeley, Analog Devices, and Sandia National Laboratories mature, additional systems for more sensitive inertial measurements will enter the commercial marketplace. In this paper, the authors will examine key technology design rules which impact the performance and cost of inertial measurement devices manufactured in integrated MEMS/CMOS technologies. These design parameters include: (1) minimum MEMS feature size, (2) minimum CMOS feature size, (3) maximum MEMS linear dimension, (4) number of mechanical MEMS layers, (5) MEMS/CMOS spacing. In particular, the embedded approach to integration developed at Sandia will be examined in the context of these technology features. Presently, this technology offers MEMS feature sizes as small as 1 µm, CMOS critical dimensions of 1.25 µm, MEMS linear dimensions of 1,000 µm, a single mechanical level of polysilicon, and a 100 µm space between MEMS and CMOS. This is applicable to modern precision guided munitions.
Learning-based landmarks detection for osteoporosis analysis
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Zhu, Ling; Yang, Jie; Azhari, Azhari; Sitam, Suhardjo; Liang, Xin; Megalooikonomou, Vasileios; Ling, Haibin
2016-03-01
Osteoporosis is a common cause of broken bones among senior citizens. Early diagnosis of osteoporosis requires routine examination, which may be costly for patients. A potential low-cost diagnosis is to identify a senior citizen at high risk of osteoporosis by pre-screening during routine dental examination. Therefore, osteoporosis analysis using dental radiographs serves as a key step in routine dental examination. The aim of this study is to localize landmarks in dental radiographs which are helpful to assess the evidence of osteoporosis. We collect eight landmarks which are critical in osteoporosis analysis. Our goal is to localize these landmarks automatically for a given dental radiographic image. To address challenges such as large variations of appearance across subjects, in this paper we formulate the task as a multi-class classification problem. A hybrid feature pool is used to represent these landmarks. For the discriminative classification problem, we use a random forest to fuse the hybrid feature representation. In the experiments, we also evaluate the performance of individual feature components and the hybrid fused feature. Our proposed method achieves an average detection error of 2.9 mm.
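As a sketch of the classification formulation described above (candidate patches described by a hybrid feature vector and assigned to one of the eight landmark classes or background by a random forest), with synthetic features standing in for the real radiograph descriptors:

```python
# Sketch: random forest over hybrid feature vectors; feature extraction from
# radiographs is assumed to happen elsewhere, and the data here is random.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))          # hybrid feature vector per candidate patch
y = rng.integers(0, 9, size=2000)        # 8 landmark classes + background

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```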
Financial Performance of Health Insurers: State-Run Versus Federal-Run Exchanges.
Hall, Mark A; McCue, Michael J; Palazzolo, Jennifer R
2018-06-01
Many insurers incurred financial losses in individual markets for health insurance during 2014, the first year of Affordable Care Act mandated changes. This analysis looks at key financial ratios of insurers to compare profitability in 2014 and 2013, identify factors driving financial performance, and contrast the financial performance of health insurers operating in state-run exchanges versus the federal exchange. Overall, the median loss of sampled insurers was -3.9%, no greater than their loss in 2013. Reduced administrative costs offset increases in medical losses. Insurers performed better in states with state-run exchanges than insurers in states using the federal exchange in 2014. Medical loss ratios are the underlying driver more than administrative costs in the difference in performance between states with federal versus state-run exchanges. Policy makers looking to improve the financial performance of the individual market should focus on features that differentiate the markets associated with state-run versus federal exchanges.
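The ratios driving the comparison above can be illustrated with made-up figures: the medical loss ratio is incurred claims over earned premiums, the administrative cost ratio is administrative expense over premiums, and what remains is the margin:

```python
# Illustrative arithmetic only; all dollar figures are hypothetical.
premiums = 100_000_000          # earned premiums
claims = 92_000_000             # incurred medical claims
admin = 12_000_000              # administrative expenses

medical_loss_ratio = claims / premiums            # 0.92
admin_ratio = admin / premiums                    # 0.12
margin = 1 - medical_loss_ratio - admin_ratio     # -0.04 -> a 4% underwriting loss
print(medical_loss_ratio, admin_ratio, margin)
```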
Castillo, Encarnación; López-Ramos, Juan A.; Morales, Diego P.
2018-01-01
Security is a critical challenge for the effective expansion of all new emerging applications in the Internet of Things paradigm. Therefore, it is necessary to define and implement different mechanisms for guaranteeing the security and privacy of data interchanged within the multiple wireless sensor networks that are part of the Internet of Things. However, in this context, low power and low area are required, limiting the resources available for security and thus hindering the implementation of adequate security protocols. Group keys can save resources and communications bandwidth, but should be combined with public key cryptography to be really secure. In this paper, a compact and unified co-processor enabling Elliptic Curve Cryptography along with the Advanced Encryption Standard, with low area requirements and group-key support, is presented. The designed co-processor allows securing wireless sensor networks independently of the communications protocols used. With an area occupancy of only 2101 LUTs on Spartan 6 devices from Xilinx, it requires 15% less area while achieving nearly 490% better performance when compared to cryptoprocessors with similar features in the literature. PMID:29337921
Parrilla, Luis; Castillo, Encarnación; López-Ramos, Juan A; Álvarez-Bermejo, José A; García, Antonio; Morales, Diego P
2018-01-16
Security is a critical challenge for the effective expansion of all new emerging applications in the Internet of Things paradigm. Therefore, it is necessary to define and implement different mechanisms for guaranteeing the security and privacy of data interchanged within the multiple wireless sensor networks that are part of the Internet of Things. However, in this context, low power and low area are required, limiting the resources available for security and thus hindering the implementation of adequate security protocols. Group keys can save resources and communications bandwidth, but should be combined with public key cryptography to be really secure. In this paper, a compact and unified co-processor enabling Elliptic Curve Cryptography along with the Advanced Encryption Standard, with low area requirements and group-key support, is presented. The designed co-processor allows securing wireless sensor networks independently of the communications protocols used. With an area occupancy of only 2101 LUTs on Spartan 6 devices from Xilinx, it requires 15% less area while achieving nearly 490% better performance when compared to cryptoprocessors with similar features in the literature.
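The pattern the two records above describe, public-key cryptography protecting the distribution of a symmetric group key that then encrypts sensor traffic, can be sketched in software with the Python cryptography package (this is not the hardware co-processor; the curve, key sizes, and labels are illustrative):

```python
# Sketch: an ECDH exchange derives a wrapping key, AES-GCM wraps the group key,
# and the group key then encrypts sensor traffic.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

leader = ec.generate_private_key(ec.SECP256R1())   # group leader's key pair
node = ec.generate_private_key(ec.SECP256R1())     # sensor node's key pair

# the leader derives the same secret via leader.exchange(ec.ECDH(), node.public_key())
shared = node.exchange(ec.ECDH(), leader.public_key())
wrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"group-key-wrap").derive(shared)

group_key = os.urandom(32)                         # symmetric key shared by the group
nonce = os.urandom(12)
wrapped = AESGCM(wrap_key).encrypt(nonce, group_key, None)    # leader -> node

unwrapped = AESGCM(wrap_key).decrypt(nonce, wrapped, None)    # node recovers the key
ciphertext = AESGCM(unwrapped).encrypt(os.urandom(12), b"sensor reading", None)
```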
ERIC Educational Resources Information Center
Work Keys USA, 1998
1998-01-01
"Work Keys" is a comprehensive program for assessing and teaching workplace skills. This serial "special issue" features 18 first-hand reports on Work Keys projects in action in states across North America. They show how the Work Keys is helping businesses and educators solve the challenge of building a world-class work force.…
Multiple Paths to Mathematics Practice in Al-Kashi's "Key to Arithmetic"
ERIC Educational Resources Information Center
Taani, Osama
2014-01-01
In this paper, I discuss one of the most distinguishing features of Jamshid al-Kashi's pedagogy from his "Key to Arithmetic", a well-known Arabic mathematics textbook from the fifteenth century. This feature is the multiple paths that he includes to find a desired result. In the first section light is shed on al-Kashi's life…
An Analysis of the Contents and Pedagogy of Al-Kashi's 1427 "Key to Arithmetic" (Miftah Al-Hisab)
ERIC Educational Resources Information Center
Ta'ani, Osama Hekmat
2011-01-01
Al-Kashi's 1427 "Key to Arithmetic" had important use over several hundred years in mathematics teaching in Medieval Islam throughout the time of the Ottoman Empire. Its pedagogical features have never been studied before. In this dissertation I have made a close pedagogical analysis of these features and discovered several teaching…
Estimation of end point foot clearance points from inertial sensor data.
Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu
2011-01-01
Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for estimating key foot clearance parameters using inertial sensor (accelerometer and gyroscope) data. Fifteen features were extracted from raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: the first maximum vertical clearance (mx1) after toe-off and the minimum toe clearance (MTC) of the swing foot. Comparisons are made against measurements obtained using an optoelectronic motion capture system (Optotrak) at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined, and a leave-one-subject-out (LOSO) method was used to select the best model. The best average Root Mean Square Error (RMSE) across all subjects obtained using all sensor features at the maximum speed was 5.32 mm for mx1 and 4.04 mm for MTC. Further application of a hill-climbing feature selection technique resulted in a 0.54-21.93% improvement in RMSE and required fewer input features. The results demonstrated that using raw inertial sensor data with regression models and feature selection can accurately estimate key foot clearance parameters.
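The leave-one-subject-out protocol above can be sketched as follows on synthetic data; scikit-learn has no GRNN, so a distance-weighted k-nearest-neighbours regressor stands in for it, the point being that each split holds out one subject entirely:

```python
# Sketch of LOSO evaluation of a regressor on per-subject inertial features.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(8 * 100, 15))       # 15 inertial features, 8 subjects x 100 strides
y = X[:, 0] * 3 + rng.normal(scale=0.5, size=X.shape[0])   # toy MTC target (mm)
subjects = np.repeat(np.arange(8), 100)

errors = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    model = KNeighborsRegressor(n_neighbors=10, weights="distance").fit(X[train], y[train])
    pred = model.predict(X[test])
    errors.append(np.sqrt(np.mean((pred - y[test]) ** 2)))   # per-subject RMSE
print("mean RMSE:", np.mean(errors))
```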
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
Exploiting Information Diffusion Feature for Link Prediction in Sina Weibo
NASA Astrophysics Data System (ADS)
Li, Dong; Zhang, Yongchao; Xu, Zhiming; Chu, Dianhui; Li, Sheng
2016-01-01
The rapid development of online social networks (e.g., Twitter and Facebook) has promoted research related to social networks in which link prediction is a key problem. Although numerous attempts have been made for link prediction based on network structure, node attribute and so on, few of the current studies have considered the impact of information diffusion on link creation and prediction. This paper mainly addresses Sina Weibo, which is the largest microblog platform with Chinese characteristics, and proposes the hypothesis that information diffusion influences link creation and verifies the hypothesis based on real data analysis. We also detect an important feature from the information diffusion process, which is used to promote link prediction performance. Finally, the experimental results on Sina Weibo dataset have demonstrated the effectiveness of our methods.
Exploiting Information Diffusion Feature for Link Prediction in Sina Weibo.
Li, Dong; Zhang, Yongchao; Xu, Zhiming; Chu, Dianhui; Li, Sheng
2016-01-28
The rapid development of online social networks (e.g., Twitter and Facebook) has promoted research related to social networks in which link prediction is a key problem. Although numerous attempts have been made for link prediction based on network structure, node attribute and so on, few of the current studies have considered the impact of information diffusion on link creation and prediction. This paper mainly addresses Sina Weibo, which is the largest microblog platform with Chinese characteristics, and proposes the hypothesis that information diffusion influences link creation and verifies the hypothesis based on real data analysis. We also detect an important feature from the information diffusion process, which is used to promote link prediction performance. Finally, the experimental results on Sina Weibo dataset have demonstrated the effectiveness of our methods.
NASA Astrophysics Data System (ADS)
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using a video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and an improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
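A sketch of the two-stage idea in the abstract, on synthetic frame-difference values: K-means with two clusters separates frames with significant change from the rest, and an adaptive dual threshold derived from the unchanged cluster labels candidate boundaries as abrupt or gradual (real visual features would replace the toy differences, and the threshold multipliers are assumptions):

```python
# Sketch: cluster frame-difference values, then apply adaptive dual thresholds.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
diffs = np.concatenate([rng.normal(0.05, 0.02, 480),   # ordinary frames
                        rng.normal(0.6, 0.1, 20)])     # frames around cuts
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(diffs.reshape(-1, 1))
changed = labels == labels[np.argmax(diffs)]           # cluster containing big changes

mu, sigma = diffs[~changed].mean(), diffs[~changed].std()
t_high, t_low = mu + 6 * sigma, mu + 3 * sigma         # adaptive dual thresholds
abrupt = diffs >= t_high
gradual = (diffs >= t_low) & (diffs < t_high)
print(abrupt.sum(), "abrupt and", gradual.sum(), "gradual boundary candidates")
```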
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
10 CFR 1045.17 - Classification levels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...
Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong
2017-01-01
Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues on the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation on proteins can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation on identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR) and incremental feature selection (IFS), together with a machine learning algorithm (the extreme learning machine method), were adopted to extract optimal features for the algorithm to identify myristoylation sites in protein sequences, thereby building an optimal prediction model. As a result, 41 key features were extracted and used to build an optimal prediction model. The effectiveness of the optimal prediction model was further validated by its performance on a test dataset. Furthermore, detailed analyses were also performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provided a new computational method for identifying myristoylation sites in protein sequences. We believe that it can be a useful tool to predict myristoylation sites from protein sequences. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Simple training tricks for mastering and taming bypass procedures in neurosurgery
Hafez, Ahmad; Raj, Rahul; Lawton, Michael T.; Niemelä, Mika
2017-01-01
Background: Neurosurgeons devoted to bypass neurosurgery or revascularization neurosurgery are becoming scarcer. From a practical point of view, “bypass neurosurgeons” are anastomosis makers, vessel technicians, and time-racing repairers of vessel walls. This requires understanding the key features and hidden tricks of bypass surgery. The goal of this paper is to provide simple and inexpensive tricks for taming the art of bypass neurosurgery. Most of the tricks and materials described can be borrowed, donated, or purchased inexpensively. Methods: We performed a review of relevant training materials and recorded videos of training bypass procedures over 3 years between June 2014 and July 2017. In total, 1,300 training bypass procedures were performed, of which 200 procedures were chosen for this paper. Results: A training laboratory for bypass procedures is required to enable a neurosurgeon to develop the necessary skills. The important skills for bypass procedures, gained through meticulous practice until they become reflexes, are coordination, speed, agility, flexibility, and reaction time. Bypassing requires synchronization between the surgeon's gross movements, fine motor skills, and mental strength. The suturing rhythm must be timed in a brain–body–hand fashion. Conclusion: Bypass training is a critical part of neurosurgical training and not for a selected few. Diligent and meticulous training can enable every neurosurgeon to tame the art of bypass neurosurgery. This requires understanding the key features and hidden tricks of bypass surgery, as well as countless hours of training. In bypass neurosurgery, quality and time go hand in hand. PMID:29285411
A hierarchical anatomical classification schema for prediction of phenotypic side effects
Kanji, Rakesh
2018-01-01
Prediction of adverse drug reactions is an important problem in drug discovery endeavors which can be addressed with data-driven strategies. SIDER is one of the most reliable and frequently used datasets for identification of key features as well as building machine learning models for side effects prediction. The inherently unbalanced nature of this data presents a difficult multi-label multi-class problem towards prediction of drug side effects. We highlight the intrinsic issue with SIDER data and methodological flaws in relying on performance measures such as AUC while attempting to predict side effects. We argue for the use of metrics that are robust to class imbalance for evaluation of classifiers. Importantly, we present a ‘hierarchical anatomical classification schema’ which aggregates side effects into organs, sub-systems, and systems. With the help of a weighted performance measure, using 5-fold cross-validation we show that this strategy facilitates biologically meaningful side effects prediction at different levels of anatomical hierarchy. By implementing various machine learning classifiers we show that the Random Forest model yields the best classification accuracy at each level of coarse-graining. The manually curated, hierarchical schema for side effects can also serve as the basis of future studies towards prediction of adverse reactions and identification of key features linked to specific organ systems. Our study provides a strategy for hierarchical classification of side effects rooted in the anatomy and can pave the way for calibrated expert systems for multi-level prediction of side effects. PMID:29494708
Development of the European Small Geostationary Satellite SGEO
NASA Astrophysics Data System (ADS)
Lübberstedt, H.; Schneider, A.; Schuff, H.; Miesner, Th.; Winkler, A.
2008-08-01
The SGEO product portfolio, ranging from satellite platform delivery up to in-orbit delivery of a turnkey system including satellite and ground control station, is designed for applications ranging from TV broadcast to multimedia applications, Internet access, and mobile or fixed services in a wide range of frequency bands. Furthermore, data relay missions such as the European Data Relay Satellite (EDRS) as well as other institutional missions are targeted. Key design features of the SGEO platform are high flexibility and modularity in order to accommodate a very wide range of future missions, a short development time of below two years, and the objective to build the system from ITAR-free subsystems and components. The system will provide a long lifetime of up to 15 years of in-orbit operation with high reliability. SGEO is the first European satellite to perform all orbit control tasks solely by electric propulsion (EP). This design provides high mass efficiency and the capability for direct injection into geostationary orbit without chemical propulsion (CP). Optionally, an Apogee Engine Module based on CP will provide the perigee raising manoeuvres in case of a launch into geostationary transfer orbit (GTO). This approach allows an ideal choice from a wide range of launcher candidates, depending on the required payload capacity. SGEO will offer to the market a versatile and high-performance satellite system with low investment risk for the customer and a short development time. This paper provides an overview of the SGEO system key features and the current status of the SGEO programme.
A hierarchical anatomical classification schema for prediction of phenotypic side effects.
Wadhwa, Somin; Gupta, Aishwarya; Dokania, Shubham; Kanji, Rakesh; Bagler, Ganesh
2018-01-01
Prediction of adverse drug reactions is an important problem in drug discovery endeavors which can be addressed with data-driven strategies. SIDER is one of the most reliable and frequently used datasets for identification of key features as well as building machine learning models for side effects prediction. The inherently unbalanced nature of this data presents a difficult multi-label multi-class problem towards prediction of drug side effects. We highlight the intrinsic issue with SIDER data and methodological flaws in relying on performance measures such as AUC while attempting to predict side effects. We argue for the use of metrics that are robust to class imbalance for evaluation of classifiers. Importantly, we present a 'hierarchical anatomical classification schema' which aggregates side effects into organs, sub-systems, and systems. With the help of a weighted performance measure, using 5-fold cross-validation we show that this strategy facilitates biologically meaningful side effects prediction at different levels of anatomical hierarchy. By implementing various machine learning classifiers we show that the Random Forest model yields the best classification accuracy at each level of coarse-graining. The manually curated, hierarchical schema for side effects can also serve as the basis of future studies towards prediction of adverse reactions and identification of key features linked to specific organ systems. Our study provides a strategy for hierarchical classification of side effects rooted in the anatomy and can pave the way for calibrated expert systems for multi-level prediction of side effects.
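The evaluation point made in the two records above, that class-imbalance-robust metrics should replace raw accuracy for rare side effects, can be illustrated on synthetic labels where a trivial classifier that never predicts the side effect looks deceptively good:

```python
# Sketch: with a rare side effect, a "never" classifier scores well on accuracy
# but is exposed by balanced accuracy and MCC. Labels here are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, matthews_corrcoef

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)   # ~5% of drugs cause the effect
y_trivial = np.zeros_like(y_true)                # always predict "no side effect"

print("accuracy:", accuracy_score(y_true, y_trivial))                    # ~0.95
print("balanced accuracy:", balanced_accuracy_score(y_true, y_trivial))  # 0.5
print("MCC:", matthews_corrcoef(y_true, y_trivial))                      # 0.0
```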
User-centered design and evaluation of a next generation fixed-split ergonomic keyboard.
McLoone, Hugh E; Jacobson, Melissa; Hegg, Chau; Johnson, Peter W
2010-01-01
Research has shown that fixed-split, ergonomic keyboards lessen the pain and functional status in symptomatic individuals as well as reduce the likelihood of developing musculoskeletal disorders in asymptomatic typists over extended use. The goal of this study was to evaluate design features to determine whether the current fixed-split ergonomic keyboard design could be improved. Thirty-nine, adult-aged, fixed-split ergonomic keyboard users were recruited to participate in one of three studies. First utilizing non-functional models and later a functional prototype, three studies evaluated keyboard design features including: 1) keyboard lateral inclination, 2) wrist rest height, 3) keyboard slope, and 4) curved "gull-wing" key layouts. The findings indicated that keyboard lateral inclination could be increased from 8° to 14°; wrist rest height could be increased up to 10 mm from current setting; positive, flat, and negative slope settings were equally preferred and facilitated greater postural variation; and participants preferred a new gull-wing key layout. The design changes reduced forearm pronation and wrist extension while not adversely affecting typing performance. This research demonstrated how iterative-evaluative, user-centered research methods can be utilized to improve a product's design such as a fixed-split ergonomic keyboard.
Understanding local-scale drivers of biodiversity outcomes in terrestrial protected areas.
Barnes, Megan D; Craigie, Ian D; Dudley, Nigel; Hockings, Marc
2017-07-01
Conservation relies heavily on protected areas (PAs) maintaining their key biodiversity features to meet global biodiversity conservation goals. However, PAs have had variable success, with many failing to fully maintain their biodiversity features. The current literature concerning what drives variability in PA performance is rapidly expanding but unclear, sometimes contradictory, and spread across multiple disciplines. A clear understanding of the drivers of successful biodiversity conservation in PAs is necessary to make them fully effective. Here, we conduct a comprehensive assessment of the current state of knowledge concerning the drivers of biological outcomes within PAs, focusing on those that can be addressed at local scales. We evaluate evidence in support of potential drivers to identify those that enable more successful outcomes and those that impede success and provide a synthetic review. Interactions are discussed where they are known, and we highlight gaps in understanding. We find that elements of PA design, management, and local and national governance challenges, species and system ecology, and sociopolitical context can all influence outcomes. Adjusting PA management to focus on actions and policies that influence the key drivers identified here could improve global biodiversity outcomes. © 2016 New York Academy of Sciences.
Identifying Key Features of Effective Active Learning: The Effects of Writing and Peer Discussion
Pangle, Wiline M.; Wyatt, Kevin H.; Powell, Karli N.; Sherwood, Rachel E.
2014-01-01
We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning. PMID:25185230
Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection
NASA Astrophysics Data System (ADS)
Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant
2014-03-01
Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by leveraging the disconnected feature sets. Evaluation on the public ICPR12 mitosis dataset that has 226 mitoses annotated on 35 High Power Fields (HPF, x400 magnification) by several pathologists and 15 testing HPFs yielded an F-measure of 0.7345. Apart from this being the second best performance ever recorded for this MITOS dataset, our approach is faster and requires fewer computing resources compared to extant methods, making this feasible for clinical use.
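The feature-combination idea above can be sketched as follows: a light CNN embeds each candidate patch, the embedding is concatenated with handcrafted morphology/colour/texture descriptors, and a final classifier scores mitosis versus non-mitosis (architecture sizes, feature counts, and the random inputs are illustrative only):

```python
# Sketch of combining CNN-derived and handcrafted features for patch scoring.
import torch
import torch.nn as nn

class LightCNN(nn.Module):
    def __init__(self, emb_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.proj = nn.Linear(32, emb_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

patches = torch.randn(8, 3, 64, 64)          # candidate nuclei patches
handcrafted = torch.randn(8, 20)             # morphology/colour/texture descriptors

cnn_feats = LightCNN()(patches)
combined = torch.cat([cnn_feats, handcrafted], dim=1)
score = nn.Sequential(nn.Linear(combined.shape[1], 1), nn.Sigmoid())(combined)
print(score.shape)                           # (8, 1) mitosis probabilities
```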
Dynamical aspects of behavior generation under constraints
Harter, Derek; Achunala, Srinivas
2007-01-01
Dynamic adaptation is a key feature of brains helping to maintain the quality of their performance in the face of increasingly difficult constraints. How to achieve high-quality performance under demanding real-time conditions is an important question in the study of cognitive behaviors. Animals and humans are embedded in and constrained by their environments. Our goal is to improve the understanding of the dynamics of the interacting brain–environment system by studying human behaviors when completing constrained tasks and by modeling the observed behavior. In this article we present results of experiments with humans performing tasks on the computer under variable time and resource constraints. We compare various models of behavior generation in order to describe the observed human performance. Finally we speculate on mechanisms how chaotic neurodynamics can contribute to the generation of flexible human behaviors under constraints. PMID:19003514
Case base classification on digital mammograms: improving the performance of case base classifier
NASA Astrophysics Data System (ADS)
Raman, Valliappan; Then, H. H.; Sumari, Putra; Venkatesa Mohan, N.
2011-10-01
Breast cancer continues to be a significant public health problem in the world. Early detection is the key to improving breast cancer prognosis. The aim of the research presented here is twofold. The first stage involves machine learning techniques that segment and extract features from masses in digital mammograms. The second stage is a problem-solving approach that classifies the masses with a performance-based case-base classifier. In this paper we build a case-based classifier to diagnose mammographic images, and we explain the different methods and behaviors that have been added to the classifier to improve its performance. The initial performance-based classifier with bagging proposed in this paper has been implemented and shows an improvement in specificity and sensitivity.
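A minimal sketch of bagging a case-base (nearest-neighbour retrieval) classifier and reporting sensitivity and specificity is given below; the mammographic features and labels are synthetic placeholders, and scikit-learn's k-NN stands in for the authors' case-base classifier, which is an assumption.

```python
# Minimal sketch: a k-NN (case retrieval) base learner wrapped in a bagging
# ensemble, evaluated by sensitivity and specificity on synthetic data.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))                  # hypothetical mass descriptors
y = (X[:, :3].sum(axis=1) > 0).astype(int)      # 1 = malignant, 0 = benign (toy labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                        n_estimators=25, random_state=1).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```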
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, Mark C.; Sham, Sam; Wang, Yanli
This report summarizes the experiments performed in FY17 on Gr. 91 steels. The testing of Gr. 91 has technical significance because, currently, it is the only approved material for Class A construction that is strongly cyclic softening. Specific FY17 testing includes the following activities for Gr. 91 steel. First, two types of key feature testing have been initiated, including two-bar thermal ratcheting and Simplified Model Testing (SMT). The goal is to qualify the Elastic – Perfectly Plastic (EPP) design methodologies and to support incorporation of these rules for Gr. 91 into the ASME Division 5 Code. The preliminary SMT test results show that Gr. 91 is most damaging when tested with compression hold mode under the SMT creep fatigue testing condition. Two-bar thermal ratcheting test results at a temperature range between 350 to 650 °C were compared with the EPP strain limits code case evaluation, and the results show that the EPP strain limits code case is conservative. The material information obtained from these key feature tests can also be used to verify its material model. Second, to provide experimental data in support of the viscoplastic material model development at Argonne National Laboratory, selective tests were performed to evaluate the effect of cyclic softening on strain rate sensitivity and creep rates. The results show the prior cyclic loading history decreases the strain rate sensitivity and increases creep rates. In addition, isothermal cyclic stress-strain curves were generated at six different temperatures, and a nonisothermal thermomechanical testing was also performed to provide data to calibrate the viscoplastic material model.
Reusable Launch Vehicle Tank/Intertank Sizing Trade Study
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Myers, David E.; Martin, Carl J.
2000-01-01
A tank and intertank sizing tool that includes effects of major design drivers, and which allows parametric studies to be performed, has been developed and calibrated against independent representative results. Although additional design features, such as bulkheads and field joints, are not currently included in the process, the improved level of fidelity has allowed parametric studies to be performed which have resulted in understanding of key tank and intertank design drivers, design sensitivities, and definition of preferred design spaces. The sizing results demonstrated that there were many interactions between the configuration parameters of internal/external payload, vehicle fineness ratio (half body angle), fuel arrangement (LOX-forward/LOX-aft), number of tanks, and tank shape/arrangement (number of lobes).
Microaneurysm detection with radon transform-based classification on retina images.
Giancardo, L; Meriaudeau, F; Karnowski, T P; Li, Y; Tobin, K W; Chaum, E
2011-01-01
The creation of an automatic diabetic retinopathy screening system using retina cameras is currently receiving considerable interest in the medical imaging community. The detection of microaneurysms is a key element in this effort. In this work, we propose a new microaneurysms segmentation technique based on a novel application of the radon transform, which is able to identify these lesions without any previous knowledge of the retina morphological features and with minimal image preprocessing. The algorithm has been evaluated on the Retinopathy Online Challenge public dataset, and its performance compares with the best current techniques. The performance is particularly good at low false positive ratios, which makes it an ideal candidate for diabetic retinopathy screening systems.
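As a rough illustration of how the Radon transform can probe a candidate lesion window, the sketch below computes projections of a small synthetic patch and a simple peak-width statistic; the scoring rule is an assumption for illustration, not the published segmentation criterion, and scikit-image is assumed to be available.

```python
# Minimal sketch of probing a small retinal window with the Radon transform.
import numpy as np
from skimage.transform import radon

patch = np.zeros((32, 32))
patch[14:18, 14:18] = 1.0             # a small blob, standing in for a microaneurysm

theta = np.linspace(0.0, 180.0, 36, endpoint=False)
sinogram = radon(patch, theta=theta)  # one projection per angle

# A compact, roughly isotropic lesion yields a narrow peak in every projection;
# elongated structures such as vessels do not. A per-angle peak-width statistic
# is a simple (assumed) way to capture this.
widths = [(proj > 0.5 * proj.max()).sum() for proj in sinogram.T]
print("mean peak width across angles:", float(np.mean(widths)))
```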
A potassium Rankine multimegawatt nuclear electric propulsion concept
NASA Technical Reports Server (NTRS)
Baumeister, E.; Rovang, R.; Mills, J.; Sercel, J.; Frisbee, R.
1990-01-01
Multimegawatt nuclear electric propulsion (NEP) has been identified as a potentially attractive option for future space exploratory missions. A liquid-metal-cooled reactor, potassium Rankine power system that is being developed is suited to fulfill this application. The key features of the nuclear power system are described, and system characteristics are provided for various potential NEP power ranges and operational lifetimes. The results of recent mission studies are presented to illustrate some of the potential benefits to future space exploration to be gained from high-power NEP. Specifically, mission analyses have been performed to assess the mass and trip time performance of advanced NEP for both cargo and piloted missions to Mars.
Gao, Yu-Fei; Li, Bi-Qing; Cai, Yu-Dong; Feng, Kai-Yan; Li, Zhan-Dong; Jiang, Yang
2013-01-27
Identification of catalytic residues plays a key role in understanding how enzymes work. Although numerous computational methods have been developed to predict catalytic residues and active sites, the prediction accuracy remains relatively low with high false positives. In this work, we developed a novel predictor based on the Random Forest algorithm (RF) aided by the maximum relevance minimum redundancy (mRMR) method and incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility to predict active sites of enzymes and achieved an overall accuracy of 0.885687 and MCC of 0.689226 on an independent test dataset. Feature analysis showed that every category of the features except disorder contributed to the identification of active sites. It was also shown via the site-specific feature analysis that the features derived from the active site itself contributed most to the active site determination. Our prediction method may become a useful tool for identifying the active sites and the key features identified by the paper may provide valuable insights into the mechanism of catalysis.
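The incremental feature selection (IFS) loop can be sketched as follows: rank the features, then grow the feature set one feature at a time and keep the subset with the best cross-validated accuracy. In the sketch below the ranking comes from Random Forest importances rather than mRMR, and the data are synthetic; both are assumptions made for brevity.

```python
# Minimal sketch of incremental feature selection (IFS) driven by a feature
# ranking, with a Random Forest as the predictor.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
ranking = np.argsort(RandomForestClassifier(n_estimators=200, random_state=0)
                     .fit(X, y).feature_importances_)[::-1]

best_k, best_acc = 0, 0.0
for k in range(1, len(ranking) + 1):          # grow the feature set one at a time
    acc = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                          X[:, ranking[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
print("best subset size:", best_k, "cv accuracy:", round(best_acc, 3))
```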
Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.
Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei
2015-01-01
Surveillance video service (SVS) is one of the most important services provided in a smart city, and its utility depends on efficient surveillance video analysis techniques. Key frame extraction is a simple yet effective technique to achieve this goal: in surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on GPUs (graphics processing units) to ensure high efficiency and accuracy. For the determination of key frames, motion is a particularly salient feature for representing actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time. It is then smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach extracts key frames more accurately and efficiently than several other methods.
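A minimal CPU-only sketch of the motion-based selection step is shown below: frame differencing as the motion feature, smoothing, and local maxima as key frames. The GPU implementation and the exact motion descriptor used in the paper are not reproduced; the toy video is synthetic.

```python
# Minimal sketch (NumPy only) of motion-based key frame selection.
import numpy as np

rng = np.random.default_rng(2)
frames = rng.integers(0, 256, size=(120, 48, 64)).astype(np.float32)  # toy video

motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))   # per-frame motion energy
kernel = np.ones(5) / 5.0
smooth = np.convolve(motion, kernel, mode="same")             # reduce noise

# Local maxima of the smoothed motion curve become key frames.
is_peak = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:])
key_frames = np.where(is_peak)[0] + 1
print("key frame indices:", key_frames[:10])
```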
Vision technology/algorithms for space robotics applications
NASA Technical Reports Server (NTRS)
Krishen, Kumar; Defigueiredo, Rui J. P.
1987-01-01
The thrust of automation and robotics for space applications has been proposed for increased productivity, improved reliability, increased flexibility, higher safety, and for the performance of automating time-consuming tasks, increasing productivity/performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical with multimode capability to include position, attitude, recognition, and motion parameters. The key feature of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
Botsis, T; Woo, E J; Ball, R
2013-01-01
We previously demonstrated that a general purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. Our objective was to evaluate the ability of VaeTM to classify reports to the Vaccine Adverse Event Reporting System (VAERS) of possible Guillain-Barré Syndrome (GBS). We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. Then, we applied the Brighton Collaboration (BC) case definition for GBS, and an information retrieval strategy (i.e. the vector space model) to quantify the specific information that is included in the key features extracted by VaeTM and compared it with the encoded information that is already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to the lack of extraction by VaeTM of specific laboratory results that are included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but are preferred if specificity is the priority.
Problems of quality and equity in pain management: exploring the role of biomedical culture.
Crowley-Matoka, Megan; Saha, Somnath; Dobscha, Steven K; Burgess, Diana J
2009-10-01
To explore how social scientific analyses of the culture of biomedicine may contribute to advancing our understanding of ongoing issues of quality and equity in pain management. Drawing upon the rich body of social scientific literature on the culture of biomedicine, we identify key features of biomedical culture with particular salience for pain management. We then examine how these cultural features of biomedicine may shape key phases of the pain management process in ways that have implications not just for quality, but for equity in pain management as well. We bring together a range of literatures in developing our analysis, including literatures on the culture of biomedicine, pain management and health care disparities. We surveyed the relevant literatures to identify and inter-relate key features of biomedical culture, key phases of the pain management process, and key dimensions of identified problems with suboptimal and inequitable treatment of pain. We identified three key features of biomedical culture with critical implications for pain management: 1) mind-body dualism; 2) a focus on disease vs illness; and 3) a bias toward cure vs care. Each of these cultural features plays a role in the key phases of pain management, specifically pain-related communication, assessment and treatment decision-making, in ways that may hinder successful treatment of pain in general, and of pain patients from disadvantaged groups in particular. Deepening our understanding of the role of biomedical culture in pain management has implications for education, policy and research as part of ongoing efforts to ameliorate problems in both quality and equity in managing pain. In particular, we suggest that building upon the existing cultural competence movement in medicine to include fostering a deeper understanding of biomedical culture and its impact on physicians may be useful. From a policy perspective, we identify pain management as an area where the need for a shift to a more biopsychosocial model of health care is particularly pressing, and suggest prioritization of inter-disciplinary, multimodal approaches to pain as one key strategy in realizing this shift. Finally, in terms of research, we identify the need for empirical research to assess aspects of biomedical culture that may influence physicians' attitudes and behaviors related to pain management, as well as to explore how these cultural values and their effects may vary across different settings within the practice of medicine.
On designing multicore-aware simulators for systems biology endowed with OnLine statistics.
Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo
2014-01-01
This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aimed at supporting bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool used to perform the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed with statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for the effectiveness of the online analysis in capturing biological system behavior, on a multicore platform and on representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
Reducing negative affect and increasing rapport improve interracial mentorship outcomes
Ayduk, Özlem; Boykin, C. Malik; Mendoza-Denton, Rodolfo
2018-01-01
Research suggests that interracial mentoring relationships are strained by negative affect and low rapport. As such, it stands to reason that strategies that decrease negative affect and increase rapport should improve these relationships. However, previous research has not tested this possibility. In video-chats (Studies 1 and 2) and face-to-face meetings (Study 3), we manipulated the degree of mutual self-disclosure between mentees and mentors, a strategy that has been shown to reduce negative affect and increase rapport. We then measured negative affect and rapport as mediators, and mentee performance (quality of speech delivered; Studies 1 and 3) and mentor performance (warmth and helpfulness; Studies 2 and 3) as key outcomes. Results revealed that increased self-disclosure decreased negative affect and increased rapport for both mentees and mentors. Among mentees, decreased negative affect predicted better performance (Studies 1 and 3). Among mentors, increased rapport predicted warmer feedback (Studies 2 and 3). These effects remained significant when we meta-analyzed data across studies (Study 4), and also revealed the relationship of rapport to more helpful feedback. Findings suggest that affect and rapport are key features in facilitating positive outcomes in interracial mentoring relationships. PMID:29617368
Key clinical features to identify girls with CDKL5 mutations.
Bahi-Buisson, Nadia; Nectoux, Juliette; Rosas-Vargas, Haydeé; Milh, Mathieu; Boddaert, Nathalie; Girard, Benoit; Cances, Claude; Ville, Dorothée; Afenjar, Alexandra; Rio, Marlène; Héron, Delphine; N'guyen Morel, Marie Ange; Arzimanoglou, Alexis; Philippe, Christophe; Jonveaux, Philippe; Chelly, Jamel; Bienvenu, Thierry
2008-10-01
Mutations in the human X-linked cyclin-dependent kinase-like 5 (CDKL5) gene have been shown to cause infantile spasms as well as a Rett syndrome (RTT)-like phenotype. To date, fewer than 25 different mutations have been reported. So far, there are still little data on the key clinical diagnosis criteria and on the natural history of CDKL5-associated encephalopathy. We screened the entire coding region of CDKL5 for mutations in 183 females with encephalopathy with early seizures by denaturing high-performance liquid chromatography and direct sequencing, and we identified, in 20 unrelated girls, 18 different mutations, including 7 novel mutations. These mutations were identified in eight patients with encephalopathy with RTT-like features, five with infantile spasms and seven with encephalopathy with refractory epilepsy. Early epilepsy with normal interictal EEG and severe hypotonia are the key clinical features in identifying patients likely to have CDKL5 mutations. Our study also indicates that these patients clearly exhibit some RTT features such as deceleration of head growth, stereotypies and hand apraxia and that these RTT features become more evident in older and ambulatory patients. However, some RTT signs are clearly absent such as the so-called RTT disease profile (period of nearly normal development followed by regression with loss of acquired fine finger skill in early childhood and characteristic intensive eye communication) and the characteristic evolution of the RTT electroencephalogram. Interestingly, in addition to the overall stereotypical symptomatology (age of onset and evolution of the disease) resulting from CDKL5 mutations, atypical forms of CDKL5-related conditions have also been observed. Our data suggest that phenotypic heterogeneity does not correlate with the nature or the position of the mutations or with the pattern of X-chromosome inactivation, but most probably with the functional transcriptional and/or translational consequences of CDKL5 mutations. In conclusion, our report shows that a search for mutations in CDKL5 is indicated in girls with early onset of a severe intractable seizure disorder or infantile spasms with severe hypotonia, and in girls with RTT-like phenotype and early onset seizures, though, in our cohort, mutations in CDKL5 account for about 10% of the girls affected by these disorders.
Corcoran, R; Rowse, G; Moore, R; Blackwood, N; Kinderman, P; Howard, R; Cummins, S; Bentall, R P
2008-11-01
A tendency to make hasty decisions on probabilistic reasoning tasks and a difficulty attributing mental states to others are key cognitive features of persecutory delusions (PDs) in the context of schizophrenia. This study examines whether these same psychological anomalies characterize PDs when they present in the context of psychotic depression. Performance on measures of probabilistic reasoning and theory of mind (ToM) was examined in five subgroups differing in diagnostic category and current illness status. The tendency to draw hasty decisions in probabilistic settings and poor ToM, tested using a story format, were features of PDs irrespective of diagnosis. Furthermore, performance on the ToM story task correlated with the degree of distress caused by and preoccupation with the current PDs in the currently deluded groups. By contrast, performance on the non-verbal ToM task appears to be more sensitive to diagnosis, as patients with schizophrenia spectrum disorders perform worse on this task than those with depression irrespective of the presence of PDs. The psychological anomalies associated with PDs examined here are transdiagnostic, but different measures of ToM may be more or less sensitive to indices of severity of the PDs, diagnosis and trait- or state-related cognitive effects.
Gabbay, Robert A.; Friedberg, Mark W.; Miller-Day, Michelle; Cronholm, Peter F.; Adelman, Alan; Schneider, Eric C.
2013-01-01
PURPOSE The medical home has gained national attention as a model to reorganize primary care to improve health outcomes. Pennsylvania has undertaken one of the largest state-based, multipayer medical home pilot projects. We used a positive deviance approach to identify and compare factors driving the care models of practices showing the greatest and least improvement in diabetes care in a sample of 25 primary care practices in southeast Pennsylvania. METHODS We ranked practices into improvement quintiles on the basis of the average absolute percentage point improvement from baseline to 18 months in 3 registry-based measures of performance related to diabetes care: glycated hemoglobin concentration, blood pressure, and low-density lipoprotein cholesterol level. We then conducted surveys and key informant interviews with leaders and staff in the 5 most and least improved practices, and compared their responses. RESULTS The most improved/higher-performing practices tended to have greater structural capabilities (eg, electronic health records) than the least improved/lower-performing practices at baseline. Interviews revealed striking differences between the groups in terms of leadership styles and shared vision; sense, use, and development of teams; processes for monitoring progress and obtaining feedback; and presence of technologic and financial distractions. CONCLUSIONS Positive deviance analysis suggests that primary care practices’ baseline structural capabilities and abilities to buffer the stresses of change may be key facilitators of performance improvement in medical home transformations. Attention to the practices’ structural capabilities and factors shaping successful change, especially early in the process, will be necessary to improve the likelihood of successful medical home transformation and better care. PMID:23690393
NASA Astrophysics Data System (ADS)
Acconcia, Giulia; Cominelli, Alessandro; Peronio, Pietro; Rech, Ivan; Ghioni, Massimo
2017-05-01
The analysis of optical signals by means of Single Photon Avalanche Diodes (SPADs) has attracted widespread interest in recent years, and the development of multichannel high-performance Time Correlated Single Photon Counting (TCSPC) acquisition systems has proceeded rapidly. Concerning detector performance, best-in-class results have been obtained with custom technologies, which also leads to a strong dependence of the detector timing jitter on the threshold used to determine the onset of the photogenerated current flow. In this scenario, the avalanche current pick-up circuit plays a key role in determining the timing performance of the TCSPC acquisition system, especially with a large array of SPAD detectors, because of electrical crosstalk issues. We developed a new current pick-up circuit based on a transimpedance amplifier structure, able to extract the timing information from a 50-μm-diameter custom technology SPAD with a state-of-the-art timing jitter as low as 32 ps and suitable for use with SPAD arrays. In this paper we discuss the key features of this structure and we present a new version of the pick-up circuit that also provides quenching capabilities in order to minimize the number of interconnections required, an aspect that becomes increasingly crucial in densely integrated systems.
Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features
Cáceres Hernández, Danilo; Kurnianggoro, Laksono; Filonenko, Alexander; Jo, Kang Hyun
2016-01-01
Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, there are still serious issues facing the problem of lane marking detection. For example, problems include excessive processing time and false detection due to similarities in color and edges between traffic signs (channeling lines, stop lines, crosswalk, arrows, etc.). This paper proposes a strategy to extract the lane marking information taking into consideration its features such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is a critical task to achieve real-time performance. In this sense, the region of interest is dependent on vehicle speed. Secondly, the lane markings are detected by using a hybrid color-edge feature method along with a probabilistic method, based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings to both sides of the vehicle, the respective fitting model, and the centroid information of the lane. Using these parameters, the region is computed by using a road geometric model. To evaluate the proposed method, a set of consecutive frames was used in order to validate the performance. PMID:27869657
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2 over the version that used default optimizations, but no auto-tuning. We demonstrate that observations made from microbenchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
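In the spirit of parameterized micro-benchmarking, the sketch below times a toy tensor-contraction micro-benchmark over a small parameter space (problem size and operand memory layout) and reports the fastest configuration. It is a CPU/NumPy analogue used purely for illustration, not the authors' GPU framework; the parameter grid and kernel are assumptions.

```python
# Minimal sketch: sweep a small parameter space of a toy contraction
# micro-benchmark and pick the best configuration per problem instance.
import itertools
import timeit
import numpy as np

def contraction(n, order):
    # Build operands in the requested memory layout, return the kernel to time.
    A = np.asarray(np.random.rand(n, n, n), order=order)
    B = np.asarray(np.random.rand(n, n), order=order)
    return lambda: np.einsum("ijk,kl->ijl", A, B)

results = {}
for n, order in itertools.product((32, 64), ("C", "F")):
    t = min(timeit.repeat(contraction(n, order), number=3, repeat=3))
    results[(n, order)] = t
    print(f"n={n:3d} order={order}: {t * 1e3:7.2f} ms")

best = min(results, key=results.get)
print("fastest configuration:", best)
```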
Silicon concentrator cell-assembly development
NASA Astrophysics Data System (ADS)
1982-08-01
The purpose was to develop an improved cell assembly design for photovoltaic concentrator receivers. Efforts were concentrated on a study of adhesive/separator systems that might be applied between cell and substrate, because this area holds the key to improved heat transfer, electrical isolation and adhesion. It is also the area in which simpler construction methods offer the greatest benefits for economy and reliability in the manufacturing process. Of the ten most promising designs subjected to rigorous environmental testing, eight designs featuring acrylic and silicone adhesives and fiberglass and polyester separators performed very well.
NASA Technical Reports Server (NTRS)
Bahr, D. W.; Burrus, D. L.; Sabla, P. E.
1979-01-01
A sector combustor technology development program was conducted to define an advanced double annular dome combustor sized for use in the Quiet Clean Short-Haul Experimental Engine (QCSEE). A design that meets the emission and combustor performance goals of the QCSEE engine program was developed. Key design features were identified which resulted in a substantial reduction in carbon monoxide and unburned hydrocarbon emission levels at ground idle operating conditions, in addition to very low nitric oxide emission levels at high power operating conditions. These significant results are reported.
Health policy. Who's got the master card?
Robinson, Ray
2002-09-26
The last decade has seen huge shifts away from the command and control model that had dominated health policy since the foundation of the NHS. The current Labour government initially favoured a system based on collaboration and partnership working, but the incentives to achieve this were not sufficiently strong. Competition is now once again openly cited as a driver for improved performance. Political demands mean that command and control are likely to remain key features of government health policy. But this, in turn, is likely to place major limitations on the local autonomy pledged by the government.
Advanced modulation technology development for earth station demodulator applications
NASA Technical Reports Server (NTRS)
Davis, R. C.; Wernlund, J. V.; Gann, J. A.; Roesch, J. F.; Wright, T.; Crowley, R. D.
1989-01-01
The purpose of this contract was to develop a high-rate (200 Mbps), bandwidth-efficient modulation format using low-cost hardware in 1990s technology. The modulation format chosen is 16-ary continuous phase frequency shift keying (CPFSK). The implementation of the modulation format uses a unique combination of a limiter/discriminator followed by an accumulator to determine transmitted phase. An important feature of the modulation scheme is the way coding is applied to efficiently gain back the performance lost by the close spacing of the phase points.
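A minimal baseband sketch of the limiter/discriminator-plus-accumulator idea is given below for noiseless 16-ary CPFSK. The modulation index, samples per symbol, and the omission of coding and RF front-end effects are all simplifying assumptions; this is not the contract hardware design.

```python
# Minimal sketch: 16-ary CPFSK demodulated by phase differencing (discriminator)
# followed by per-symbol accumulation of the phase increments.
import numpy as np

M, h, sps = 16, 0.25, 8                       # alphabet size, mod index, samples/symbol
levels = np.arange(-(M - 1), M, 2)            # +/-1, +/-3, ..., +/-15
rng = np.random.default_rng(3)
symbols = rng.choice(levels, size=200)

# Modulator: each symbol contributes a total phase increment of pi*h*a_k,
# spread evenly over sps samples.
phase = np.cumsum(np.repeat(symbols, sps) * (np.pi * h / sps))
tx = np.exp(1j * phase)

# Receiver: discriminator = phase difference of consecutive samples,
# accumulator = sum over each symbol period, then map back to the alphabet.
disc = np.angle(tx[1:] * np.conj(tx[:-1]))
disc = np.concatenate(([phase[0]], disc))     # restore the very first increment
acc = disc.reshape(-1, sps).sum(axis=1)       # accumulated phase per symbol
est = np.clip(np.round(acc / (np.pi * h)), -(M - 1), M - 1)
print("symbol errors:", int(np.sum(est != symbols)))
```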
Telesat Canada's mobile satellite system
NASA Astrophysics Data System (ADS)
Bertenyi, E.; Wachira, M.
1987-10-01
Telesat Canada plans to begin instituting mobile satellite ('Msat') services in the early 1990s, in order to permit voice and data communications between land vehicles, aircraft, and ships throughout the remote northern regions and the 200-mile offshore regions of Canada and any other point on Canadian territory. An account is presently given of Msat's overall configuration and projected capacity, together with the design features and performance capabilities of the constituent ground, space, and network control segments. Key technology items are the spacecraft high power RF amplifier and its large deployable antenna.
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
Advances in digital signal processing have been progressing towards efficient use of memory and processing. Both factors can be exploited by feasible image storage techniques that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized for the estimation and retrieval of an image. In computer vision, SIFT can be used to recognize an image by comparing its key features against saved SIFT key point descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientations and aggregating them across different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
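A minimal sketch of extracting and matching SIFT key point descriptors with OpenCV follows; it assumes opencv-python 4.4 or later (where SIFT_create lives in the main module) and uses a synthetic image purely for illustration.

```python
# Minimal sketch of SIFT keypoint extraction and descriptor matching with OpenCV.
import cv2
import numpy as np

img = np.zeros((256, 256), np.uint8)
cv2.circle(img, (80, 80), 30, 255, -1)        # simple shapes so keypoints exist
cv2.rectangle(img, (140, 120), (220, 200), 180, -1)
img2 = cv2.GaussianBlur(img, (5, 5), 1.0)     # a slightly altered second view

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img, None)   # keypoints + 128-D descriptors
kp2, desc2 = sift.detectAndCompute(img2, None)

# Recognition/retrieval is typically done by matching descriptors against a
# stored set; here a brute-force matcher with Lowe's ratio test.
if desc1 is not None and desc2 is not None and len(desc2) >= 2:
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = bf.knnMatch(desc1, desc2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    print(len(kp1), "keypoints,", len(good), "good matches")
```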
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
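The automatic relevance determination aspect of such Gaussian process models (one length-scale per descriptor, with small learned length-scales marking influential descriptors) can be sketched with scikit-learn as below; the descriptor values and the toy permeability target are synthetic placeholders, not the study's dataset.

```python
# Minimal sketch of Gaussian process regression with ARD (per-feature length-scales).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
names = ["logP", "MW", "melting_point", "H_bond_donors", "H_bond_acceptors"]
X = rng.normal(size=(80, len(names)))
logKp = 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=80)  # toy permeability

kernel = RBF(length_scale=np.ones(len(names))) + WhiteKernel(1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, logKp)

# Small learned length-scales indicate influential descriptors.
ls = gpr.kernel_.k1.length_scale
for n, l in sorted(zip(names, ls), key=lambda t: t[1]):
    print(f"{n:16s} length-scale = {l:.2f}")
```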
"Key Concepts in ELT": Taking Stock
ERIC Educational Resources Information Center
Hall, Graham
2012-01-01
This article identifies patterns and trends within "Key Concepts in ELT", both since the inception of the feature in ELT Journal in 1993 and during the 17 years of the current editorship. After outlining the aims of the series, the article identifies key themes that have emerged over time, exploring the links between "Key Concepts" pieces and the…
Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu
2015-05-01
Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interfaces (BCI). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in the time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the temporal features extracted by SCS-NMF can also be combined with other features in the frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments have also been conducted. The compared methods include ICA, PCA, Semi-NMF, Wavelets, EMD and CSP, which further verified the effectiveness of SCS-NMF. The SCS-NMF method obtained performance better than or competitive with state-of-the-art methods, which provides a novel solution for brain pattern analysis from the perspective of structure constraints. Copyright © 2015 Elsevier Ltd. All rights reserved.
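For reference, a plain semi-NMF step (multiplicative updates in the style of Ding et al.) is sketched below; the structure/ERP-envelope constraint that defines SCS-NMF is deliberately not included, so this only shows the base factorization the method builds on, applied to synthetic data.

```python
# Minimal sketch of plain semi-NMF: X ~ F @ G.T with G >= 0, F unconstrained.
import numpy as np

def semi_nmf(X, k, n_iter=200, seed=0, eps=1e-9):
    rng = np.random.default_rng(seed)
    p, n = X.shape
    G = np.abs(rng.normal(size=(n, k)))            # nonnegative encodings
    pos = lambda A: (np.abs(A) + A) / 2.0
    neg = lambda A: (np.abs(A) - A) / 2.0
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)        # unconstrained basis update
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

rng = np.random.default_rng(6)
X = rng.normal(size=(32, 500))                     # e.g. channels x time samples
F, G = semi_nmf(X, k=4)
print("relative reconstruction error:",
      np.linalg.norm(X - F @ G.T) / np.linalg.norm(X))
```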
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamann, Thomas
Dye-sensitized solar cells (DSSCs) have attracted a lot of interest as they proffer the possibility of extremely inexpensive and efficient solar energy conversion. The excellent performance of the most efficient DSSCs relies on two main features: 1) a high surface area nanoparticle semiconductor photoanode to allow for excellent light absorption with moderate extinction molecular dyes and 2) slow recombination rates from the photoanode to I3- allowing good charge collection. The I3-/I- couple, however, has some disadvantages, notably the redox potential limits the maximum open-circuit voltage, and the dye regeneration requires a large driving force which constrains the light harvesting ability. Thus, the design features that allow DSSCs to perform as well as they do also prevent further significant improvements in performance. As a consequence, the most efficient device configuration, and the maximum efficiency, has remained essentially unchanged over the last 16 years. Significant gains in performance are possible; however it will likely require a substantial paradigm shift. The general goal of this project is to understand the fundamental role of dye-sensitized solar cell, DSSC, components (sensitizer, redox shuttle, and photoanode) involved in key processes in order to overcome the kinetic and energetic constraints of current generation DSSCs. For example, the key to achieving high energy conversion efficiency DSSCs is the realization of a redox shuttle which fulfills the dual requirements of 1) efficient dye regeneration with a minimal driving force and 2) efficient charge collection. In current generation DSSCs, however, only one or the other of these requirements is met. We are currently primarily interested in understanding the physical underpinnings of the regeneration and recombination reactions. Our approach is to systematically vary the components involved in reactions and interrogate them with a series of photoelectrochemical (PEC) measurements. The lessons learned will ultimately be used to develop design rules for next generation DSSCs.
Which ante mortem clinical features predict progressive supranuclear palsy pathology?
Respondek, Gesine; Kurz, Carolin; Arzberger, Thomas; Compta, Yaroslau; Englund, Elisabet; Ferguson, Leslie W; Gelpi, Ellen; Giese, Armin; Irwin, David J; Meissner, Wassilios G; Nilsson, Christer; Pantelyat, Alexander; Rajput, Alex; van Swieten, John C; Troakes, Claire; Josephs, Keith A; Lang, Anthony E; Mollenhauer, Brit; Müller, Ulrich; Whitwell, Jennifer L; Antonini, Angelo; Bhatia, Kailash P; Bordelon, Yvette; Corvol, Jean-Christophe; Colosimo, Carlo; Dodel, Richard; Grossman, Murray; Kassubek, Jan; Krismer, Florian; Levin, Johannes; Lorenzl, Stefan; Morris, Huw; Nestor, Peter; Oertel, Wolfgang H; Rabinovici, Gil D; Rowe, James B; van Eimeren, Thilo; Wenning, Gregor K; Boxer, Adam; Golbe, Lawrence I; Litvan, Irene; Stamelou, Maria; Höglinger, Günter U
2017-07-01
Progressive supranuclear palsy (PSP) is a neuropathologically defined disease presenting with a broad spectrum of clinical phenotypes. To identify clinical features and investigations that predict or exclude PSP pathology during life, aiming at an optimization of the clinical diagnostic criteria for PSP. We performed a systematic review of the literature published since 1996 to identify clinical features and investigations that may predict or exclude PSP pathology. We then extracted standardized data from clinical charts of patients with pathologically diagnosed PSP and relevant disease controls and calculated the sensitivity, specificity, and positive predictive value of key clinical features for PSP in this cohort. Of 4166 articles identified by the database inquiry, 269 met predefined standards. The literature review identified clinical features predictive of PSP, including features of the following 4 functional domains: ocular motor dysfunction, postural instability, akinesia, and cognitive dysfunction. No biomarker or genetic feature was found reliably validated to predict definite PSP. High-quality original natural history data were available from 206 patients with pathologically diagnosed PSP and from 231 pathologically diagnosed disease controls (54 corticobasal degeneration, 51 multiple system atrophy with predominant parkinsonism, 53 Parkinson's disease, 73 behavioral variant frontotemporal dementia). We identified clinical features that predicted PSP pathology, including phenotypes other than Richardson's syndrome, with varying sensitivity and specificity. Our results highlight the clinical variability of PSP and the high prevalence of phenotypes other than Richardson's syndrome. The features of variant phenotypes with high specificity and sensitivity should serve to optimize clinical diagnosis of PSP. © 2017 International Parkinson and Movement Disorder Society.
An algorithm for calculating minimum Euclidean distance between two geographic features
NASA Astrophysics Data System (ADS)
Peuquet, Donna J.
1992-09-01
An efficient algorithm is presented for determining the shortest Euclidean distance between two features of arbitrary shape that are represented in quadtree form. These features may be disjoint point sets, lines, or polygons. It is assumed that the features do not overlap. Features also may be intertwined and polygons may be complex (i.e. have holes). Utilizing a spatial divide-and-conquer approach inherent in the quadtree data model, the basic rationale is to quickly narrow in on the portions of each feature that are on a facing edge relative to the other feature, and to minimize the number of point-to-point Euclidean distance calculations that must be performed. Besides offering an efficient, grid-based alternative solution, another unique and useful aspect of the current algorithm is that it can be used for rapidly calculating distance approximations at coarser levels of resolution. The overall process can be viewed as a top-down parallel search. Using one list of leafcode addresses for each of the two features as input, the algorithm is implemented by successively dividing these lists into four sublists for each descendant quadrant. The algorithm consists of two primary phases. The first determines facing adjacent quadrant pairs where part or all of the two features are separated between the two quadrants, respectively. The second phase then determines the closest pixel-level subquadrant pairs within each facing quadrant pair at the lowest level. The key element of the second phase is a quick estimate distance heuristic for further elimination of locations that are not as near as neighboring locations.
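The divide-and-conquer pruning idea can be sketched without leafcodes: recursively split each feature's points into quadrants and discard quadrant pairs whose bounding boxes are already farther apart than the best distance found so far. The sketch below is a plain NumPy re-creation of that strategy on random point sets, not the published implementation, and it assumes the inputs are not degenerate beyond the simple guard shown.

```python
# Minimal sketch of pruned divide-and-conquer minimum distance between two
# 2-D point features.
import numpy as np

def box_dist(a, b):
    """Lower bound: distance between the two axis-aligned bounding boxes."""
    lo_a, hi_a = a.min(axis=0), a.max(axis=0)
    lo_b, hi_b = b.min(axis=0), b.max(axis=0)
    gap = np.maximum(0.0, np.maximum(lo_a - hi_b, lo_b - hi_a))
    return float(np.hypot(*gap))

def split(pts):
    """Split a point set into its (non-empty) quadrants around the box centre."""
    c = (pts.min(axis=0) + pts.max(axis=0)) / 2.0
    quads = [pts[sx & sy]
             for sx in (pts[:, 0] <= c[0], pts[:, 0] > c[0])
             for sy in (pts[:, 1] <= c[1], pts[:, 1] > c[1])]
    quads = [q for q in quads if len(q)]
    if len(quads) == 1:                       # degenerate (e.g. duplicate points)
        h = len(pts) // 2
        return [pts[:h], pts[h:]]
    return quads

def min_dist(a, b, best=np.inf, leaf=16):
    if box_dist(a, b) >= best:
        return best                           # prune this facing pair
    if len(a) <= leaf and len(b) <= leaf:     # brute force at "pixel" level
        d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min()
        return min(best, float(d))
    qa = split(a) if len(a) > leaf else [a]
    qb = split(b) if len(b) > leaf else [b]
    pairs = [(box_dist(x, y), x, y) for x in qa for y in qb]
    for _, x, y in sorted(pairs, key=lambda t: t[0]):   # nearest pairs first
        best = min_dist(x, y, best, leaf)
    return best

rng = np.random.default_rng(7)
A = rng.uniform(0, 100, size=(500, 2))
B = rng.uniform(150, 250, size=(500, 2))
print("minimum Euclidean distance:", round(min_dist(A, B), 3))
```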
Learning from patients: Identifying design features of medicines that cause medication use problems.
Notenboom, Kim; Leufkens, Hubert Gm; Vromans, Herman; Bouvy, Marcel L
2017-01-30
Usability is a key factor in ensuring safe and efficacious use of medicines. However, several studies showed that people experience a variety of problems using their medicines. The purpose of this study was to identify design features of oral medicines that cause use problems among older patients in daily practice. A qualitative study with semi-structured interviews on the experiences of older people with the use of their medicines was performed (n=59). Information on practical problems, strategies to overcome these problems and the medicines' design features that caused these problems were collected. The practical problems and management strategies were categorised into 'use difficulties' and 'use errors'. A total of 158 use problems were identified, of which 45 were categorized as use difficulties and 113 as use error. Design features that contributed the most to the occurrence of use difficulties were the dimensions and surface texture of the dosage form (29.6% and 18.5%, respectively). Design features that contributed the most to the occurrence of use errors were the push-through force of blisters (22.1%) and tamper evident packaging (12.1%). These findings will help developers of medicinal products to proactively address potential usability issues with their medicines. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Classification of SD-OCT volumes for DME detection: an anomaly detection approach
NASA Astrophysics Data System (ADS)
Sankar, S.; Sidibé, D.; Cheung, Y.; Wong, T. Y.; Lamoureux, E.; Milea, D.; Meriaudeau, F.
2016-03-01
Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by accumulation of water molecules in the macula leading to swelling. Early detection of the disease helps prevent further loss of vision. Naturally, automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME diseases in OCT volumes is proposed in this paper. The method is based on anomaly detection using Gaussian Mixture Model (GMM). It starts with pre-processing the B-scans by resizing, flattening, filtering and extracting features from them. Both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. As the last stage, a GMM is fitted with features from normal volumes. During testing, features extracted from the test volume are evaluated with the fitted model for anomaly and classification is made based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets achieving a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, experiments show that the proposed method achieves better classification performances than other recently published works.
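The anomaly-detection stage can be sketched as follows: fit a GMM on features from normal B-scans, flag B-scans whose likelihood falls below a threshold, and classify a volume by the number of flagged B-scans. The features, threshold and outlier count below are synthetic, assumed values standing in for the intensity/LBP descriptors after PCA.

```python
# Minimal sketch of GMM-based anomaly detection for OCT volume classification.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
normal_feats = rng.normal(size=(2000, 10))            # B-scan features, normal volumes
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_feats)

threshold = np.percentile(gmm.score_samples(normal_feats), 1)  # 1st percentile

def classify_volume(volume_feats, max_outliers=5):
    outliers = (gmm.score_samples(volume_feats) < threshold).sum()
    return ("DME" if outliers > max_outliers else "normal"), int(outliers)

test_volume = np.vstack([rng.normal(size=(100, 10)),
                         rng.normal(loc=4.0, size=(28, 10))])   # 28 abnormal B-scans
print(classify_volume(test_volume))
```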
Evolutionary optimization of radial basis function classifiers for data mining applications.
Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard
2005-10-01
In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.
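A stripped-down version of evolutionary feature selection is sketched below: binary feature masks evolved by selection, one-point crossover and bit-flip mutation, with cross-validated accuracy as fitness. An RBF-kernel SVM stands in for the RBF network classifier, and the model-selection part (structure parameters) is omitted; both are assumptions made for brevity.

```python
# Minimal sketch of an evolutionary algorithm for feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X, y = make_classification(n_samples=300, n_features=25, n_informative=6,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5              # initial population of masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]            # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        child ^= rng.random(X.shape[1]) < 0.05        # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.where(best)[0], "fitness:", round(fitness(best), 3))
```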
2015-01-01
Background Investigations into novel biomarkers using omics techniques generate large amounts of data. Due to their size and numbers of attributes, these data are suitable for analysis with machine learning methods. A key component of typical machine learning pipelines for omics data is feature selection, which is used to reduce the raw high-dimensional data into a tractable number of features. Feature selection needs to balance the objective of using as few features as possible, while maintaining high predictive power. This balance is crucial when the goal of data analysis is the identification of highly accurate but small panels of biomarkers with potential clinical utility. In this paper we propose a heuristic for the selection of very small feature subsets, via an iterative feature elimination process that is guided by rule-based machine learning, called RGIFE (Rule-guided Iterative Feature Elimination). We use this heuristic to identify putative biomarkers of osteoarthritis (OA), articular cartilage degradation and synovial inflammation, using both proteomic and transcriptomic datasets. Results and discussion Our RGIFE heuristic increased the classification accuracies achieved on all datasets compared with using no feature selection, and performed well in a comparison with other feature selection methods. Using this method the datasets were reduced to a smaller number of genes or proteins, including those known to be relevant to OA, cartilage degradation and joint inflammation. The results have shown the RGIFE feature reduction method to be suitable for analysing both proteomic and transcriptomic data. Methods that generate large ‘omics’ datasets are increasingly being used in the area of rheumatology. Conclusions Feature reduction methods are advantageous for the analysis of omics data in the field of rheumatology, as the applications of such techniques are likely to result in improvements in diagnosis, treatment and drug discovery. PMID:25923811
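A rough sketch of rule-guided iterative feature elimination follows: a shallow decision tree (standing in for the rule-based learner, an assumption) is trained, features that its rules never use are dropped, and the reduction is accepted only if cross-validated accuracy does not degrade beyond a small tolerance. The data and the tolerance are synthetic, assumed values.

```python
# Minimal sketch of an iterative feature elimination loop guided by a
# rule-like model (here, a shallow decision tree).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=60, n_informative=10,
                           random_state=0)
active = np.arange(X.shape[1])
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
baseline = cross_val_score(tree, X, y, cv=5).mean()

while True:
    tree.fit(X[:, active], y)
    used = tree.feature_importances_ > 0             # features appearing in the rules
    if used.all():
        break
    candidate = active[used]                          # drop the unused ones
    acc = cross_val_score(tree, X[:, candidate], y, cv=5).mean()
    if acc < baseline - 0.02:                         # tolerate only a small drop
        break
    active = candidate

print(f"kept {len(active)} of {X.shape[1]} features, "
      f"cv accuracy {cross_val_score(tree, X[:, active], y, cv=5).mean():.3f}")
```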
Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.
2013-01-01
Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease, is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients) by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398
Predicting human olfactory perception from chemical features of odor molecules.
Keller, Andreas; Gerkin, Richard C; Guan, Yuanfang; Dhurandhar, Amit; Turu, Gabor; Szalai, Bence; Mainland, Joel D; Ihara, Yusuke; Yu, Chung Wen; Wolfinger, Russ; Vens, Celine; Schietgat, Leander; De Grave, Kurt; Norel, Raquel; Stolovitzky, Gustavo; Cecchi, Guillermo A; Vosshall, Leslie B; Meyer, Pablo
2017-02-24
It is still not possible to predict whether a given molecule will have a perceived odor or what olfactory percept it will produce. We therefore organized the crowd-sourced DREAM Olfaction Prediction Challenge. Using a large olfactory psychophysical data set, teams developed machine-learning algorithms to predict sensory attributes of molecules based on their chemoinformatic features. The resulting models accurately predicted odor intensity and pleasantness and also successfully predicted 8 among 19 rated semantic descriptors ("garlic," "fish," "sweet," "fruit," "burnt," "spices," "flower," and "sour"). Regularized linear models performed nearly as well as random forest-based ones, with a predictive accuracy that closely approaches a key theoretical limit. These models help to predict the perceptual qualities of virtually any molecule with high accuracy and also reverse-engineer the smell of a molecule. Copyright © 2017, American Association for the Advancement of Science.
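The regularized-linear-model baseline mentioned above can be sketched as a multi-output ridge regression from chemoinformatic descriptors to rated perceptual attributes; the matrices below are synthetic stand-ins for the challenge data, and the evaluation metric (per-attribute correlation) is an illustrative choice.

```python
# Minimal sketch: multi-output ridge regression from molecular descriptors
# to perceptual ratings, scored by per-attribute correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
n_mol, n_desc, n_attr = 400, 300, 21          # molecules, descriptors, rated attributes
X = rng.normal(size=(n_mol, n_desc))
W = rng.normal(size=(n_desc, n_attr)) * (rng.random((n_desc, n_attr)) < 0.05)
Y = X @ W + rng.normal(scale=0.5, size=(n_mol, n_attr))   # toy intensity, pleasantness...

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)

pred = model.predict(X_te)
r = [np.corrcoef(pred[:, j], Y_te[:, j])[0, 1] for j in range(n_attr)]
print("median correlation across attributes:", round(float(np.median(r)), 3))
```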
Specific features of goal setting in road traffic safety
NASA Astrophysics Data System (ADS)
Kolesov, V. I.; Danilov, O. F.; Petrov, A. I.
2017-10-01
Road traffic safety (RTS) management is inherently a branch of cybernetics and therefore requires clear formalization of the task. The paper aims to identify the specific features of goal setting in RTS management under the systems approach. It presents the results of cybernetic modeling of the cause-and-effect mechanism of a road traffic accident (RTA); here, the mechanism itself is viewed as a complex system. The designed management goal function focuses on minimizing the difficulty of achieving the target goal, and the target goal is optimized using the Lagrange principle. The resulting working algorithms have passed soft testing. The key role of the obtained solution in tactical and strategic RTS management is considered, and the dynamics of the management effectiveness indicator are analyzed based on ten years of statistics for Russia.
Support Vector Machines for Hyperspectral Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony; Cromp, R. F.
1998-01-01
The Support Vector Machine provides a new way to design classification algorithms that learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain performances of 96% and 87% correct for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application this is important, as hyperspectral data consist of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach and demonstrate its application to classification of an agricultural scene.
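A minimal sketch of the approach described, assuming synthetic stand-ins for hyperspectral pixels: an SVM is trained directly on the full high-dimensional spectra, with no feature-selection step.

```python
# Minimal sketch: RBF-kernel SVM classification of high-dimensional spectra.
# Synthetic 200-band "pixels" for 4 classes; not the paper's AVIRIS data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_classes, n_bands = 4, 200
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, n_bands)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```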
Overlaid caption extraction in news video based on SVM
NASA Astrophysics Data System (ADS)
Liu, Manman; Su, Yuting; Ji, Zhong
2007-11-01
Overlaid captions in news video often carry condensed semantic information which provides key cues for content-based video indexing and retrieval. However, extracting captions from video remains challenging because of complex backgrounds and low resolution. In this paper, we propose an effective overlaid caption extraction approach for news video. We first scan the video key frames using a small window, and then classify the blocks into text and non-text ones via a support vector machine (SVM), with statistical features extracted from the gray-level co-occurrence matrices, the LH and HL sub-band wavelet coefficients and the oriented edge intensity ratios. Finally, morphological filtering and projection profile analysis are employed to localize and refine the candidate caption regions. Experiments show high performance on four 30-minute news video programs.
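A rough sketch of the block-classification idea is shown below: compute simple gray-level co-occurrence statistics per block and classify blocks as text or non-text with an SVM. The wavelet and edge-orientation features from the paper are omitted, and the blocks and labels are synthetic placeholders.

```python
# Illustrative sketch: co-occurrence texture statistics per block + SVM.
import numpy as np
from sklearn.svm import SVC

def glcm_features(block, levels=16):
    """Contrast and energy of a horizontal-offset co-occurrence matrix."""
    q = (block.astype(float) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

# Hypothetical training data: labelled 16x16 blocks from key frames.
rng = np.random.default_rng(3)
blocks = rng.integers(0, 256, size=(200, 16, 16))
labels = rng.integers(0, 2, size=200)          # 1 = text, 0 = non-text (placeholder)
X = np.array([glcm_features(b) for b in blocks])
clf = SVC(kernel="rbf").fit(X, labels)
print("predicted label of first block:", clf.predict(X[:1]))
```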
Research on key technology of prognostic and health management for autonomous underwater vehicle
NASA Astrophysics Data System (ADS)
Zhou, Zhi
2017-12-01
Autonomous Underwater Vehicles (AUVs) are untethered, autonomously moving underwater robots. With a wide range of activity, they can reach thousands of kilometers. Because they offer wide range, good maneuverability, safety and intelligence, they have become important tools for various underwater tasks. How to improve the diagnostic accuracy for AUV electrical system faults, and how to use that information to repair AUVs, are focal concerns for navies worldwide. Ensuring safe and reliable operation of the system is, in turn, of great significance for improving AUV sailing performance. To address these problems, this paper investigates prognostic and health management (PHM) technology and applies it to AUVs, proposing an overall framework and key technologies such as data acquisition, feature extraction, fault diagnosis and failure prediction.
FACETS: a CloudCompare Plugin to Extract Geological Planes from Unstructured 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.
2016-06-01
Geological planar facets (stratification, fault, joint…) are key features for unraveling the tectonic history of a rock outcrop or assessing the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. A convenient software environment for efficiently segmenting massive 3D point clouds into individual planar facets was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively according to a planarity threshold into polygons. The boundaries of the polygons are adjusted around segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar objects.
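For readers unfamiliar with the dip/dip-direction output the plugin reports, the following sketch shows the underlying geometric computation under stated assumptions: a plane is fit to a point patch by least squares and its normal converted to dip and dip direction in an assumed East-North-Up frame. The point cloud is synthetic and the routine is not the plugin's actual code.

```python
# Sketch of dip / dip-direction computation from a least-squares plane fit.
import numpy as np

def dip_and_dip_direction(points):
    """points: (N, 3) array in an East-North-Up frame (assumed convention)."""
    centered = points - points.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                                   # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))    # angle from horizontal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0     # azimuth of steepest descent
    return dip, dip_dir

# Synthetic plane dipping ~30 degrees towards the east (expected: ~30, ~90).
rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, size=(500, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + rng.normal(scale=0.01, size=500)
print(dip_and_dip_direction(np.column_stack([xy, z])))
```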
Asymmetric cryptography based on wavefront sensing.
Peng, Xiang; Wei, Hengzheng; Zhang, Peng
2006-12-15
A system of asymmetric cryptography based on wavefront sensing (ACWS) is proposed for the first time to our knowledge. One of the most significant features of asymmetric cryptography is that a trapdoor one-way function is required; here it is constructed by analogy to wavefront sensing, in which the public key may be derived from optical parameters, such as the wavelength or the focal length, while the private key may be obtained from a kind of regular point array. The ciphertext is generated by the encoded wavefront and represented with an irregular array. In such an ACWS system, the encryption key is not identical to the decryption key, which is another important feature of an asymmetric cryptographic system. The processes of asymmetric encryption and decryption are formulated mathematically and demonstrated with a set of numerical experiments.
Face Alignment via Regressing Local Binary Features.
Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian
2016-03-01
This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform a quantitative evaluation of how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
Feature Masking in Computer Game Promotes Visual Imagery
ERIC Educational Resources Information Center
Smith, Glenn Gordon; Morey, Jim; Tjoe, Edwin
2007-01-01
Can learning of mental imagery skills for visualizing shapes be accelerated with feature masking? Chemistry, physics, fine arts, military tactics, and laparoscopic surgery often depend on mentally visualizing shapes in their absence. Does working with "spatial feature-masks" (skeletal shapes, missing key identifying portions) encourage people to…
NASA Technical Reports Server (NTRS)
Sullivan, T. J.; Parker, D. E.
1979-01-01
A design technology study was performed to identify a high speed, multistage, variable geometry fan configuration capable of achieving wide flow modulation with near optimum efficiency at the important operating condition. A parametric screening study of the front and rear block fans was conducted in which the influence of major fan design features on weight and efficiency was determined. Key design parameters were varied systematically to determine the fan configuration most suited for a double bypass, variable cycle engine. Two and three stage fans were considered for the front block. A single stage, core driven fan was studied for the rear block. Variable geometry concepts were evaluated to provide near optimum off design performance. A detailed aerodynamic design and a preliminary mechanical design were carried out for the selected fan configuration. Performance predictions were made for the front and rear block fans.
Performance assessment of a closed-loop system for diabetes management.
Martinez-Millana, A; Fico, G; Fernández-Llatas, C; Traver, V
2015-12-01
Telemedicine systems can play an important role in the management of diabetes, a chronic condition that is increasing worldwide. Evaluations of the consistency of information across these systems and of their performance in real situations are still missing. This paper presents a remote monitoring system for diabetes management based on physiological sensors, mobile technologies and patient/doctor applications over a service-oriented architecture that has been evaluated in an international trial (83,905 operation records). The proposed system integrates three types of running environments and data engines in a single service-oriented architecture. This feature is used to assess key performance indicators, comparing them with other types of architectures. Data sustainability across the applications has been evaluated, showing better outcomes for fully integrated sensors. At the same time, the runtime performance of clients has been assessed, showing no differences related to the operating environment.
Challenging ocular image recognition
NASA Astrophysics Data System (ADS)
Pauca, V. Paúl; Forkin, Michael; Xu, Xiao; Plemmons, Robert; Ross, Arun A.
2011-06-01
Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages for increasing the area beyond the iris, yet there are also key issues that must be addressed such as size of the ocular region, factors affecting performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed where iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic recognition performance gain when additional features are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.
Sequential data access with Oracle and Hadoop: a performance comparison
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Canali, Luca; Grancher, Eric
2014-06-01
The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions such as MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
Breast cancer detection in rotational thermography images using texture features
NASA Astrophysics Data System (ADS)
Francis, Sheeja V.; Sasikala, M.; Bhavani Bharathi, G.; Jaipurkar, Sandeep D.
2014-11-01
Breast cancer is a major cause of mortality in young women in developing countries. Early diagnosis is the key to improving survival rates in cancer patients. Breast thermography is a diagnostic procedure that non-invasively images the infrared emissions from the breast surface to aid in the early detection of breast cancer. Due to limitations in the imaging protocol, abnormality detection by conventional breast thermography is often a challenging task. Rotational thermography is a novel technique developed to overcome the limitations of conventional breast thermography. This paper evaluates this technique's potential for automatic detection of breast abnormality, from the perspective of a cold challenge. Texture features are extracted in the spatial domain from rotational thermogram series, before and after the application of the cold challenge. These features are fed to a support vector machine for automatic classification of normal and malignant breasts, resulting in a classification accuracy of 83.3%. Feature reduction has been performed by principal component analysis. As a novel attempt, the ability of this technique to locate the abnormality has been studied. The results of the study indicate that rotational thermography holds great potential as a screening tool for breast cancer detection.
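A hedged sketch of the classification stage only: texture feature vectors are reduced by principal component analysis and classified with an SVM under cross-validation. Feature extraction from the thermograms is not reproduced here, and the inputs are synthetic.

```python
# Minimal sketch: PCA feature reduction followed by SVM classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 40))        # 60 cases x 40 texture features (placeholder)
y = rng.integers(0, 2, size=60)      # 1 = malignant, 0 = normal (placeholder labels)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```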
METHODOLOGICAL QUALITY OF ECONOMIC EVALUATIONS ALONGSIDE TRIALS OF KNEE PHYSIOTHERAPY.
García-Pérez, Lidia; Linertová, Renata; Arvelo-Martín, Alejandro; Guerra-Marrero, Carolina; Martínez-Alberto, Carlos Enrique; Cuéllar-Pompa, Leticia; Escobar, Antonio; Serrano-Aguilar, Pedro
2017-01-01
The methodological quality of an economic evaluation performed alongside a clinical trial can be underestimated if the paper does not report key methodological features. This study discusses methodological assessment issues using the example of a systematic review on the cost-effectiveness of physiotherapy for knee osteoarthritis. Six economic evaluation studies included in the systematic review and the related clinical trials were assessed using the 10-question checklist by Drummond and the Physiotherapy Evidence Database (PEDro) scale. All economic evaluations were performed alongside a clinical trial, but the studied interventions were too heterogeneous to be synthesized. The methodological quality of the economic evaluations reported in the papers was not free of drawbacks, and in some cases it improved when information from the related clinical trial was taken into account. Economic evaluation papers dedicate little space to the methodological features of related clinical trials; therefore, the methodological quality can be underestimated if evaluated separately from the trials. Future economic evaluations should follow the methodological recommendations more strictly, and the authors should pay special attention to the quality of reporting.
The role played by different TiO2 features on the photocatalytic degradation of paracetamol
NASA Astrophysics Data System (ADS)
Rimoldi, Luca; Meroni, Daniela; Falletta, Ermelinda; Ferretti, Anna Maria; Gervasini, Antonella; Cappelletti, Giuseppe; Ardizzone, Silvia
2017-12-01
Photocatalytic reactions promoted by TiO2 can be affected by a large number of oxide features (e.g. surface area, morphology and phase composition). In this context, the role played by surface characteristics (e.g. surface acidity, wettability, etc.) has often been disregarded. In this work, pristine and Ta-doped TiO2 nanomaterials with different phase compositions (pure anatase and an anatase/brookite mixture) were synthesized by sol-gel and characterized from a structural and morphological point of view. A careful characterization of the acid properties of the materials was performed by liquid-solid acid-base titration using 2-phenylethylamine (PEA) adsorption to determine the acid site density and average acid strength. Photocatalytic tests were performed on the degradation of paracetamol (acetaminophen) under UV irradiation and the results were discussed in light of the detailed scenarios describing the different oxides. The surface acidity of the samples was recognized as one of the key parameters controlling the photocatalytic activity. A possible degradation route for the molecule is proposed on the basis of GC-MS and ESI-MS analyses.
Enabling Future Robotic Missions with Multicore Processors
NASA Technical Reports Server (NTRS)
Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.
2011-01-01
Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high-performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.
A Hierarchical Convolutional Neural Network for vesicle fusion event classification.
Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke
2017-09-01
Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. First, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity-change features introduced by the GMM and the visual appearance features embedded in key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
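As a sketch of one ingredient of such a pipeline, the snippet below fits a simple Gaussian description to each patch of a candidate event via intensity-weighted moments and tracks amplitude and width over time, producing the kind of intensity-change time series that could feed a classifier. The robust outlier rejection and the HCNN itself are not reproduced, and the patch sequence is synthetic.

```python
# Illustrative sketch: moment-based Gaussian description of each image patch
# in a candidate fusion event, tracked over time.
import numpy as np

def gaussian_moments(patch):
    """Return (amplitude, sigma) of a patch via intensity-weighted moments."""
    patch = patch - patch.min()
    total = patch.sum() + 1e-12
    y, x = np.indices(patch.shape)
    cx, cy = (x * patch).sum() / total, (y * patch).sum() / total
    var = (((x - cx) ** 2 + (y - cy) ** 2) * patch).sum() / total
    return patch.max(), np.sqrt(var / 2.0)

# Hypothetical patch sequence around a fusion candidate (20 frames of 15x15 px).
rng = np.random.default_rng(6)
seq = rng.poisson(5.0, size=(20, 15, 15)).astype(float)
features = np.array([gaussian_moments(p) for p in seq])   # (T, 2) time series
print("amplitude trace:", features[:, 0])
```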
A Preliminary Investigation of Hall Thruster Technology
NASA Technical Reports Server (NTRS)
Gallimore, Alec D.
1997-01-01
A three-year NASA/BMDO-sponsored experimental program to conduct performance and plume plasma property measurements on two Russian Stationary Plasma Thrusters (SPTs) has been completed. The program utilized experimental facilities at the University of Michigan's Plasmadynamics and Electric Propulsion Laboratory (PEPL). The main features of the proposed effort were as follows: (1) Characterized Hall thruster (and arcjet) performance by measuring ion exhaust velocity with probes at various thruster conditions; (2) Used a variety of probe diagnostics in the thruster plume to measure plasma properties and flow properties including T(sub e) and n(sub e), ion current density and ion energy distribution, and electric fields by mapping plasma potential; (3) Used emission spectroscopy to identify species within the plume and to measure electron temperatures. A key and unique feature of our research was our collaboration with Russian Hall thruster researcher Dr. Sergey A. Khartov, Deputy Dean of International Relations at the Moscow Aviation Institute (MAI). His activities in this program included consulting on and participation in research at PEPL through use of an MAI-built SPT and an ion energy probe.
Preliminary Assessment of Thrust Augmentation of NEP Based Missions
NASA Technical Reports Server (NTRS)
Chew, Gilbert; Pelaccio, Dennis G.; Chiroux, Robert; Pervan, Sherry; Rauwolf, Gerald A.; White, Charles
2005-01-01
Science Applications International Corporation (SAIC), with support from NASA Marshall Space Flight Center, has conducted a preliminary study to compare options for augmenting the thrust of a conventional nuclear electric propulsion (NEP) system. These options include a novel nuclear propulsion system concept known as Hybrid Indirect Nuclear Propulsion (HINP) and conventional chemical propulsion. The utility and technical feasibility of the HINP concept are assessed, and features and potential of this new in-space propulsion system concept are identified. As part of the study, SAIC developed top-level design tools to model the size and performance of an HINP system, as well as for several chemical propulsion options, including liquid and gelled propellants. A mission trade study was performed to compare a representative HINP system with chemical propulsion options for thrust augmentation of NEP systems for a mission to Saturn's moon Titan. Details pertaining to the approach, features, initial demonstration results for HINP model development, and the mission trade study are presented. Key technology and design issues associated with the HINP concept and future work recommendations are also identified.
Naeem, Muhammad Awais; Armutlulu, Andac; Imtiaz, Qasim; Donat, Felix; Schäublin, Robin; Kierzkowska, Agnieszka; Müller, Christoph R
2018-06-19
Calcium looping, a CO2 capture technique, may offer a mid-term, if not near-term, solution to mitigate climate change driven by still-increasing anthropogenic CO2 emissions. A key requirement for the economic operation of calcium looping is the availability of highly effective CaO-based CO2 sorbents. Here we report a facile synthesis route that yields hollow, MgO-stabilized CaO microspheres featuring highly porous, multishelled morphologies. As a thermal stabilizer, MgO minimized the sintering-induced decay of the sorbents' CO2 capacity and ensured a stable CO2 uptake over multiple operation cycles. Detailed electron microscopy-based analyses confirm a compositional homogeneity which is identified, together with the characteristics of its porous structure, as an essential feature for yielding a high-performance sorbent. After 30 cycles of repeated CO2 capture and sorbent regeneration, the best-performing material requires as little as 11 wt.% MgO for structural stabilization and exceeds the CO2 uptake of the limestone-derived reference material by ~500%.
Perkins, Casey; Muller, George
2015-10-08
The number of connections between physical and cyber security systems is rapidly increasing due to centralized control from automated and remotely connected means. As the number of interfaces between systems continues to grow, the interactions and interdependencies between them cannot be ignored. Historically, physical and cyber vulnerability assessments have been performed independently. This independent evaluation omits important aspects of the integrated system, where the impacts resulting from malicious or opportunistic attacks are not easily known or understood. Here, we describe a discrete event simulation model that uses information about integrated physical and cyber security systems, attacker characteristics and simple response rules to identify key safeguards that limit an attacker's likelihood of success. Key features of the proposed model include comprehensive data generation to support a variety of sophisticated analyses, and full parameterization of safeguard performance characteristics and attacker behaviours to evaluate a range of scenarios. Lastly, we also describe the core data requirements and the network of networks that serves as the underlying simulation structure.
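A very simplified sketch of the general idea (not the authors' model): simulate many attack attempts against a chain of physical and cyber safeguards, each with a detection probability and a traversal delay, and estimate the attacker's likelihood of success. All safeguard names and parameters are invented for illustration.

```python
# Monte Carlo sketch of attacker success against layered safeguards.
import random

SAFEGUARDS = [                      # (name, detection probability, delay in minutes)
    ("perimeter sensor", 0.6, 2.0),
    ("badge reader",     0.5, 1.0),
    ("network firewall", 0.7, 0.5),
    ("vault door",       0.8, 10.0),
]
RESPONSE_TIME = 8.0                 # minutes for responders to interrupt the attack

def attack_succeeds(rng):
    elapsed, detected_at = 0.0, None
    for _, p_detect, delay in SAFEGUARDS:
        if detected_at is None and rng.random() < p_detect:
            detected_at = elapsed
        elapsed += delay
        if detected_at is not None and elapsed - detected_at >= RESPONSE_TIME:
            return False            # responders arrive before the attacker finishes
    return True

rng = random.Random(0)
trials = 100_000
print("P(success) ~", sum(attack_succeeds(rng) for _ in range(trials)) / trials)
```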
Seven Key Principles of Program and Project Success: A Best Practices Survey
NASA Technical Reports Server (NTRS)
Bilardo, Vincent J.; Korte, John J.; Dankhoff, Walter; Langan, Kevin; Branscome, Darrell R.; Fragola, Joseph R.; Dugal, Dale J.; Gormley, Thomas J.; Hammond, Walter E.; Hollopeter, James J.;
2008-01-01
The National Aeronautics and Space Administration (NASA) Organization Design Team (ODT), consisting of 20 seasoned program and project managers and systems engineers from a broad spectrum of the aerospace industry, academia, and government, was formed to support the Next Generation Launch Technology (NGLT) Program and the Constellation Systems Program. The purpose of the ODT was to investigate organizational factors that can lead to success or failure of complex government programs, and to identify tools and methods for the design, modeling, and analysis of new and more-efficient program and project organizations. The ODT conducted a series of workshops featuring invited lectures from seasoned program and project managers representing 25 significant technical programs spanning 50 years of experience. The result was the identification of seven key principles of program success that can be used to help design and operate future program organizations. This paper presents the success principles and examples of best practices that can significantly improve the design of program, project, and performing technical line organizations, the assessment of workforce needs and organization performance, and the execution of programs and projects.
High-performance MCT and QWIP IR detectors at Sofradir
NASA Astrophysics Data System (ADS)
Reibel, Yann; Rubaldo, Laurent; Manissadjian, Alain; Billon-Lanfrey, David; Rothman, Johan; de Borniol, Eric; Destéfanis, Gérard; Costard, E.
2012-11-01
Cooled IR technologies are challenged to answer new system needs such as compactness and reduced cryo-power, which are key features for SWaP (Size, Weight and Power) requirements. This paper describes the status of MCT IR technology in France at Leti and Sofradir. A focus is made on hot detector technology for SWaP applications. Sofradir has improved its HgCdTe technology to open the way for High Operating Temperature systems that ease the Stirling cooler engine power consumption. Solutions for high-performance detectors such as dual bands, much smaller pixel pitch or megapixels will also be discussed. In the meantime, the development of avalanche photodiodes and TV formats with digital interfaces is key to bringing customers cutting-edge functionalities. Since 1997, Sofradir has been working with Thales Research and Technology (TRT) to develop and produce Quantum Well Infrared Photodetectors (QWIP) as a complementary offer to MCT, to provide large LW staring arrays. A dual-band MW-LW QWIP detector (25 μm pitch, 384×288 IDDCA) is currently under development. We present in this paper its latest results.
NASA Astrophysics Data System (ADS)
Wiegart, L.; Rakitin, M.; Fluerasu, A.; Chubar, O.
2017-08-01
We present the application of fully- and partially-coherent synchrotron radiation wavefront propagation simulation functions, implemented in the "Synchrotron Radiation Workshop" computer code, to create a 'virtual beamline' mimicking the Coherent Hard X-ray scattering beamline at NSLS-II. The beamline simulation includes all optical beamline components, such as the insertion device, a mirror with metrology data, slits, a double crystal monochromator and refractive focusing elements (compound refractive lenses and kinoform lenses). A feature of this beamline is the exploitation of X-ray beam coherence, boosted by the low-emittance NSLS-II storage ring, for techniques such as X-ray Photon Correlation Spectroscopy or Coherent Diffraction Imaging. The key performance parameters are the degree of X-ray beam coherence and the photon flux, and the trade-off between them needs to guide the beamline settings for specific experimental requirements. Simulations of key performance parameters are compared to measurements obtained during beamline commissioning, and include the spectral flux of the undulator source, the degree of transverse coherence as well as focal spot sizes.
Sack, Lawren; Scoffoni, Christine
2013-06-01
The design and function of leaf venation are important to plant performance, with key implications for the distribution and productivity of ecosystems, and applications in paleobiology, agriculture and technology. We synthesize classical concepts and the recent literature on a wide range of aspects of leaf venation. We describe 10 major structural features that contribute to multiple key functions, and scale up to leaf and plant performance. We describe the development and plasticity of leaf venation and its adaptation across environments globally, and a new global data compilation indicating trends relating vein length per unit area to climate, growth form and habitat worldwide. We synthesize the evolution of vein traits in the major plant lineages throughout paleohistory, highlighting the multiple origins of individual traits. We summarize the strikingly diverse current applications of leaf vein research in multiple fields of science and industry. A unified core understanding will enable an increasing range of plant biologists to incorporate leaf venation into their research. © 2013 The Authors New Phytologist © 2013 New Phytologist Trust.
Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor
NASA Astrophysics Data System (ADS)
Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.
2014-06-01
Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate, as observed in this study. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25 - 20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.
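The two-stage estimation described in the abstract can be sketched as follows: a support vector classifier first estimates the order of curvature from tactile features, and a curvature-specific support vector regressor then estimates the footprint dimension. Features, labels and hyperparameters below are synthetic placeholders, not the study's data.

```python
# Sketch: SVC for curvature class, then curvature-specific SVR for size.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 12))                     # tactile features per exploration
curv = rng.integers(0, 3, size=300)                # 0 planar, 1 conical, 2 spherical
size = rng.uniform(1.25, 20.0, size=300)           # footprint dimension in mm

curv_clf = SVC(kernel="rbf").fit(X, curv)
size_regs = {c: SVR(kernel="rbf", C=10.0).fit(X[curv == c], size[curv == c])
             for c in range(3)}

x_new = X[:1]
c_hat = int(curv_clf.predict(x_new)[0])            # first estimate curvature class
d_hat = size_regs[c_hat].predict(x_new)[0]         # then a curvature-specific size model
print("estimated curvature class:", c_hat, "estimated size (mm):", round(d_hat, 2))
```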
Digital holographic-based cancellable biometric for personal authentication
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Sinha, Aloka
2016-05-01
In this paper, we propose a new digital holographic-based cancellable biometric scheme for personal authentication and verification. The realization of the cancellable biometric is presented using an optoelectronic experimental approach, in which an optically recorded hologram of the fingerprint of a person is numerically reconstructed. Each reconstructed feature has its own perspective, which is utilized to generate user-specific fingerprint features using a feature-extraction process. New representations of the user-specific fingerprint features can be obtained from the same hologram by changing the reconstruction distance (d) by an amount Δd between the recording plane and the reconstruction plane. This parameter is the key to making the user-specific fingerprint features cancellable with a digital holographic technique, which allows us to choose different reconstruction distances when reissuing the user-specific fingerprint features in the event of compromise. We have shown theoretically that each user-specific fingerprint feature has a unique identity with high discrimination ability, and the chances of a match between them are minimal. In this context, a recognition system has also been demonstrated using the fingerprint biometric of the enrolled person at a particular reconstruction distance. For the performance evaluation of the fingerprint recognition system, the false acceptance ratio, the false rejection ratio and the equal error rate are calculated using correlation. The obtained results show good discrimination ability between the genuine and impostor populations, with a highest recognition rate of 98.23%.
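The numerical step the scheme relies on, reconstructing a recorded hologram at a chosen distance d so that a different d yields a different (revocable) template, can be sketched with the standard angular spectrum method. Wavelength, pixel pitch and the hologram array below are placeholder values, and this is not the authors' implementation.

```python
# Angular spectrum propagation of a recorded hologram to distance d.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, d):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.clip(arg, 0.0, None))
    H = np.where(arg > 0, np.exp(1j * kz * d), 0.0)    # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

hologram = np.random.default_rng(8).random((256, 256))   # stand-in for a recorded hologram
recon = angular_spectrum_propagate(hologram, wavelength=632.8e-9,
                                   pixel_pitch=4.65e-6, d=0.05)
features = np.abs(recon)          # amplitude image used for feature extraction
print(features.shape)
```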
Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann
2013-06-01
Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary outputs of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we find that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights these drawbacks and improves upon DROBA with a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features, and 2) a genuine interval concealment technique to alleviate the crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
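A minimal sketch of the generic discretization steps the abstract describes (not DROBA nor the proposed two-stage improvement): each feature dimension is cut into 2^b equal-probability intervals learned from background data, and the interval index of a user's feature element is emitted as b bits. The bit allocation here is fixed and illustrative.

```python
# Generic per-feature interval discretization into a binary string.
import numpy as np

def learn_intervals(background, bits_per_feature):
    """Return per-feature interval edges from background statistics."""
    edges = []
    for j, b in enumerate(bits_per_feature):
        qs = np.linspace(0, 1, 2 ** b + 1)[1:-1]
        edges.append(np.quantile(background[:, j], qs))
    return edges

def discretize(x, edges, bits_per_feature):
    out = ""
    for j, b in enumerate(bits_per_feature):
        label = int(np.searchsorted(edges[j], x[j]))      # interval index in [0, 2^b)
        out += format(label, "0{}b".format(b))            # fixed-width binary label
    return out

rng = np.random.default_rng(9)
background = rng.normal(size=(1000, 4))                   # population feature vectors
bits = [2, 3, 1, 2]                                       # illustrative bit allocation
edges = learn_intervals(background, bits)
print(discretize(rng.normal(size=4), edges, bits))        # e.g. an 8-bit string
```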
Examining sustainability in a hospital setting: case of smoking cessation.
Campbell, Sharon; Pieters, Karen; Mullen, Kerri-Anne; Reece, Robin; Reid, Robert D
2011-09-14
The Ottawa Model of Smoking Cessation (OMSC) is a hospital-based smoking cessation program that is expanding across Canada. While the short-term effectiveness of hospital cessation programs has been documented, less is known about long-term sustainability. The purpose of this exploratory study was to understand how hospitals using the OMSC were addressing sustainability and determine if there were critical factors or issues that should be addressed as the program expanded. Six hospitals that differed on OMSC program activities (identify and document smokers, advise quitting, provide medication, and offer follow-up) were intentionally selected, and two key informants per hospital were interviewed using a semi-structured interview guide. Key informants were asked to reflect on the initial decision to implement the OMSC, the current implementation process, and perceived sustainability of the program. Qualitative analysis of the interview transcripts was conducted and themes related to problem definition, stakeholder influence, and program features emerged. Sustainability was operationalized as higher performance of OMSC activities than at baseline. Factors identified in the literature as important for sustainability, such as program design, differences in implementation, organizational characteristics, and the community environment did not explain differences in program sustainability. Instead, key informants identified factors that reflected the interaction between how the health problem was defined by stakeholders, how priorities and concerns were addressed, features of the program itself, and fit within the hospital context and resources as being influential to the sustainability of the program. Applying a sustainability model to a hospital smoking cessation program allowed for an examination of how decisions made during implementation may impact sustainability. Examining these factors during implementation may provide insight into issues affecting program sustainability, and foster development of a sustainability plan. Based on this study, we suggest that sustainability plans should focus on enhancing interactions between the health problem, program features, and stakeholder influence.
Examining sustainability in a hospital setting: Case of smoking cessation
2011-01-01
Background The Ottawa Model of Smoking Cessation (OMSC) is a hospital-based smoking cessation program that is expanding across Canada. While the short-term effectiveness of hospital cessation programs has been documented, less is known about long-term sustainability. The purpose of this exploratory study was to understand how hospitals using the OMSC were addressing sustainability and determine if there were critical factors or issues that should be addressed as the program expanded. Methods Six hospitals that differed on OMSC program activities (identify and document smokers, advise quitting, provide medication, and offer follow-up) were intentionally selected, and two key informants per hospital were interviewed using a semi-structured interview guide. Key informants were asked to reflect on the initial decision to implement the OMSC, the current implementation process, and perceived sustainability of the program. Qualitative analysis of the interview transcripts was conducted and themes related to problem definition, stakeholder influence, and program features emerged. Results Sustainability was operationalized as higher performance of OMSC activities than at baseline. Factors identified in the literature as important for sustainability, such as program design, differences in implementation, organizational characteristics, and the community environment did not explain differences in program sustainability. Instead, key informants identified factors that reflected the interaction between how the health problem was defined by stakeholders, how priorities and concerns were addressed, features of the program itself, and fit within the hospital context and resources as being influential to the sustainability of the program. Conclusions Applying a sustainability model to a hospital smoking cessation program allowed for an examination of how decisions made during implementation may impact sustainability. Examining these factors during implementation may provide insight into issues affecting program sustainability, and foster development of a sustainability plan. Based on this study, we suggest that sustainability plans should focus on enhancing interactions between the health problem, program features, and stakeholder influence. PMID:21917156
Study for Updated Gout Classification Criteria (SUGAR): identification of features to classify gout
Taylor, William J.; Fransen, Jaap; Jansen, Tim L.; Dalbeth, Nicola; Schumacher, H. Ralph; Brown, Melanie; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Eliseev, Maxim; McCarthy, Geraldine; Stamp, Lisa K.; Perez-Ruiz, Fernando; Sivera, Francisca; Ea, Hang-Korng; Gerritsen, Martijn; Scire, Carlo; Cavagna, Lorenzo; Lin, Chingtsai; Chou, Yin-Yi; Tausche, Anne-Kathrin; Vargas-Santos, Ana Beatriz; Janssen, Matthijs; Chen, Jiunn-Horng; Slot, Ole; Cimmino, Marco A.; Uhlig, Till; Neogi, Tuhina
2015-01-01
Objective To determine which clinical, laboratory and imaging features most accurately distinguish gout from non-gout. Methods A cross-sectional study of consecutive rheumatology clinic patients with at least one swollen joint or subcutaneous tophus. Gout was defined by synovial fluid or tophus aspirate microscopy by certified examiners in all patients. The sample was randomly divided into a model development sample (2/3) and a test sample (1/3). Univariate and multivariate associations between clinical features and MSU-defined gout were determined using logistic regression modelling. Shrinkage of regression weights was performed to prevent over-fitting of the final model. Latent class analysis was conducted to identify patterns of joint involvement. Results In total, 983 patients were included. Gout was present in 509 (52%). In the development sample (n=653), the following features were selected for the final model (multivariate OR): joint erythema (2.13), difficulty walking (7.34), time to maximal pain < 24 hours (1.32), resolution by 2 weeks (3.58), tophus (7.29), MTP1 ever involved (2.30), location of currently tender joints in the other foot/ankle (2.28) or MTP1 (2.82), serum urate level > 6 mg/dl (0.36 mmol/l) (3.35), ultrasound double contour sign (7.23), and X-ray erosion or cyst (2.49). The final model performed adequately in the test set with no evidence of misfit and high discrimination and predictive ability. MTP1 involvement was the most common joint pattern (39.4%) in gout cases. Conclusion Ten key discriminating features have been identified for further evaluation for new gout classification criteria. Ultrasound findings and degree of uricemia add discriminating value and will contribute significantly to more accurate classification criteria. PMID:25777045
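As an illustrative (not clinical) sketch of how such multivariate odds ratios combine in a logistic model, the snippet below sums the log odds ratios for a hypothetical patient; the intercept is not reported in the abstract, so the value used is a made-up placeholder, and this is not the published scoring system.

```python
# Illustrative logistic scoring from the reported multivariate odds ratios.
import math

odds_ratios = {
    "joint_erythema": 2.13, "difficulty_walking": 7.34, "max_pain_lt_24h": 1.32,
    "resolution_by_2wk": 3.58, "tophus": 7.29, "mtp1_ever_involved": 2.30,
    "tender_other_foot_ankle": 2.28, "tender_mtp1": 2.82, "urate_gt_6mgdl": 3.35,
    "us_double_contour": 7.23, "xray_erosion_or_cyst": 2.49,
}
INTERCEPT = -3.0                                   # hypothetical, for illustration only

patient = {k: 0 for k in odds_ratios}              # 1 = feature present, 0 = absent
patient.update({"tophus": 1, "tender_mtp1": 1, "urate_gt_6mgdl": 1})

logit = INTERCEPT + sum(math.log(odds_ratios[k]) * v for k, v in patient.items())
print("illustrative predicted probability:", 1 / (1 + math.exp(-logit)))
```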
Scianna, Claudia; Niccolini, Federico; Bianchi, Carlo Nike; Guidetti, Paolo
2018-06-18
Marine Protected Areas (MPAs) are important tools for achieving marine conservation and resource management goals. The management effectiveness of MPAs (the degree to which MPAs achieve their goals) is highly variable and can be affected by many MPA attributes, for example their design, enforcement and age. Another key factor possibly affecting MPA management effectiveness is management performance, here conceived, following the definition of Horigue et al. (2014), as the "level of effort exerted to enhance and sustain management of MPAs". Organization Science (OS), the discipline that studies organizations, can offer a useful framework to assess and interpret MPA management performance. Using an exploratory multiple case study approach, we applied OS principles to 11 Mediterranean MPAs in order to: i) characterize several MPA organizational features; ii) assess MPA management performance (evaluated as the effort deployed in, for example, planning for the future, formalizing measurable goals, and implementing specific strategies). Results show that a number of organizational features and networking attributes are highly variable among the MPAs we have studied. For instance, goals are seldom measurable and the strategy to achieve goals is not systematically pursued. Two relevant outcomes emerge from this exploratory study: i) the management performance of the MPAs considered needs considerable improvement; ii) the methods and the approach proposed could help MPA managers and policy makers to understand how to improve their management performance and, consequently, their effectiveness. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Liu, Zhenyu; Cui, Xingwei; Tang, Zhenchao; Dong, Di; Zang, Yali; Tian, Jie
2017-03-01
Previous research has shown that type 2 diabetes mellitus (T2DM) is associated with an increased risk of cognitive impairment. Early detection of brain abnormalities at the preclinical stage can be useful for developing preventive interventions to abate cognitive decline. We aimed to investigate the whole-brain resting-state functional connectivity (RSFC) patterns of T2DM patients between 90 regions of interest (ROIs) based on RS-fMRI data, which can be used to test the feasibility of distinguishing T2DM patients with cognitive impairment from other T2DM patients. 74 patients were recruited in this study and multivariate pattern analysis was utilized to assess the prediction performance. Elastic net was first used to select the key features for prediction, and then a linear discrimination model was constructed. 23 RSFCs were selected, achieving a classification accuracy of 90.54% and an area under the receiver operating characteristic curve (AUC) of 0.944 using ten-fold cross-validation. The results provide strong evidence that functional interactions of brain regions undergo notable alterations between T2DM patients with and without cognitive impairment. By analyzing the RSFCs that were selected as key features, we found that most of them involved the frontal or temporal lobes. We speculate that cognitive impairment in T2DM patients mainly impacts these two lobes. Overall, the present study indicates that RSFCs undergo notable alterations associated with cognitive impairment in T2DM patients, and that it is possible to predict cognitive impairment early from RSFCs.
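A hedged sketch of the analysis pattern described: elastic-net-based selection of discriminative connectivity features followed by a linear discriminant classifier, evaluated with ten-fold cross-validated AUC. The connectivity matrix and labels below are synthetic, and the paper's exact settings may differ.

```python
# Elastic-net feature selection + linear discriminant analysis, 10-fold AUC.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
X = rng.normal(size=(74, 4005))   # 74 patients x upper-triangle RSFCs of a 90-ROI matrix
y = rng.integers(0, 2, size=74)   # 1 = cognitive impairment (placeholder labels)

selector = SelectFromModel(
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=0.1, max_iter=5000))
clf = make_pipeline(StandardScaler(), selector, LinearDiscriminantAnalysis())
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print("10-fold AUC:", auc.mean())
```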
3D Kinematics and Hydrodynamic Analysis of Freely Swimming Cetacean
NASA Astrophysics Data System (ADS)
Ren, Yan; Sheinberg, Dustin; Liu, Geng; Dong, Haibo; Fish, Frank; Javed, Joveria
2015-11-01
It is widely thought that flexibility and the ability to control flexibility are crucial in determining the performance of animal swimming. However, there is a lack of quantification of both span-wise and chord-wise deformation of cetacean flukes and the associated hydrodynamic performance during active swimming. To fill this gap, we examined the motion and flexure of both a dolphin fluke and an orca fluke in steady swimming using a combined experimental and computational approach. It is found that fluke surface morphing can effectively modulate the flow structures and influence the propulsive performance. Findings from this work are fundamental for understanding key kinematic features of effective cetacean propulsors, and for quantifying the hydrodynamic force production that naturally occurs during different types of swimming. This work is supported by ONR MURI N00014-14-1-0533 and NSF CBET-1313217.
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
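The abstract highlights a wide, vectorizable inner loop as a key algorithmic feature. As a rough illustration only (written in NumPy rather than the optimized proxy applications), the sketch below attenuates the angular flux of one characteristic track through a sequence of flat-source segments with the energy-group axis vectorized; cross sections, sources and segment lengths are synthetic placeholders.

```python
# Simplified MOC-style segment sweep with a vectorized energy-group axis.
import numpy as np

rng = np.random.default_rng(11)
n_segments, n_groups = 200, 64
sigma_t = rng.uniform(0.1, 2.0, size=(n_segments, n_groups))   # total cross sections
source = rng.uniform(0.0, 1.0, size=(n_segments, n_groups))    # flat segment sources
length = rng.uniform(0.1, 1.0, size=n_segments)                # segment chord lengths

psi = np.zeros(n_groups)                 # incoming angular flux for this track
scalar_flux = np.zeros_like(sigma_t)     # tallied contribution per segment/group
for s in range(n_segments):              # serial over segments along the track
    tau = sigma_t[s] * length[s]
    delta = (psi - source[s] / sigma_t[s]) * (1.0 - np.exp(-tau))   # vector over groups
    scalar_flux[s] += delta / sigma_t[s] + source[s] * length[s] / sigma_t[s]
    psi -= delta
print(psi[:4])
```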
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.
Very High Reflectivity Supermirrors And Their Applications
NASA Astrophysics Data System (ADS)
Mezei, F.
1989-01-01
Very high reflectivity (some 95% or better) supermirrors, with cut-off angles up to 2 times the critical angle of Ni-coated simple total reflection neutron mirrors, can be produced using well-established conventional deposition techniques. This performance makes applications involving multiple reflections and transmission geometries feasible, which in turn allows us to use more sophisticated neutron optical systems in order to optimize performance and minimize the amount of scarce supermirror required. A key feature of several of these novel systems is the distribution of tasks among several optical components, achieving the desired performance through multiple actions. The design and characteristics of a series of novel applications, such as polarizing cavities, collimators and guides, non-polarizing guides, beam compressors, deflectors and splitters (most of them tested or being implemented) are the main subjects of the present paper.
NASA Technical Reports Server (NTRS)
Ghista, D. N.; Rasmussen, D. N.; Linebarger, R. N.; Sandler, H.
1971-01-01
Interdisciplinary engineering research effort in studying the intact human left ventricle has been employed to physiologically monitor the heart and to obtain its 'state-of-health' characteristics. The left ventricle was selected for this purpose because it plays a key role in supplying energy to the body cells. The techniques for measurement of the left ventricular geometry are described; the geometry is effectively displayed to bring out the abnormalities in cardiac function. Methods of mathematical modeling, which make it possible to determine the performance of the intact left ventricular muscle, are also described. Finally, features of a control system for the left ventricle for predicting the effect of certain physiological stress situations on the ventricle performance are discussed.
NASA Astrophysics Data System (ADS)
Hansford, Graeme M.; Freshwater, Ray A.; Eden, Louise; Turnbull, Katharine F. V.; Hadaway, David E.; Ostanin, Victor P.; Jones, Roderic L.
2006-01-01
The design of a very lightweight dew-/frost-point hygrometer for balloon-borne atmospheric water vapor profiling is described. The instrument is based on a surface-acoustic-wave sensor. The low instrument weight is a key feature, allowing flights on meteorological balloons, which brings many more flight opportunities. The hygrometer shows consistently good performance in the troposphere and, while water vapor measurements near the tropopause and in the stratosphere are possible with the current instrument, the long response time in these regions hampers realistic measurements. The excellent intrinsic sensitivity of the surface-acoustic-wave sensor should permit considerable improvement in hygrometer performance in the very dry regions of the atmosphere.
Special features of the CLUSTER antenna and radial booms design, development and verification
NASA Technical Reports Server (NTRS)
Gianfiglio, G.; Yorck, M.; Luhmann, H. J.
1995-01-01
CLUSTER is a scientific space mission to investigate the Earth's plasma environment in situ by means of four identical spin-stabilized spacecraft. Each spacecraft is provided with a set of four rigid booms: two Antenna Booms and two Radial Booms. This paper presents a summary of the boom development and verification phases, addressing the key aspects of the Radial Boom design. In particular, it concentrates on the difficulties encountered in fulfilling simultaneously the requirements of minimum torque ratio and maximum allowed shock loads at boom latching for this two-degree-of-freedom boom. The paper also provides an overview of the analysis campaign and testing program performed to achieve sufficient confidence in the boom performance and operation.
Categorizing biomedicine images using novel image features and sparse coding representation
2013-01-01
Background Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. As in any automatic categorization effort, discriminative image features provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for the image categorization task, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images in JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results is reported. First, the categorization results are presented, together with performance indexes such as precision, recall and F-score. Second, different feature sets, including conventional image features and our proposed novel features, are shown to yield different categorization performance. Third, we compare the accuracy of a support vector machine classifier with that of our proposed sparse representation classification method. Finally, our approach is compared with three peer classification methods, and the experimental results confirm its improved performance. Conclusions Compared with conventional image features that do not exploit the positions and distributions of text elements inside images embedded in biomedical publications, our proposed image features coupled with the SCR-based representation model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study. PMID:24565470
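A hedged sketch of one possible sparse-coding-representation pipeline of the kind the abstract refers to (not necessarily the authors' exact method): learn a dictionary from training feature vectors, encode each image as its sparse code, and train a linear classifier on the codes. The text-layout features proposed in the paper are replaced here by synthetic vectors.

```python
# Dictionary learning + sparse codes + linear classifier (illustrative only).
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(12)
X_train = rng.normal(size=(300, 50))       # image feature vectors (synthetic)
y_train = rng.integers(0, 8, size=300)     # 8 image categories (placeholder)
X_test = rng.normal(size=(50, 50))

dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=200,
                          transform_algorithm="omp", transform_n_nonzero_coefs=5,
                          random_state=0)
codes_train = dico.fit(X_train).transform(X_train)   # sparse codes as the representation
codes_test = dico.transform(X_test)

clf = LinearSVC().fit(codes_train, y_train)
print("predicted categories:", clf.predict(codes_test)[:10])
```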
NASA Astrophysics Data System (ADS)
La Barbera, Selina; Vincent, Adrien F.; Vuillaume, Dominique; Querlioz, Damien; Alibart, Fabien
2016-12-01
Bio-inspired computing represents today a major challenge at different levels, ranging from materials science for the design of innovative devices and circuits to computer science for understanding the key features required for processing natural data. In this paper, we propose a detailed analysis of resistive switching dynamics in electrochemical metallization cells for synaptic plasticity implementation. We show how filament stability associated with the Joule effect during switching can be used to emulate key synaptic features such as the short-term to long-term plasticity transition and spike-timing-dependent plasticity. Furthermore, an interplay between these different synaptic features is demonstrated for object motion detection in a spike-based neuromorphic circuit. System-level simulations show robust learning and promising synaptic operation, paving the way to complex bio-inspired computing systems composed of innovative memory devices.
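For readers unfamiliar with the synaptic rule mentioned above, the sketch below is a generic pair-based STDP update of the textbook form, not the authors' device model; the amplitudes and time constants are illustrative assumptions.

```python
# Generic pair-based STDP rule: weight change depends on the timing
# difference between pre- and post-synaptic spikes.
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
    """Weight change for a spike pair separated by dt = t_post - t_pre (seconds)."""
    if dt >= 0:      # pre before post -> potentiation (long-term strengthening)
        return a_plus * np.exp(-dt / tau_plus)
    else:            # post before pre -> depression
        return -a_minus * np.exp(dt / tau_minus)

# Example: a pre-spike 5 ms before a post-spike strengthens the synapse.
print(stdp_dw(5e-3))
```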
Institutions and the implementation of tobacco control in Brazil.
Lencucha, Raphael; Drope, Jeffrey; Bialous, Stella Aguinaga; Richter, Ana Paula; Silva, Vera Luiza da Costa E
2017-10-19
This research examines the institutional features of Brazil's National Commission for the Implementation of the Framework Convention on Tobacco Control (CONICQ) and how these institutional features have facilitated and hindered its ability to foster intersectoral tobacco control. In particular, we evaluate the key institutional features of CONICQ from the time it was one of the key drivers of change and improvement in early tobacco control policies, which helped to make Brazil a world leader in this area. We also examine how the commission has evolved as tobacco control has improved, and elucidate some of the major challenges it faces in bringing together often disparate government sectors to generate public health policies.
Miller, Douglass R.; Rung, Alessandra; Parikh, Grishma
2014-01-01
Abstract We provide a general overview of the features and technical specifications of an online, interactive tool for the identification of scale insects of concern at U.S. ports of entry. Full lists of the terminal taxa included in the keys (of which there are four), a list of the features used in them, and a discussion of the structure of the tool are provided. We also briefly discuss the advantages of interactive keys for the identification of potential scale insect pests. The interactive key is freely accessible at http://idtools.org/id/scales/index.php PMID:25152668
NASA Astrophysics Data System (ADS)
Marhoubi, Asmaa H.; Saravi, Sara; Edirisinghe, Eran A.
2015-05-01
The present generation of mobile handheld devices comes equipped with a large number of sensors. The key sensors include the Ambient Light Sensor, Proximity Sensor, Gyroscope, Compass and Accelerometer. Many mobile applications are driven by the readings obtained from one or two of these sensors. However, the presence of multiple sensors enables the determination of more detailed activities carried out by the user of a mobile device, thus enabling smarter mobile applications that respond more appropriately to user behavior and device usage. In the proposed research we use recent advances in machine learning to fuse together the data obtained from all key sensors of a mobile device. We investigate the possible use of single and ensemble classifier based approaches to identify a mobile device's behavior in the space in which it is present. Feature selection algorithms are used to remove non-discriminant features that often lead to poor classifier performance. As the sensor readings are noisy and include a significant proportion of missing values and outliers, we use machine learning based approaches to clean the raw data obtained from the sensors before use. Based on selected practical case studies, we demonstrate the ability to accurately recognize device behavior based on multi-sensor data fusion.
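As an illustration of the single/ensemble classifier idea, here is a minimal sketch of feature-level fusion of several sensor streams with feature selection and a random-forest ensemble; the sensor arrays, window statistics and labels are synthetic placeholders, not the study's data or exact pipeline.

```python
# Feature-level fusion of multiple sensor channels followed by feature
# selection and an ensemble classifier (placeholder data throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

def window_features(signal, win=50):
    """Simple per-window statistics for one sensor channel."""
    w = signal[: len(signal) // win * win].reshape(-1, win)
    return np.column_stack([w.mean(1), w.std(1), w.min(1), w.max(1)])

rng = np.random.default_rng(0)
accel, gyro, light = (rng.normal(size=5000) for _ in range(3))   # hypothetical raw streams
X = np.hstack([window_features(s) for s in (accel, gyro, light)])
y_windows = rng.integers(0, 3, size=X.shape[0])                  # hypothetical activity labels

clf = make_pipeline(SelectKBest(f_classif, k=8),                 # drop non-discriminant features
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X, y_windows)
```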
Molecular modeling and SPRi investigations of interleukin 6 (IL6) protein and DNA aptamers.
Rhinehardt, Kristen L; Vance, Stephen A; Mohan, Ram V; Sandros, Marinella; Srinivas, Goundla
2018-06-01
Interleukin 6 (IL6), an inflammatory response protein, has major implications in immune-related inflammatory diseases. Identification of aptamers for the IL6 protein aids diagnostic, therapeutic, and theranostic applications. Three different DNA aptamers and their interactions with the IL6 protein were extensively investigated in a phosphate buffered saline (PBS) solution. Molecular-level modeling through molecular dynamics provided insights into the structural and conformational changes and the specific binding domains of these protein-aptamer complexes. Multiple simulations reveal a consistent binding region for all protein-aptamer complexes. Conformational changes coupled with quantitative analysis of the center of mass (COM) distance, radius of gyration (Rg), and number of intermolecular hydrogen bonds in each IL6 protein-aptamer complex were used to determine their binding strength and obtain molecular configurations with strong binding. A similarity comparison of the strongly binding molecular configurations from molecular-level modeling concurred with Surface Plasmon Resonance imaging (SPRi) for these three aptamer complexes, thus corroborating the molecular modeling analysis findings. Insights from the natural progression of IL6 protein-aptamer binding modeled in this work have identified key features such as the orientation and location of the aptamer in the binding event. These key features are not readily obtainable from wet lab experiments and impact the efficacy of the aptamers in diagnostic and theranostic applications.
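The two geometric descriptors cited (COM distance and radius of gyration) can be computed directly from raw coordinates, as in the minimal NumPy sketch below; the coordinate and mass arrays are placeholders, not frames from the reported simulations.

```python
# Mass-weighted centre of mass, COM distance and radius of gyration from
# coordinate arrays (placeholder data).
import numpy as np

def center_of_mass(coords, masses):
    return (coords * masses[:, None]).sum(0) / masses.sum()

def radius_of_gyration(coords, masses):
    com = center_of_mass(coords, masses)
    sq = ((coords - com) ** 2).sum(1)
    return np.sqrt((masses * sq).sum() / masses.sum())

rng = np.random.default_rng(1)
protein = rng.normal(size=(500, 3))                 # hypothetical protein coordinates (nm)
aptamer = rng.normal(loc=3.0, size=(60, 3))         # hypothetical aptamer coordinates (nm)
m_p, m_a = np.ones(500), np.ones(60)                # unit masses for illustration

com_distance = np.linalg.norm(center_of_mass(protein, m_p) - center_of_mass(aptamer, m_a))
print(com_distance, radius_of_gyration(aptamer, m_a))
```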
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.
2014-01-01
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685
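As a toy numerical companion to the quantities discussed above, the sketch below generates a planar fibre network with a prescribed orientation distribution and counts pairwise intersections; it is only an illustration under assumed parameters, not the paper's closed-form estimates or random-walk generator.

```python
# Generate 2-D fibres with a von Mises orientation distribution in the unit
# square and count pairwise intersections numerically (toy check only).
import numpy as np

rng = np.random.default_rng(13)
n_fibres, half_len = 200, 0.2
centres = rng.random((n_fibres, 2))
angles = rng.vonmises(mu=0.0, kappa=2.0, size=n_fibres)      # prescribed ODF (assumed)
d = np.column_stack([np.cos(angles), np.sin(angles)]) * half_len
p1, p2 = centres - d, centres + d                             # fibre end points

def segments_intersect(a1, a2, b1, b2):
    def cross(o, p, q):
        return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    return (cross(a1, a2, b1) * cross(a1, a2, b2) < 0 and
            cross(b1, b2, a1) * cross(b1, b2, a2) < 0)

count = sum(segments_intersect(p1[i], p2[i], p1[j], p2[j])
            for i in range(n_fibres) for j in range(i + 1, n_fibres))
print("intersections in the unit square:", count)             # rough areal intersection density
```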
A case-control study of boat-related injuries and fatalities in Washington State.
Stempski, Sarah; Schiff, Melissa; Bennett, Elizabeth; Quan, Linda
2014-08-01
To identify risk factors associated with boat-related injuries and deaths. We performed a case-control study using the Washington Boat Accident Investigation Report Database for 2003-2010. Cases were fatally injured boat occupants, and controls were non-fatally injured boat occupants involved in a boating incident. We evaluated the association between victim, boat and incident factors and risk of death using Poisson regression to estimate RRs and 95% CIs. Of 968 injured boaters, 26% died. Fatalities were 2.6 times more likely to not be wearing a personal flotation device (PFD) and 2.2 times more likely to not have any safety features on their boat compared with those who survived. Boating fatalities were more likely to be in a non-motorised boat, to have alcohol involved in the incident, to be in an incident that involved capsizing, sinking, flooding or swamping, and to involve a person leaving the boat voluntarily, being ejected or falling than those who survived. Increasing PFD use, safety features on the boat and alcohol non-use are key strategies and non-motorised boaters are key target populations to prevent boating deaths.
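For readers unfamiliar with this design, a Poisson model with a log link and robust variance is one standard way to estimate relative risks for a binary outcome; the sketch below uses statsmodels with hypothetical variable names and synthetic data, and is not the authors' analysis code.

```python
# Modified Poisson regression (robust variance) to estimate RRs and 95% CIs
# for a binary outcome; placeholder data frame and variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "died":    rng.integers(0, 2, 500),     # 1 = fatal, 0 = non-fatal injury
    "no_pfd":  rng.integers(0, 2, 500),     # not wearing a PFD
    "alcohol": rng.integers(0, 2, 500),     # alcohol involved
})

fit = smf.glm("died ~ no_pfd + alcohol", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
rr = np.exp(fit.params)                     # relative risks
ci = np.exp(fit.conf_int())                 # 95% confidence intervals
print(pd.concat([rr, ci], axis=1))
```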
Simple 2.5 GHz time-bin quantum key distribution
NASA Astrophysics Data System (ADS)
Boaron, Alberto; Korzh, Boris; Houlmann, Raphael; Boso, Gianluca; Rusca, Davide; Gray, Stuart; Li, Ming-Jun; Nolan, Daniel; Martin, Anthony; Zbinden, Hugo
2018-04-01
We present a 2.5 GHz quantum key distribution setup with the emphasis on a simple experimental realization. It features a three-state time-bin protocol based on a pulsed diode laser and a single intensity modulator. Implementing an efficient one-decoy scheme and finite-key analysis, we achieve record-breaking secret key rates of 1.5 kbps over 200 km of standard optical fibers.
How Task Features Impact Evidence from Assessments Embedded in Simulations and Games
ERIC Educational Resources Information Center
Almond, Russell G.; Kim, Yoon Jeon; Velasquez, Gertrudes; Shute, Valerie J.
2014-01-01
One of the key ideas of evidence-centered assessment design (ECD) is that task features can be deliberately manipulated to change the psychometric properties of items. ECD identifies a number of roles that task-feature variables can play, including determining the focus of evidence, guiding form creation, determining item difficulty and…
OLMS: Online Learning Management System for E-Learning
ERIC Educational Resources Information Center
Ippakayala, Vinay Kumar; El-Ocla, Hosam
2017-01-01
In this paper we introduce a learning management system that provides a management system for centralized control of course content. A secure system to record lectures is implemented as a key feature of this application. This feature would be accessed through web camera and mobile recording. These features are mainly designed for e-learning…
Feature Binding in Visual Working Memory Evaluated by Type Identification Paradigm
ERIC Educational Resources Information Center
Saiki, Jun; Miyatsuji, Hirofumi
2007-01-01
Memory for feature binding comprises a key ingredient in coherent object representations. Previous studies have been equivocal about human capacity for objects in the visual working memory. To evaluate memory for feature binding, a type identification paradigm was devised and used with a multiple-object permanence tracking task. Using objects…
Transformational change in healthcare: an examination of four case studies.
Charlesworth, Kate; Jamieson, Maggie; Davey, Rachel; Butler, Colin D
2016-04-01
Objectives Healthcare leaders around the world are calling for radical, transformational change of our health and care systems. This will be a difficult and complex task. In this article, we examine case studies in which transformational change has been achieved, and seek to learn from these experiences. Methods We used the case study method to investigate examples of transformational change in healthcare. The case studies were identified from preliminary doctoral research into the transition towards future sustainable health and social care systems. Evidence was collected from multiple sources, key features of each case study were displayed in a matrix and thematic analysis was conducted. The results are presented in narrative form. Results Four case studies were selected: two from the US, one from Australia and one from the UK. The notable features are discussed for each case study. There were many common factors: a well communicated vision, innovative redesign, extensive consultation and engagement with staff and patients, performance management, automated information management and high-quality leadership. Conclusions Although there were some notable differences between the case studies, overall the characteristics of success were similar and collectively provide a blueprint for transformational change in healthcare. What is known about the topic? Healthcare leaders around the world are calling for radical redesign of our systems in order to meet the challenges of modern society. What does this paper add? There are some remarkable examples of transformational change in healthcare. The key factors in success are similar across the case studies. What are the implications for practitioners? Collectively, these key factors can guide future attempts at transformational change in healthcare.
Learning Compositional Shape Models of Multiple Distance Metrics by Information Projection.
Luo, Ping; Lin, Liang; Liu, Xiaobai
2016-07-01
This paper presents a novel compositional contour-based shape model by incorporating multiple distance metrics to account for varying shape distortions or deformations. Our approach contains two key steps: 1) contour feature generation and 2) generative model pursuit. For each category, we first densely sample an ensemble of local prototype contour segments from a few positive shape examples and describe each segment using three different types of distance metrics. These metrics are diverse and complementary with each other to capture various shape deformations. We regard the parameterized contour segment plus an additive residual ϵ as a basic subspace, namely an ϵ-ball, in the sense that it represents local shape variance under a certain distance metric. Using these ϵ-balls as features, we then propose a generative learning algorithm to pursue the compositional shape model, which greedily selects the most representative features under the information projection principle. In experiments, we evaluate our model on several challenging public data sets, and demonstrate that the integration of multiple shape distance metrics is capable of dealing with various shape deformations, articulations, and background clutter, hence boosting system performance.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images based on the combination of discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
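The LSF-based selection idea can be sketched as follows for two aligned source bands; this shows only an assumed coefficient-selection rule, not the full DSWT+DCT pipeline of the paper.

```python
# Local-spatial-frequency (LSF) fusion rule: at each pixel keep the coefficient
# whose neighbourhood has the higher spatial frequency (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter

def local_spatial_frequency(img, size=7):
    dr = np.diff(img, axis=0, prepend=img[:1, :])   # row-wise differences
    dc = np.diff(img, axis=1, prepend=img[:, :1])   # column-wise differences
    return np.sqrt(uniform_filter(dr**2, size) + uniform_filter(dc**2, size))

def fuse_by_lsf(band_ir, band_vis, size=7):
    take_ir = local_spatial_frequency(band_ir, size) >= local_spatial_frequency(band_vis, size)
    return np.where(take_ir, band_ir, band_vis)

rng = np.random.default_rng(0)
ir, vis = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))   # placeholder bands
fused = fuse_by_lsf(ir, vis)
```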
Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin
2016-04-15
The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and a successive projections algorithm, and showed satisfactory performance in drip loss prediction. The MLR model, with a determination coefficient of prediction (R2P) of 0.9258 and a lower root mean square error of prediction (RMSEP) of 1.12%, was applied to each pixel of the image to generate distribution maps of exudation changes. The results confirmed that it is feasible to identify feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging.
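A minimal sketch of the MLR step on a few selected wavelengths, with R2P and RMSEP computed on held-out samples, is shown below; the spectra, wavelength indices and drip-loss values are synthetic placeholders, not the reported data.

```python
# Multiple linear regression on selected key wavelengths with prediction-set
# R2 and RMSEP (placeholder spectra and reference values).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
spectra = rng.normal(size=(120, 200))               # samples x wavelengths
key_idx = [12, 47, 88, 133, 171]                    # five hypothetical key wavelengths
drip_loss = spectra[:, key_idx] @ np.array([1.0, -0.5, 0.8, 0.3, -0.2]) + rng.normal(0, 0.1, 120)

X_train, X_test = spectra[:90, key_idx], spectra[90:, key_idx]
y_train, y_test = drip_loss[:90], drip_loss[90:]

mlr = LinearRegression().fit(X_train, y_train)
pred = mlr.predict(X_test)
print("R2P =", r2_score(y_test, pred),
      "RMSEP =", np.sqrt(mean_squared_error(y_test, pred)))
```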
Progress in Validation of Wind-US for Ramjet/Scramjet Combustion
NASA Technical Reports Server (NTRS)
Engblom, William A.; Frate, Franco C.; Nelson, Chris C.
2005-01-01
Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.
Patient-centred care in general dental practice - a systematic review of the literature
2014-01-01
Background Delivering improvements in quality is a key objective within most healthcare systems, and a view which has been widely embraced within the NHS in the United Kingdom. Within the NHS, quality is evaluated across three key dimensions: clinical effectiveness, safety and patient experience, with the latter modelled on the Picker Principles of Patient-Centred Care (PCC). Quality improvement is an important feature of the current dental contract reforms in England, with “patient experience” likely to have a central role in the evaluation of quality. An understanding and appreciation of the evidence underpinning PCC within dentistry is highly relevant if we are to use this as a measure of quality in general dental practice. Methods A systematic review of the literature was undertaken to identify the features of PCC relevant to dentistry and ascertain the current research evidence base underpinning its use as a measure of quality within general dental practice. Results Three papers were identified which met the inclusion criteria and demonstrated the use of primary research to provide an understanding of the key features of PCC within dentistry. None of the papers identified were based in general dental practice and none of the three studies sought the views of patients. Some distinct differences were noted between the key features of PCC reported within the dental literature and those developed within the NHS Patient Experience Framework. Conclusions This systematic review reveals a lack of understanding of PCC within dentistry, and in particular general dental practice. There is currently a poor evidence base to support the use of the current patient reported outcome measures as indicators of patient-centredness. Further research is necessary to understand the important features of PCC in dentistry and patients’ views should be central to this research. PMID:24902842
Juhlin, Kristina; Norén, G. Niklas
2017-01-01
Abstract Purpose To develop a method for data‐driven exploration in pharmacovigilance and illustrate its use by identifying the key features of individual case safety reports related to medication errors. Methods We propose vigiPoint, a method that contrasts the relative frequency of covariate values in a data subset of interest to those within one or more comparators, utilizing odds ratios with adaptive statistical shrinkage. Nested analyses identify higher order patterns, and permutation analysis is employed to protect against chance findings. For illustration, a total of 164 000 adverse event reports related to medication errors were characterized and contrasted to the other 7 833 000 reports in VigiBase, the WHO global database of individual case safety reports, as of May 2013. The initial scope included 2000 features, such as patient age groups, reporter qualifications, and countries of origin. Results vigiPoint highlighted 109 key features of medication error reports. The most prominent were that the vast majority of medication error reports were from the United States (89% compared with 49% for other reports in VigiBase); that the majority of reports were sent by consumers (53% vs 17% for other reports); that pharmacists (12% vs 5.3%) and lawyers (2.9% vs 1.5%) were overrepresented; and that there were more medication error reports than expected for patients aged 2‐11 years (10% vs 5.7%), particularly in Germany (16%). Conclusions vigiPoint effectively identified key features of medication error reports in VigiBase. More generally, it reduces lead times for analysis and ensures reproducibility and transparency. An important next step is to evaluate its use in other data. PMID:28815800
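An illustrative shrunk log odds ratio of the general kind described (additive shrinkage applied to both odds) is sketched below; the shrinkage constant is an arbitrary assumption and the formula is not claimed to be vigiPoint's exact statistic, although the example counts come from the figures quoted above.

```python
# Shrunk log2 odds ratio comparing the frequency of a feature in a report
# subset against a reference set (illustrative form only).
import numpy as np

def shrunk_log2_or(n_feat_subset, n_subset, n_feat_ref, n_ref, k=0.5):
    """log2 odds ratio of a feature in a subset vs. a reference, with additive shrinkage k."""
    odds_subset = (n_feat_subset + k) / (n_subset - n_feat_subset + k)
    odds_ref = (n_feat_ref + k) / (n_ref - n_feat_ref + k)
    return np.log2(odds_subset / odds_ref)

# e.g. "report sent by a consumer": 53% of 164 000 error reports vs 17% of the other 7 833 000.
print(shrunk_log2_or(0.53 * 164_000, 164_000, 0.17 * 7_833_000, 7_833_000))
```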
Biological and functional relevance of CASP predictions
Liu, Tianyun; Ish‐Shalom, Shirbi; Torng, Wen; Lafita, Aleix; Bock, Christian; Mort, Matthew; Cooper, David N; Bliven, Spencer; Capitani, Guido; Mooney, Sean D.
2017-01-01
Abstract Our goal is to answer the question: compared with experimental structures, how useful are predicted models for functional annotation? We assessed the functional utility of predicted models by comparing the performances of a suite of methods for functional characterization on the predictions and the experimental structures. We identified 28 sites in 25 protein targets to perform functional assessment. These 28 sites included nine sites with known ligand binding (holo‐sites), nine sites that are expected or suggested by experimental authors for small molecule binding (apo‐sites), and ten sites containing important motifs, loops, or key residues with important disease‐associated mutations. We evaluated the utility of the predictions by comparing their microenvironments to the experimental structures. Overall structural quality correlates with functional utility. However, the best‐ranked predictions (global) may not have the best functional quality (local). Our assessment provides an ability to discriminate between predictions with high structural quality. When assessing ligand‐binding sites, most prediction methods have higher performance on apo‐sites than holo‐sites. Some servers show consistently high performance for certain types of functional sites. Finally, many functional sites are associated with protein‐protein interaction. We also analyzed biologically relevant features from the protein assemblies of two targets where the active site spanned the protein‐protein interface. For the assembly targets, we find that the features in the models are mainly determined by the choice of template. PMID:28975675
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Thermochromic halide perovskite solar cells.
Lin, Jia; Lai, Minliang; Dou, Letian; Kley, Christopher S; Chen, Hong; Peng, Fei; Sun, Junliang; Lu, Dylan; Hawks, Steven A; Xie, Chenlu; Cui, Fan; Alivisatos, A Paul; Limmer, David T; Yang, Peidong
2018-03-01
Smart photovoltaic windows represent a promising green technology featuring tunable transparency and electrical power generation under external stimuli to control the light transmission and manage the solar energy. Here, we demonstrate a thermochromic solar cell for smart photovoltaic window applications utilizing the structural phase transitions in inorganic halide perovskite caesium lead iodide/bromide. The solar cells undergo thermally-driven, moisture-mediated reversible transitions between a transparent non-perovskite phase (81.7% visible transparency) with low power output and a deeply coloured perovskite phase (35.4% visible transparency) with high power output. The inorganic perovskites exhibit tunable colours and transparencies, a peak device efficiency above 7%, and a phase transition temperature as low as 105 °C. We demonstrate excellent device stability over repeated phase transition cycles without colour fade or performance degradation. The photovoltaic windows showing both photoactivity and thermochromic features represent key stepping-stones for integration with buildings, automobiles, information displays, and potentially many other technologies.
NASA Astrophysics Data System (ADS)
Shukla, Jaikaran N.; Halfen, Frank J.; Brynsvold, Glen V.; Syed, Akbar; Jiang, Thomas J.; Wong, Kwok K.; Otwell, Robert L.
1994-07-01
Recent work on lower-power generic early applications for the SP-100 has resulted in control system design simplification for a 20 kWe design with thermoelectric power conversion. This paper presents the non-mission-dependent control system features for this design. The control system includes a digital computer based controller, dual-purpose control rods and drives, temperature sensors, and neutron flux monitors. The thaw system is mission dependent and can be either electrical or based on NaK trace lines. Key features of the control system and components are discussed. As was the case for higher power applications, the initial on-orbit approach to criticality involves the relatively fast withdrawal of the control rods to a near-critical position, followed by slower movement through critical and into the power range. The control system performs operating maneuvers as well as providing for automatic startup, shutdown, restart, and reactor protection.
Surgery-first orthognathic approach case series: Salient features and guidelines
Gandedkar, Narayan H; Chng, Chai Kiat; Tan, Winston
2016-01-01
Conventional orthognathic surgery treatment involves a prolonged period of orthodontic treatment (pre- and post-surgery), making the total treatment period of 3–4 years too exhaustive. The surgery-first orthognathic approach (SFOA) sees orthognathic surgery being carried out first, followed by orthodontic treatment to align the teeth and occlusion. Following orthognathic surgery, a period of rapid metabolic activity within tissues, known as the regional acceleratory phenomenon (RAP), ensues. By performing surgery first, RAP can be harnessed to facilitate efficient orthodontic treatment. This phenomenon is believed to be a key factor in the notable reduction in treatment duration with SFOA. This article presents two cases treated with SFOA, with emphasis on the “case selection, treatment strategy, merits, and limitations” of SFOA. Further, a comparison of the salient features of “conventional orthognathic surgery” and “SFOA”, together with an overview of the authors' SFOA treatment protocol, is presented. PMID:26998476
NASA Astrophysics Data System (ADS)
Yang, Qingchun; Wang, Hongxin; Chetehouna, Khaled; Gascoin, Nicolas
2017-01-01
The supersonic combustion ramjet (scramjet) engine remains the most promising airbreathing engine cycle for hypersonic flight, particularly the high-performance dual-mode scramjet in the range of flight Mach numbers from 4 to 7, because it can operate under different combustion modes. The isolator is a key component of the dual-mode scramjet engine. In this paper, the nonlinear characteristics of combustion mode transition are theoretically analyzed. Discontinuous sudden changes of static pressure and Mach number are obtained as the mode transition occurs, emphasizing the importance of prediction and control of combustion modes. A prediction model of the different combustion modes is then developed based on these nonlinear features of the isolator flow field. It can provide a valuable reference for the control system design of scramjet-powered aerospace vehicles.
Segmentation-assisted detection of dirt impairments in archived film sequences.
Ren, Jinchang; Vlachos, Theodore
2007-04-01
In this correspondence, a novel segmentation-assisted method for film-dirt detection is proposed. We exploit the fact that film dirt manifests in the spatial domain as a cluster of connected pixels whose intensity differs substantially from that of its neighborhood, and we employ a segmentation-based approach to identify this type of structure. A key feature of our approach is the computation of a measure of confidence attached to detected dirt regions, which can be utilized for performance fine tuning. Another important feature of our algorithm is the avoidance of the computational complexity associated with motion estimation. Our experimental framework benefits from the availability of manually derived as well as objective ground-truth data obtained using infrared scanning. Our results demonstrate that the proposed method compares favorably with standard spatial, temporal, and multistage median-filtering approaches and provides efficient and robust detection for a wide variety of test materials.
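A rough sketch of the underlying idea is given below: flag connected clusters whose intensity departs from a smoothed neighbourhood estimate and attach a confidence to each region. The window size and threshold are arbitrary assumptions, and this is not the authors' algorithm.

```python
# Flag connected clusters of pixels that differ strongly from a local
# neighbourhood estimate, and score each region with a confidence value.
import numpy as np
from scipy.ndimage import label, median_filter

def detect_dirt(frame, win=9, thresh=30.0):
    background = median_filter(frame, size=win)          # local neighbourhood estimate
    diff = np.abs(frame.astype(float) - background)
    regions, n = label(diff > thresh)                     # connected candidate clusters
    confidences = [diff[regions == k].mean() for k in range(1, n + 1)]
    return regions, confidences

rng = np.random.default_rng(12)
frame = rng.integers(0, 255, (120, 160)).astype(np.uint8) # placeholder film frame
regions, conf = detect_dirt(frame)
```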
Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong
2016-05-31
Tool breakage causes loss of surface polish and dimensional accuracy for the machined part, or possible damage to the workpiece or machine. Tool Condition Monitoring (TCM) is considerably vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced with a wireless triaxial accelerometer. The vibrations in the three vertical directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are extracted in the time, frequency and time-frequency domains. The key features are selected based on Pearson's Correlation Coefficient (PCC). A Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with a Back Propagation Neural Network (BPNN) and a Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL.
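The PCC-based selection step can be sketched as ranking candidate features by their absolute correlation with the measured wear; the feature matrix and wear values below are synthetic placeholders.

```python
# Select key features by Pearson's correlation with the measured tool wear
# (placeholder feature matrix and wear values).
import numpy as np

rng = np.random.default_rng(4)
features = rng.normal(size=(300, 24))                 # windows x candidate vibration features
tool_wear = features[:, 3] * 0.8 + features[:, 10] * 0.5 + rng.normal(0, 0.2, 300)

def top_k_by_pcc(X, y, k=5):
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:k], r

idx, r = top_k_by_pcc(features, tool_wear)
print("selected feature indices:", idx, "with |r| =", np.abs(r[idx]).round(2))
```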
Predicting DNA hybridization kinetics from sequence
NASA Astrophysics Data System (ADS)
Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu
2018-01-01
Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted based on its similarity to reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
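A minimal sketch of the weighted-neighbour-voting idea is shown below: the rate constant of a new sequence is a similarity-weighted average over reference reactions with known rate constants. The feature vectors, kernel form and weights are placeholders rather than the optimised six-feature model of the paper.

```python
# Weighted neighbour voting: similarity-weighted average of known rate
# constants in feature space (placeholder features and kernel).
import numpy as np

def wnv_predict(query_feat, ref_feats, ref_log_k, length_scale=1.0):
    d = np.linalg.norm(ref_feats - query_feat, axis=1)
    w = np.exp(-(d / length_scale) ** 2)            # closer references vote more strongly
    return np.sum(w * ref_log_k) / np.sum(w)        # predicted log10 rate constant

rng = np.random.default_rng(5)
ref_feats = rng.normal(size=(100, 6))               # hypothetical 6-feature reference set
ref_log_k = rng.normal(6.0, 0.5, 100)               # hypothetical known log10 rate constants
print(wnv_predict(rng.normal(size=6), ref_feats, ref_log_k))
```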
NASA Astrophysics Data System (ADS)
Tereshchenko, E. D.; Yurik, R. Yu.; Yeoman, T. K.; Robinson, T. R.
2008-11-01
We present the first results of observations of the stimulated electromagnetic emission (SEE) in the ionosphere modified by the Space Plasma Exploration by Active Radar (SPEAR) heating facility. Observation of the SEE is the key method of ground-based diagnostics of the ionospheric plasma disturbances due to high-power HF radiation. The presented results were obtained during the heating campaign performed at the SPEAR facility in February-March 2007. Prominent SEE special features were observed in periods in which the critical frequency of the F2 layer was higher than the pump-wave frequency (4.45 MHz). As an example, such special features as the downshifted maximum and the broad continuum in the region of negative detunings from the pump-wave frequency are presented. Observations clearly demonstrate that the ionosphere was efficiently excited by the SPEAR heating facility despite the comparatively low pump-wave power.
Accidental Turbulent Discharge Rate Estimation from Videos
NASA Astrophysics Data System (ADS)
Ibarra, Eric; Shaffer, Franklin; Savaş, Ömer
2015-11-01
A technique to estimate the volumetric discharge rate in accidental oil releases using high-speed video streams is described. The essence of the method is similar to PIV processing; however, the cross-correlation is carried out on the visible features of the efflux, which are usually turbulent, opaque and immiscible. The key step in the process is to perform a pixelwise time filtering on the video stream, in which the parameters are commensurate with the scales of the large eddies. The velocity field extracted from the shell of visible features is then used to construct an approximate velocity profile within the discharge. The technique has been tested on laboratory experiments using both water and oil jets at Re ~ 10^5. The technique is accurate to within 20%, which is sufficient for initial responders to deploy adequate resources for containment. The software package requires minimal user input and is intended for deployment on an ROV in the field. Supported by DOI via NETL.
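The pixelwise time-filtering step can be sketched as a per-pixel moving average along the frame axis, with a window chosen to match the large-eddy time scale; the video array and window length below are assumptions, not the actual implementation.

```python
# Per-pixel moving-average filter along the time axis of a video array,
# with a window commensurate with the assumed large-eddy time scale.
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(6)
video = rng.random((300, 128, 128)).astype(np.float32)    # frames x height x width (placeholder)

eddy_window = 15                                           # frames (assumed eddy time scale)
filtered = uniform_filter1d(video, size=eddy_window, axis=0)

# The filtered stream would then feed the PIV-like cross-correlation on the
# visible efflux features to extract the shell velocity field.
```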
NASA Astrophysics Data System (ADS)
Khan, Yousaf; Afridi, Muhammad Idrees; Khan, Ahmed Mudassir; Rehman, Waheed Ur; Khan, Jahanzeb
2014-09-01
Hybrid wavelength-division multiplexed/time-division multiplexed passive optical access networks (WDM/TDM-PONs) combine the advanced features of both WDM and TDM PONs to provide a cost-effective access network solution. We demonstrate and analyze the transmission performance and power budget issues of a colorless hybrid WDM/TDM-PON scheme. A 10-Gb/s downstream differential phase shift keying (DPSK) signal and a remodulated upstream on/off keying (OOK) data signal are transmitted over 25 km of standard single mode fiber. Simulation results show error-free transmission with adequate power margins in both downstream and upstream directions, which proves the applicability of the proposed scheme to future passive optical access networks. The power budget confines both the PON splitting ratio and the distance between the Optical Line Terminal (OLT) and the Optical Network Unit (ONU).
Scalable Molecular Dynamics with NAMD
Phillips, James C.; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D.; Kalé, Laxmikant; Schulten, Klaus
2008-01-01
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This paper, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Next, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, e.g., the Tcl scripting language. Finally, the paper provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. PMID:16222654
NASA Astrophysics Data System (ADS)
van Es, Maarten H.; Mohtashami, Abbas; Piras, Daniele; Sadeghian, Hamed
2018-03-01
Nondestructive subsurface nanoimaging through optically opaque media is considered to be extremely challenging and is essential for several semiconductor metrology applications, including overlay and alignment and buried void and defect characterization. The current key challenge in overlay and alignment is the measurement of targets that are covered by optically opaque layers. Moreover, with device dimensions moving to smaller nodes and the so-called loading effect causing offsets between targets and product features, it is increasingly desirable to perform alignment and overlay on product features, or so-called on-cell overlay, which requires higher lateral resolution than optical methods can provide. Our recently developed technique, known as SubSurface Ultrasonic Resonance Force Microscopy (SSURFM), has shown the capability for high-resolution imaging of structures below a surface based on the (visco-)elasticity of the constituent materials and as such is a promising technique for performing overlay and alignment with high resolution in upcoming production nodes. In this paper, we describe the developed SSURFM technique and experimental results on imaging buried features through various layers, and demonstrate the ability to detect objects with resolution below 10 nm. In summary, the experimental results show that SSURFM is a potential solution for on-cell overlay and alignment as well as for detecting buried defects or voids and, more generally, for metrology through optically opaque layers.
Search performance is better predicted by tileability than presence of a unique basic feature.
Chang, Honghua; Rosenholtz, Ruth
2016-08-01
Traditional models of visual search such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a "basic feature" not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search.
Knowledge-transfer learning for prediction of matrix metalloprotease substrate-cleavage sites.
Wang, Yanan; Song, Jiangning; Marquez-Lago, Tatiana T; Leier, André; Li, Chen; Lithgow, Trevor; Webb, Geoffrey I; Shen, Hong-Bin
2017-07-18
Matrix Metalloproteases (MMPs) are an important family of proteases that play crucial roles in key cellular and disease processes. Therefore, MMPs constitute important targets for drug design, development and delivery. Advanced proteomic technologies have identified type-specific target substrates; however, the complete repertoire of MMP substrates remains uncharacterized. Indeed, computational prediction of substrate-cleavage sites associated with MMPs is a challenging problem. This holds especially true when considering MMPs with few experimentally verified cleavage sites, such as for MMP-2, -3, -7, and -8. To fill this gap, we propose a new knowledge-transfer computational framework which effectively utilizes the hidden shared knowledge from some MMP types to enhance predictions of other, distinct target substrate-cleavage sites. Our computational framework uses support vector machines combined with transfer machine learning and feature selection. To demonstrate the value of the model, we extracted a variety of substrate sequence-derived features and compared the performance of our method using both 5-fold cross-validation and independent tests. The results show that our transfer-learning-based method provides a robust performance, which is at least comparable to traditional feature-selection methods for prediction of MMP-2, -3, -7, -8, -9 and -12 substrate-cleavage sites on independent tests. The results also demonstrate that our proposed computational framework provides a useful alternative for the characterization of sequence-level determinants of MMP-substrate specificity.
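One much simplified way to picture knowledge transfer between MMP types is instance weighting, where examples from data-rich source types enter the classifier with reduced sample weights relative to the few target-type examples. The sketch below illustrates that general idea only, with synthetic data, and is not the authors' framework.

```python
# Instance-weighted SVM as a simplified illustration of transfer between a
# data-rich source MMP type and a data-poor target MMP type (synthetic data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X_source, y_source = rng.normal(size=(400, 30)), rng.integers(0, 2, 400)   # source-type substrates
X_target, y_target = rng.normal(size=(40, 30)),  rng.integers(0, 2, 40)    # few target-type substrates

X = np.vstack([X_source, X_target])
y = np.concatenate([y_source, y_target])
weights = np.concatenate([np.full(len(y_source), 0.2),    # down-weight transferred instances
                          np.full(len(y_target), 1.0)])

clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=weights)
```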
Gerczak, Tyler J.; Hunn, John D.; Lowden, Richard A.; ...
2016-08-15
Tristructural isotropic (TRISO) coated particle fuel is a promising fuel form for advanced reactor concepts such as high temperature gas-cooled reactors (HTGR) and is being developed domestically under the US Department of Energy’s Nuclear Reactor Technologies Initiative in support of Advanced Reactor Technologies. The fuel development and qualification plan includes a series of fuel irradiations to demonstrate fuel performance from the laboratory to commercial scale. The first irradiation campaign, AGR-1, included four separate TRISO fuel variants composed of multiple, laboratory-scale coater batches. The second irradiation campaign, AGR-2, included TRISO fuel particles fabricated by BWX Technologies with a larger coater representative of an industrial-scale system. The SiC layers of as-fabricated particles from the AGR-1 and AGR-2 irradiation campaigns have been investigated by electron backscatter diffraction (EBSD) to provide key information about the microstructural features relevant to fuel performance. The results of a comprehensive study of multiple particles from all constituent batches are reported. The observations indicate that there were microstructural differences between variants and among constituent batches in a single variant. Finally, insights on the influence of microstructure on the effective diffusivity of key fission products in the SiC layer are also discussed.
Yu, Kaixin; Wang, Xuetong; Li, Qiongling; Zhang, Xiaohui; Li, Xinwei; Li, Shuyu
2018-01-01
Morphological brain network plays a key role in investigating abnormalities in neurological diseases such as mild cognitive impairment (MCI) and Alzheimer's disease (AD). However, most of the morphological brain network construction methods only considered a single morphological feature. Each type of morphological feature has specific neurological and genetic underpinnings. A combination of morphological features has been proven to have better diagnostic performance compared with a single feature, which suggests that an individual morphological brain network based on multiple morphological features would be beneficial in disease diagnosis. Here, we proposed a novel method to construct individual morphological brain networks for two datasets by calculating the exponential function of multivariate Euclidean distance as the evaluation of similarity between two regions. The first dataset included 24 healthy subjects who were scanned twice within a 3-month period. The topological properties of these brain networks were analyzed and compared with previous studies that used different methods and modalities. Small world property was observed in all of the subjects, and the high reproducibility indicated the robustness of our method. The second dataset included 170 patients with MCI (86 stable MCI and 84 progressive MCI cases) and 169 normal controls (NC). The edge features extracted from the individual morphological brain networks were used to distinguish MCI from NC and separate MCI subgroups (progressive vs. stable) through the support vector machine in order to validate our method. The results showed that our method achieved an accuracy of 79.65% (MCI vs. NC) and 70.59% (stable MCI vs. progressive MCI) in a one-dimension situation. In a multiple-dimension situation, our method improved the classification performance with an accuracy of 80.53% (MCI vs. NC) and 77.06% (stable MCI vs. progressive MCI) compared with the method using a single feature. The results indicated that our method could effectively construct an individual morphological brain network based on multiple morphological features and could accurately discriminate MCI from NC and stable MCI from progressive MCI, and may provide a valuable tool for the investigation of individual morphological brain networks.
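The similarity definition described above can be sketched directly: standardize the regional morphological features, compute multivariate Euclidean distances between regions, and take an exponential to obtain edge weights. The feature values below are placeholders, not the study's data.

```python
# Individual morphological network: edge weights as an exponential of the
# multivariate Euclidean distance between regional feature vectors.
import numpy as np

rng = np.random.default_rng(8)
n_regions, n_features = 90, 4                     # e.g. thickness, area, volume, curvature (assumed)
feats = rng.normal(size=(n_regions, n_features))

# Standardise each morphological feature across regions before combining them.
z = (feats - feats.mean(0)) / feats.std(0)

diff = z[:, None, :] - z[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))               # multivariate Euclidean distance
similarity = np.exp(-dist)                        # edge weights of the individual network
np.fill_diagonal(similarity, 0.0)
```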
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2015-01-01
Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for the prediction of protein structural class, and it mainly uses the protein primary sequence, the predicted secondary structure sequence, and the position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving the prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. A 700-dimensional (700D) feature vector is constructed and the dimension is reduced to 224D by using principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for the prediction of protein structural classes of low-similarity sequences.
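The PCA reduction from 700D to 224D can be sketched in a few lines; the fused feature matrix here is a synthetic placeholder, not the actual CS/PsePSSM/ACT features.

```python
# Reduce a 700-D fused feature vector per protein to 224 principal components
# before classification (placeholder feature matrix).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
fused_features = rng.normal(size=(1189, 700))     # proteins x fused features (placeholder)

pca = PCA(n_components=224)
reduced = pca.fit_transform(fused_features)
print(reduced.shape, "explained variance:", pca.explained_variance_ratio_.sum().round(3))
```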
Score-moment combined linear discrimination analysis (SMC-LDA) as an improved discrimination method.
Han, Jintae; Chung, Hoeil; Han, Sung-Hwan; Yoon, Moon-Young
2007-01-01
A new discrimination method called the score-moment combined linear discrimination analysis (SMC-LDA) has been developed and its performance has been evaluated using three practical spectroscopic datasets. The key concept of SMC-LDA was to use not only the score from principal component analysis (PCA), but also the moment of the spectrum, as inputs for LDA to improve discrimination. Along with conventional score, moment is used in spectroscopic fields as an effective alternative for spectral feature representation. Three different approaches were considered. Initially, the score generated from PCA was projected onto a two-dimensional feature space by maximizing Fisher's criterion function (conventional PCA-LDA). Next, the same procedure was performed using only moment. Finally, both score and moment were utilized simultaneously for LDA. To evaluate discrimination performances, three different spectroscopic datasets were employed: (1) infrared (IR) spectra of normal and malignant stomach tissue, (2) near-infrared (NIR) spectra of diesel and light gas oil (LGO) and (3) Raman spectra of Chinese and Korean ginseng. For each case, the best discrimination results were achieved when both score and moment were used for LDA (SMC-LDA). Since the spectral representation character of moment was different from that of score, inclusion of both score and moment for LDA provided more diversified and descriptive information.
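A minimal sketch of the score-moment combination is given below: PCA scores are concatenated with simple spectral moments and fed to LDA. The spectra, labels and moment definition are illustrative assumptions, not the paper's exact formulation.

```python
# Concatenate PCA scores with simple spectral moments and train LDA on the
# joint inputs (placeholder spectra and class labels).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def spectral_moments(spectra, wavenumbers):
    p = spectra / spectra.sum(1, keepdims=True)               # normalise each spectrum
    m1 = (p * wavenumbers).sum(1)                              # centroid (first moment)
    m2 = (p * (wavenumbers - m1[:, None]) ** 2).sum(1)         # spread (second central moment)
    return np.column_stack([m1, m2])

rng = np.random.default_rng(10)
spectra = np.abs(rng.normal(size=(200, 500)))                  # placeholder spectra
labels = rng.integers(0, 2, 200)                               # placeholder classes
wn = np.linspace(400, 4000, 500)                               # assumed wavenumber axis

scores = PCA(n_components=5).fit_transform(spectra)
X = np.hstack([scores, spectral_moments(spectra, wn)])         # score + moment inputs
lda = LinearDiscriminantAnalysis().fit(X, labels)
```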
Keywords image retrieval in historical handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Saabni, Raid; El-Sana, Jihad
2013-01-01
A system is presented for spotting and searching keywords in handwritten Arabic documents. A slightly modified dynamic time warping algorithm is used to measure similarities between words. Two sets of features are generated from the outer contour of the words/word-parts. The first set is based on the angles between nodes on the contour and the second set is based on the shape context features taken from the outer contour. To recognize a given word, the segmentation-free approach is partially adopted, i.e., continuous word parts are used as the basic alphabet, instead of individual characters or complete words. Additional strokes, such as dots and detached short segments, are classified and used in a postprocessing step to determine the final comparison decision. The search for a keyword is performed by searching for its word parts in the correct order. The performance of the presented system was very encouraging in terms of efficiency and match rates. To evaluate the presented system, its performance is compared with that of three different systems. Unfortunately, there are no publicly available standard datasets with ground truth for testing Arabic keyword searching systems. Therefore, a private set of images, partially taken from the Juma'a Al-Majid Center in Dubai, is used for evaluation, while a slightly modified version of the IFN/ENIT database is used for training.
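A plain DTW distance between two 1-D contour feature sequences can be sketched as below; the sequences are placeholders and the authors' slight modification is not reproduced.

```python
# Classic dynamic time warping distance between two 1-D feature sequences
# (e.g. contour angle profiles of word parts).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

query = np.sin(np.linspace(0, 3, 40))            # placeholder keyword feature sequence
candidate = np.sin(np.linspace(0.2, 3.1, 55))    # placeholder word-part feature sequence
print(dtw_distance(query, candidate))
```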
Pattern recognition approach to the subsequent event of damaging earthquakes in Italy
NASA Astrophysics Data System (ADS)
Gentili, S.; Di Giovambattista, R.
2017-05-01
In this study, we investigate the occurrence of large aftershocks following the most significant earthquakes that occurred in Italy after 1980. In accordance with previous studies (Vorobieva and Panza, 1993; Vorobieva, 1999), we group clusters associated with mainshocks into two categories: "type A" if, given a mainshock of magnitude M, the subsequent strongest earthquake in the cluster has magnitude ≥M - 1, or "type B" otherwise. In this paper, we apply a pattern recognition approach using statistical features to foresee the class of the analysed clusters. The classification of the two categories is based on features of the time, space, and magnitude distribution of the aftershocks. Specifically, we analyse the temporal evolution of the radiated energy at different elapsed times after the mainshock, the spatio-temporal evolution of the aftershocks occurring within a few days, and the probability of a strong earthquake. An attempt is made to classify the studied region into smaller seismic zones with a prevalence of type A and type B clusters. We demonstrate that the two types of clusters have distinct preferred geographic locations within the Italian territory that likely reflect key properties of the deforming regions, different crustal domains and faulting styles. We use decision trees as classifiers of single features to characterize the features depending on the cluster type. The performance of the classification is tested by the leave-one-out method. The analysis is performed on different time spans after the mainshock to simulate the dependence of the accuracy on the information available as data accumulate over a longer period with increasing time after the mainshock.
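The single-feature decision-tree test with leave-one-out validation can be sketched as follows; the cluster features and A/B labels are synthetic placeholders, not the Italian catalogue data.

```python
# Evaluate each cluster feature separately with a shallow decision tree and
# leave-one-out cross-validation (placeholder features and labels).
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(11)
features = rng.normal(size=(60, 3))        # e.g. radiated energy, spatial spread, rate (assumed)
labels = (features[:, 0] + rng.normal(0, 0.5, 60) > 0).astype(int)   # 1 = type A, 0 = type B

for j in range(features.shape[1]):
    acc = cross_val_score(DecisionTreeClassifier(max_depth=2),
                          features[:, [j]], labels, cv=LeaveOneOut()).mean()
    print(f"feature {j}: LOO accuracy = {acc:.2f}")
```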
ALLY: An operator's associate for satellite ground control systems
NASA Technical Reports Server (NTRS)
Bushman, J. B.; Mitchell, Christine M.; Jones, P. M.; Rubin, K. S.
1991-01-01
The key characteristics of an intelligent advisory system are explored. A central feature is that human-machine cooperation should be based on a metaphor of human-to-human cooperation. ALLY, a computer-based operator's associate based on a preliminary theory of human-to-human cooperation, is discussed. ALLY assists the operator in carrying out the supervisory control functions for a simulated NASA ground control system. Experimental evaluation of ALLY indicates that operators using ALLY performed at least as well as they did when using a human associate, and in some cases even better.
Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition
NASA Astrophysics Data System (ADS)
Ryumin, D.; Karpov, A. A.
2017-05-01
In this article, we propose a new method for the parametric representation of the human lips region. The functional diagram of the method is described and implementation details are given, with an explanation of its key stages and features. The results of automatic detection of the regions of interest are illustrated. The speed of the method on several computers with different performance levels is reported. This universal method allows applying the parametric representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.
NASA Astrophysics Data System (ADS)
Maestro, P.; Gaffet, E.; Le Caër, G.; Mocellin, A.; Reynaud, E.; Rouxel, T.; Soulard, M.; Patarin, J.; Thilly, L.; Lecouturier, F.
Inorganic reinforcements are used in rubber, and in particular in tyre treads for light vehicles, in order to improve the trade-off between three key features of tyres: road holding or adherence, especially when the road is wet or snow-covered (road safety); rolling resistance (petrol consumption); and resistance to wear (lifetime of the tyre). Over the last ten years, highly dispersible silicas (HDS) developed by Rhodia have been more and more widely used as a substitute for the traditionally used carbon black. The advantage of HDS materials is that they improve road holding and reduce rolling resistance, while maintaining the same level of resistance to wear.
Low-Cost, High-Performance Cryocoolers for In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Martin, J. L.; Corey, J. A.; Peters, T. A.
1999-01-01
A key feature of many In-Situ Resource Utilization (ISRU) schemes is the production of rocket fuel and oxidizer from the Martian atmosphere. Many of the fuels under consideration will require cryogenic cooling for efficient long-term storage. Although significant research has been focused on the techniques for producing the fuels from Martian resources, little effort has been expended on the development of cryocoolers to efficiently liquefy these fuels. This paper describes the design of a pulse tube liquefier optimized for liquefying oxygen produced by an In-Situ Propellant Production (ISPP) plant on Mars.
PubMed-EX: a web browser extension to enhance PubMed search with text mining features.
Tsai, Richard Tzong-Han; Dai, Hong-Jie; Lai, Po-Ting; Huang, Chi-Hsin
2009-11-15
PubMed-EX is a browser extension that marks up PubMed search results with additional text-mining information. PubMed-EX's page mark-up, which includes section categorization and gene/disease and relation mark-up, can help researchers to quickly focus on key terms and provide additional information on them. All text processing is performed server-side, freeing up user resources. PubMed-EX is freely available at http://bws.iis.sinica.edu.tw/PubMed-EX and http://iisr.cse.yzu.edu.tw:8000/PubMed-EX/.
A unified teleoperated-autonomous dual-arm robotic system
NASA Technical Reports Server (NTRS)
Hayati, Samad; Lee, Thomas S.; Tso, Kam Sing; Backes, Paul G.; Lloyd, John
1991-01-01
A description is given of a complete robot control facility built as part of a NASA telerobotics program to develop a state-of-the-art robot control environment for performing experiments in the repair and assembly of space-like hardware, to gain practical knowledge of such work, and to improve the associated technology. The basic architecture of the manipulator control subsystem is presented. The multiarm Robot Control C Library (RCCL), a key software component of the system, is described, along with its implementation on a Sun-4 computer. The system's simulation capability is also described, and the teleoperation and shared control features are explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashenfelter, J.; Jaffe, D.; Diwan, M. V.
A meter-long, 23-liter EJ-309 liquid scintillator detector has been constructed to study the light collection and pulse-shape discrimination performance of elongated scintillator cells for the PROSPECT reactor antineutrino experiment. The magnitude and uniformity of light collection and neutron-gamma discrimination power in the energy range of antineutrino inverse beta decay products have been studied using gamma and spontaneous fission calibration sources deployed along the cell axis. We also study neutron-gamma discrimination and light collection abilities for differing PMT and reflector configurations. As a result, key design features for optimizing MeV-scale response and background rejection capabilities are identified.
Proton-Proton and Proton-Antiproton Colliders
NASA Astrophysics Data System (ADS)
Scandale, Walter
2014-04-01
In the last five decades, proton-proton and proton-antiproton colliders have been the most powerful tools for high energy physics investigations. They have also deeply catalyzed innovation in accelerator physics and technology. Among the large number of proposed colliders, only four have really succeeded in becoming operational: the ISR, the Spp̄S, the Tevatron and the LHC. Another hadron collider, RHIC, originally conceived for ion-ion collisions, has also been operated part-time with polarized protons. Although a vast literature documenting them is available, this paper is intended to provide a quick synthesis of their main features and key performance.
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Kim, Renaid
2017-03-01
Understanding the key radiogenomic associations for breast cancer between DCE-MRI and micro-RNA expressions is the foundation for the discovery of radiomic features as biomarkers for assessing tumor progression and prognosis. We conducted a study to analyze the radiogenomic associations for breast cancer using the TCGA-TCIA data set. The core idea that tumor etiology is a function of the behavior of miRNAs is used to build the regression models. The associations based on regression are analyzed for three study outcomes: diagnosis, prognosis, and treatment. The diagnosis group consists of miRNAs associated with clinicopathologic features of breast cancer and significant aberration of expression in breast cancer patients. The prognosis group consists of miRNAs which are closely associated with tumor suppression and regulation of cell proliferation and differentiation. The treatment group consists of miRNAs that contribute significantly to the regulation of metastasis thereby having the potential to be part of therapeutic mechanisms. As a first step, important miRNA expressions were identified and their ability to classify the clinical phenotypes based on the study outcomes was evaluated using the area under the ROC curve (AUC) as a figure-of-merit. The key mapping between the selected miRNAs and radiomic features were determined using least absolute shrinkage and selection operator (LASSO) regression analysis within a two-loop leave-one-out cross-validation strategy. These key associations indicated a number of radiomic features from DCE-MRI to be potential biomarkers for the three study outcomes.
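The sketch below illustrates the general shape of such an analysis: a LASSO model (here scikit-learn's LassoCV, standing in for whatever implementation the authors used) is fit inside an outer leave-one-out loop to relate synthetic "radiomic" features to a synthetic "miRNA expression" target; everything shown is placeholder data, not the TCGA/TCIA cohort.

    # Hedged sketch: LASSO feature selection inside an outer leave-one-out loop.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(1)
    n_cases, n_features = 30, 50
    X = rng.normal(size=(n_cases, n_features))            # stand-in radiomic features
    y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=n_cases)  # stand-in miRNA expression

    selected = np.zeros(n_features)
    preds = np.zeros(n_cases)
    for train, test in LeaveOneOut().split(X):
        model = LassoCV(cv=5).fit(X[train], y[train])     # inner loop tunes the penalty
        preds[test] = model.predict(X[test])
        selected += model.coef_ != 0                      # track features kept per fold

    print("features kept in every outer fold:", np.flatnonzero(selected == n_cases))
    print("LOO prediction correlation:", round(float(np.corrcoef(preds, y)[0, 1]), 2))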
Local Multi-Grouped Binary Descriptor With Ring-Based Pooling Configuration and Optimization.
Gao, Yongqiang; Huang, Weilin; Qiao, Yu
2015-12-01
Local binary descriptors are attracting increasing attention due to their great advantage in computational speed, which enables real-time performance in numerous image/vision applications. Various methods have been proposed to learn data-dependent binary descriptors. However, most existing binary descriptors aim overly at computational simplicity, at the expense of significant information loss that causes ambiguity in the Hamming-distance similarity measure. In this paper, considering that multiple features might share complementary information, we present a novel local binary descriptor, referred to as the ring-based multi-grouped descriptor (RMGD), to bridge the performance gap between current binary and floating-point descriptors. Our contributions are twofold. First, we introduce a new pooling configuration based on spatial ring-region sampling, allowing binary tests on the full set of pairwise regions with different shapes, scales, and distances. This leads to a more meaningful description than existing methods, which normally apply a limited set of pooling configurations. An extended AdaBoost is then proposed for efficient bit selection by emphasizing high variance and low correlation, achieving a highly compact representation. Second, the RMGD is computed from multiple image properties from which binary strings are extracted. We cast multi-grouped feature integration as a rankSVM or sparse support vector machine learning problem, so that different features can compensate strongly for each other, which is the key to discriminativeness and robustness. The performance of the RMGD was evaluated on a number of publicly available benchmarks, where it significantly outperforms state-of-the-art binary descriptors.
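For readers unfamiliar with binary descriptors, the following minimal sketch shows only the Hamming-distance matching step that such descriptors (including, presumably, the RMGD) rely on; the bit strings here are random placeholders, not RMGD outputs.

    # Sketch of Hamming-distance matching between binary descriptors via XOR.
    # Smaller distance = more similar local patch. Descriptors are random bits.
    import numpy as np

    rng = np.random.default_rng(2)
    bits = 256
    desc_a = rng.integers(0, 2, size=(100, bits), dtype=np.uint8)  # descriptors of image A
    desc_b = rng.integers(0, 2, size=(120, bits), dtype=np.uint8)  # descriptors of image B

    # Pairwise Hamming distances: XOR the bit strings and count differing bits.
    dists = (desc_a[:, None, :] ^ desc_b[None, :, :]).sum(axis=2)
    matches = dists.argmin(axis=1)
    print("descriptor 0 of A best matches descriptor", matches[0], "of B")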
Boster, Jamie B; McCarthy, John W
2018-05-01
The purpose of this study was to gain insight from speech-language pathologists (SLPs) and parents of children with autism spectrum disorder (ASD) regarding appealing features of augmentative and alternative communication (AAC) applications. Two separate 1-hour focus groups were conducted with 8 SLPs and 5 parents of children with ASD to identify appealing design features of AAC apps, their benefits, and potential concerns. Participants were shown novel interface designs for communication mode, play mode, and incentive systems. Participants responded to poll questions and provided benefits and drawbacks of the features as part of a structured discussion. SLPs and parents identified a range of appealing features in communication mode (customization, animation and colour-coding) as well as in play mode (games and videos). SLPs preferred interfaces that supported motor planning and instruction, while parents preferred features, such as character assistants, that would appeal to their child. Overall, SLPs and parents agreed on features for future AAC apps. SLPs and parents have valuable input regarding future AAC app design, informed by their experiences with children with ASD. Both groups are key stakeholders in the design process and should be included in future design and research endeavors. Implications for Rehabilitation: AAC applications for the iPad are often designed based on previous devices without consideration of new features. Ensuring that the designs of new interfaces are appealing and beneficial for children with ASD can potentially further support their communication. This study demonstrates how key stakeholders in AAC, including speech-language pathologists and parents, can provide information to support the development of future AAC interface designs. Key stakeholders may be an untapped resource in the development of future AAC interfaces for children with ASD.
Cemento-osseous dysplasia of the jaw bones: key radiographic features
Alsufyani, NA; Lam, EWN
2011-01-01
Objective: The purpose of this study is to assess possible diagnostic differences between general dentists (GPs) and oral and maxillofacial radiologists (RGs) in the identification of pathognomonic radiographic features of cemento-osseous dysplasia (COD) and its interpretation. Methods: Using a systematic objective survey instrument, 3 RGs and 3 GPs reviewed 50 image sets of COD and similarly appearing entities (dense bone island, cementoblastoma, cemento-ossifying fibroma, fibrous dysplasia, complex odontoma and sclerosing osteitis). Participants were asked to identify the presence or absence of radiographic features and then to make an interpretation of the images. Results: RGs identified a well-defined border (odds ratio (OR) 6.67, P < 0.05); radiolucent periphery (OR 8.28, P < 0.005); bilateral occurrence (OR 10.23, P < 0.01); mixed radiolucent/radiopaque internal structure (OR 10.53, P < 0.01); the absence of non-concentric bony expansion (OR 7.63, P < 0.05); and the association with anterior and posterior teeth (OR 4.43, P < 0.05) as key features of COD. Consequently, RGs were able to correctly interpret 79.3% of COD cases. In contrast, GPs identified the absence of root resorption (OR 4.52, P < 0.05) and the association with anterior and posterior teeth (OR 3.22, P = 0.005) as the only key features of COD and were able to correctly interpret 38.7% of COD cases. Conclusions: There are statistically significant differences between RGs and GPs in the identification and interpretation of the radiographic features associated with COD (P < 0.001). We conclude that COD is radiographically discernable from other similarly appearing entities only if the characteristic radiographic features are correctly identified and then correctly interpreted. PMID:21346079
NASA Astrophysics Data System (ADS)
Nengker, T.; Choudhary, A.; Dimri, A. P.
2018-04-01
The ability of an ensemble of five regional climate models (hereafter RCMs) under the Coordinated Regional Climate Downscaling Experiment-South Asia (hereafter CORDEX-SA) to simulate the key features of the present-day near-surface mean air temperature (Tmean) climatology (1970-2005) over the Himalayan region is studied. The purpose of this paper is to understand the consistency of the models' performance across the ensemble, space, and seasons. A number of statistical measures such as trend, correlation, variance, and probability distribution functions are applied to evaluate the performance of the models against observations, and simultaneously the underlying uncertainties between them, for four different seasons. The most evident finding from the study is a large cold bias (-6 to -8 °C) that is seen systematically across all the models and across space and time over the Himalayan region. However, these RCMs, with their fine resolution, perform very well in capturing the spatial distribution of the temperature features, as indicated by a consistently high spatial correlation (greater than 0.9) with the observations in all seasons. In spite of the underestimation of simulated temperature and a general intensification of the cold bias with increasing elevation, the models show a greater rate of warming than the observations throughout the entire altitudinal stretch of the study region. During winter, the simulated rate of warming becomes even higher at high altitudes. Moreover, a seasonal response of model performance and its spatial variability to elevation is found.
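As a toy illustration of two of the measures mentioned (mean bias and spatial pattern correlation), the sketch below compares a synthetic "model" temperature field carrying an imposed cold bias against a synthetic "observed" field; it is not the CORDEX-SA evaluation itself.

    # Sketch of mean (cold) bias and spatial correlation between a simulated and
    # an observed seasonal-mean temperature field (synthetic 2-D grids).
    import numpy as np

    rng = np.random.default_rng(3)
    obs = rng.normal(loc=10.0, scale=5.0, size=(60, 80))       # observed Tmean grid (deg C)
    model = obs - 7.0 + rng.normal(scale=1.0, size=obs.shape)  # model field with a cold bias

    bias = (model - obs).mean()
    spatial_corr = np.corrcoef(model.ravel(), obs.ravel())[0, 1]
    print(f"mean bias = {bias:.1f} C, spatial correlation = {spatial_corr:.2f}")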
Handwriting: Feature Correlation Analysis for Biometric Hashes
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Steinmetz, Ralf
2004-12-01
In the application domain of electronic commerce, biometric authentication can provide one possible solution to the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to what degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different sizes. The method presented allows determination of the effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
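The following sketch is a loose interpretation of the three evaluation components (intrapersonal scatter, interpersonal entropy, and their correlation) computed on synthetic handwriting-feature data; the exact formulas in the biometric hash work may well differ.

    # Rough sketch (an interpretation, not the authors' exact formulas) of
    # per-feature intrapersonal scatter, interpersonal entropy, and the
    # correlation between the two measures.
    import numpy as np
    from scipy.stats import entropy, pearsonr

    rng = np.random.default_rng(4)
    n_persons, n_samples, n_features = 20, 10, 6
    # data[p, s, f] = feature f from handwriting sample s of person p (synthetic)
    data = rng.normal(size=(n_persons, n_samples, n_features)) \
           + 3.0 * rng.normal(size=(n_persons, 1, n_features))

    scatter = data.std(axis=1).mean(axis=0)            # intrapersonal deviation per feature
    person_means = data.mean(axis=1)                   # one value per person and feature
    inter_entropy = np.array([
        entropy(np.histogram(person_means[:, f], bins=8)[0] + 1e-9)
        for f in range(n_features)
    ])
    corr, _ = pearsonr(scatter, inter_entropy)         # relation between both measures
    print("scatter:", scatter.round(2))
    print("entropy:", inter_entropy.round(2))
    print("correlation between measures:", round(float(corr), 2))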
A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.
Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan
2014-07-18
Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search for the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to bigger and more realistic datasets, maintaining 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp.
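A hedged sketch of the overall pipeline is shown below: features are ranked and filtered, then a Random Forest is trained and cross-validated. The univariate F-score filter stands in for the paper's DX ranking method, and the data come from scikit-learn's synthetic generator rather than encoded protein pairs.

    # Sketch: rank features, keep the top-k, classify with a Random Forest.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=300, n_informative=20,
                               random_state=0)        # stand-in for encoded protein pairs
    X_top = SelectKBest(f_classif, k=50).fit_transform(X, y)   # stand-in for DX ranking
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy on top-50 features:",
          cross_val_score(rf, X_top, y, cv=5).mean().round(3))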
Continuous QKD and high speed data encryption
NASA Astrophysics Data System (ADS)
Zbinden, Hugo; Walenta, Nino; Guinnard, Olivier; Houlmann, Raphael; Wen, Charles Lim Ci; Korzh, Boris; Lunghi, Tommaso; Gisin, Nicolas; Burg, Andreas; Constantin, Jeremy; Legré, Matthieu; Trinkler, Patrick; Caselunghe, Dario; Kulesza, Natalia; Trolliet, Gregory; Vannel, Fabien; Junod, Pascal; Auberson, Olivier; Graf, Yoan; Curchod, Gilles; Habegger, Gilles; Messerli, Etienne; Portmann, Christopher; Henzen, Luca; Keller, Christoph; Pendl, Christian; Mühlberghuber, Michael; Roth, Christoph; Felber, Norbert; Gürkaynak, Frank; Schöni, Daniel; Muheim, Beat
2013-10-01
We present the results of a Swiss project dedicated to the development of high-speed quantum key distribution and data encryption. The QKD engine features fully automated key exchange, hardware key distillation based on finite-key security analysis, efficient authentication, wavelength division multiplexing of the quantum and classical channels, and one-time pad encryption. The encryption device allows authenticated symmetric-key encryption (e.g., AES) at rates of up to 100 Gb/s. A new quantum key can be uploaded up to 1000 times per second from the QKD engine.
Feature Selection for Chemical Sensor Arrays Using Mutual Information
Wang, X. Rosalind; Lizier, Joseph T.; Nowotny, Thomas; Berna, Amalia Z.; Prokopenko, Mikhail; Trowell, Stephen C.
2014-01-01
We address the problem of feature selection for classifying a diverse set of chemicals using an array of metal oxide sensors. Our aim is to evaluate a filter approach to feature selection with reference to previous work, which used a wrapper approach on the same data set, and established best features and upper bounds on classification performance. We selected feature sets that exhibit the maximal mutual information with the identity of the chemicals. The selected features closely match those found to perform well in the previous study using a wrapper approach to conduct an exhaustive search of all permitted feature combinations. By comparing the classification performance of support vector machines (using features selected by mutual information) with the performance observed in the previous study, we found that while our approach does not always give the maximum possible classification performance, it always selects features that achieve classification performance approaching the optimum obtained by exhaustive search. We performed further classification using the selected feature set with some common classifiers and found that, for the selected features, Bayesian Networks gave the best performance. Finally, we compared the observed classification performances with the performance of classifiers using randomly selected features. We found that the selected features consistently outperformed randomly selected features for all tested classifiers. The mutual information filter approach is therefore a computationally efficient method for selecting near optimal features for chemical sensor arrays. PMID:24595058
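The filter approach described here maps naturally onto off-the-shelf tools; the sketch below ranks synthetic "sensor" features by mutual information with the class label and evaluates a simple classifier on the selected subset (Gaussian naive Bayes as a stand-in for the Bayesian networks reported in the paper).

    # Sketch of mutual-information filter feature selection on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                               n_classes=3, n_clusters_per_class=1, random_state=0)
    selector = SelectKBest(mutual_info_classif, k=8).fit(X, y)
    X_sel = selector.transform(X)
    print("selected feature indices:", np.flatnonzero(selector.get_support()))
    print("CV accuracy with selected features:",
          cross_val_score(GaussianNB(), X_sel, y, cv=5).mean().round(3))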
The Distance from Isolation: Why Communities Are the Logical Conclusion in e-Learning
ERIC Educational Resources Information Center
Weller, Martin
2007-01-01
This paper argues that the internet is built around key technology design features of openness, robustness and decentralisation. These design features have transformed into social features, which are embodied within the cultural values of the internet. By examining applications that have become popular on the net, the importance of these values is…
Some Effects of Procedural Variations on Choice Responding in Concurrent Chains
ERIC Educational Resources Information Center
Moore, J.
2009-01-01
The present research used pigeons in a three-key operant chamber and varied procedural features pertaining to both initial and terminal links of concurrent chains. The initial links randomly alternated on the side keys during a session, while the terminal links always appeared on the center key. Both equal and unequal initial-link schedules were…
ClinicalKey: a point-of-care search engine.
Vardell, Emily
2013-01-01
ClinicalKey is a new point-of-care resource for health care professionals. Through controlled vocabulary, ClinicalKey offers a cross section of resources on diseases and procedures, from journals to e-books and practice guidelines to patient education. A sample search was conducted to demonstrate the features of the database, and a comparison with similar tools is presented.
Takahashi, Yuma; Kagawa, Kotaro; Svensson, Erik I; Kawata, Masakado
2014-07-18
The effect of evolutionary changes in traits and phenotypic/genetic diversity on ecological dynamics has received much theoretical attention; however, the mechanisms and ecological consequences are usually unknown. Female-limited colour polymorphism in damselflies is a counter-adaptation to male mating harassment, and thus, is expected to alter population dynamics through relaxing sexual conflict. Here we show the side effect of the evolution of female morph diversity on population performance (for example, population productivity and sustainability) in damselflies. Our theoretical model incorporating key features of the sexual interaction predicts that the evolution of increased phenotypic diversity will reduce overall fitness costs to females from sexual conflict, which in turn will increase productivity, density and stability of a population. Field data and mesocosm experiments support these model predictions. Our study suggests that increased phenotypic diversity can enhance population performance that can potentially reduce extinction rates and thereby influence macroevolutionary processes.
NASA Astrophysics Data System (ADS)
Plaimer, Martin; Breitfuß, Christoph; Sinz, Wolfgang; Heindl, Simon F.; Ellersdorfer, Christian; Steffan, Hermann; Wilkening, Martin; Hennige, Volker; Tatschl, Reinhard; Geier, Alexander; Schramm, Christian; Freunberger, Stefan A.
2016-02-01
Lithium-ion batteries are in widespread use in electric and hybrid vehicles. Besides features like energy density, cost, lifetime, and recyclability, the safety of a battery system is of prime importance. The separator material impacts all these properties and therefore requires an informed selection. The interplay between the mechanical and electrochemical properties as key selection criteria is investigated. Mechanical properties were investigated using tensile and puncture penetration tests at abuse-relevant conditions. To investigate the electrochemical performance in terms of effective conductivity, a method based on impedance spectroscopy was introduced. This methodology is applied to evaluate ten commercial separators, which allows for a trade-off analysis of mechanical versus electrochemical performance. Based on the results, and in combination with other factors, this offers an effective approach to selecting suitable separators for automotive applications.
Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi
2007-01-01
The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performance in the propene oxidation reaction. In this paper, the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to estimate the differences in variance and information content of various attributes and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used for the creation of various semi-empirical models. Finally, a robust classification model was obtained that assigns selected attributes of solid compounds, used as input, to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.
New Cogging Torque Reduction Methods for Permanent Magnet Machine
NASA Astrophysics Data System (ADS)
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
Permanent magnet (PM) motors, especially the permanent magnet synchronous motor (PMSM), are increasingly adopted in industrial systems and are widely used in various applications. The key features of this machine include high power and torque density, an extended speed range, high efficiency, good dynamic performance, and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes new methods for cogging torque reduction. Based on the simulation, combining skewing with radial pole pairing and combining skewing with axial pole pairing reduce the cogging torque by up to 71.86% and 65.69%, respectively.
Asynchronous transfer mode link performance over ground networks
NASA Technical Reports Server (NTRS)
Chow, E. T.; Markley, R. W.
1993-01-01
The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications and, in particular, full-motion video are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, a lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video quality depending on the networks and computers were observed, and the results are reported.
Chiral Maxwell demon in a quantum Hall system with a localized impurity
NASA Astrophysics Data System (ADS)
Rosselló, Guillem; López, Rosa; Platero, Gloria
2017-08-01
We investigate the role of chirality in the performance of a Maxwell demon implemented in a quantum Hall bar with a localized impurity. Within a stochastic thermodynamics description, we investigate the ability of such a demon to drive a current against a bias. We show that the ability of the demon to perform is directly related to its ability to extract information from the system. The key features of the proposed Maxwell demon are the topological properties of the quantum Hall system. The asymmetry of the electronic interactions felt at the localized state when the magnetic field is reversed, together with the energy-dependent (and asymmetric) tunneling barriers that connect this state to the Hall edge modes, allows the demon to work properly.
From quantum physics to digital communication: Single sideband continuous phase modulation
NASA Astrophysics Data System (ADS)
Farès, Haïfa; Christian Glattli, D.; Louët, Yves; Palicot, Jacques; Moy, Christophe; Roulleau, Preden
2018-01-01
In the present paper, we propose a new frequency-shift keying continuous phase modulation (FSK-CPM) scheme that inherently has the interesting feature of a single-sideband (SSB) spectrum, providing a very compact frequency occupation. First, the original principle, inspired by quantum physics (levitons), is presented. We then address the problem of low-complexity coherent detection of this new waveform, based on orthonormal wave functions used to perform matched filtering for efficient demodulation. This shows that the proposed modulation can operate using existing digital communication technology, since only well-known operations are performed (e.g., filtering, integration). The SSB property can be exploited to allow high-bit-rate transmissions at low carrier frequencies without concern for the image-frequency degradation effects typical of ordinary double-sideband signals.
Fast converging minimum probability of error neural network receivers for DS-CDMA communications.
Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J
2004-03-01
We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
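The sketch below shows one stochastic-gradient Bayesian Personalized Ranking update for a toy per-predicate latent-feature model: an observed (subject, object) link is pushed to score above a sampled unobserved one. The triples, dimensions, and hyperparameters are invented placeholders; the real system's model is more elaborate.

    # Sketch of BPR updates for a latent-feature link predictor (one predicate).
    import numpy as np

    rng = np.random.default_rng(5)
    n_entities, dim, lr, reg = 50, 16, 0.05, 0.01
    U = rng.normal(scale=0.1, size=(n_entities, dim))   # subject embeddings
    V = rng.normal(scale=0.1, size=(n_entities, dim))   # object embeddings
    positives = [(0, 1), (2, 3), (4, 5)]                # observed links for this predicate

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(2000):
        s, o_pos = positives[rng.integers(len(positives))]
        o_neg = int(rng.integers(n_entities))           # sampled unobserved object
        u = U[s].copy()
        x = u @ V[o_pos] - u @ V[o_neg]                 # margin of observed over unobserved
        g = sigmoid(-x)                                 # gradient factor of -log sigmoid(x)
        U[s]     -= lr * (-g * (V[o_pos] - V[o_neg]) + reg * U[s])
        V[o_pos] -= lr * (-g * u + reg * V[o_pos])
        V[o_neg] -= lr * ( g * u + reg * V[o_neg])

    print("score of observed link (0, 1):", U[0] @ V[1])
    print("score of random pair (0, 7):  ", U[0] @ V[7])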
An on-board pedestrian detection and warning system with features of side pedestrian
NASA Astrophysics Data System (ADS)
Cheng, Ruzhong; Zhao, Yong; Wong, ChupChung; Chan, KwokPo; Xu, Jiayao; Wang, Xin'an
2012-01-01
Automotive Active Safety (AAS) is the main branch of intelligent automobile research, and pedestrian detection is a key problem in AAS because it is related to the casualties of most vehicle accidents. For on-board pedestrian detection algorithms, the main problem is to balance efficiency and accuracy so that the on-board system is usable in real scenes; therefore, an on-board pedestrian detection and warning system whose algorithm considers the features of side pedestrians is proposed. The system includes two modules: a pedestrian detection module and a warning module. Haar features and a cascade of stage classifiers trained by AdaBoost are applied first, and then HOG features and an SVM classifier are used to reject false positives. To make these time-consuming algorithms available for real-time use, a divide-window method together with an operator context scanning (OCS) method is applied to increase efficiency. To incorporate the vehicle's velocity information, the distance to the detected pedestrian is also obtained, so the system can judge whether there is a potential danger for a pedestrian in front. On a new dataset captured in urban environments with side pedestrians on zebra crossings, the embedded system and its algorithm achieve results suitable for on-board side pedestrian detection.
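As a rough illustration of the cascade-then-refine idea, the sketch below chains OpenCV's stock Haar full-body cascade with its default HOG+SVM people detector; these stand in for the authors' own trained classifiers, the divide-window and OCS speed-ups are omitted, and 'frame.jpg' is a placeholder input image.

    # Two-stage sketch: Haar cascade proposes candidates, HOG+SVM confirms them.
    import cv2

    frame = cv2.imread("frame.jpg")                      # placeholder camera frame
    if frame is None:
        raise SystemExit("provide frame.jpg")
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

    confirmed = []
    for (x, y, w, h) in candidates:
        roi = cv2.resize(frame[y:y + h, x:x + w], (64, 128))  # HOG people-detector window
        found, _ = hog.detectMultiScale(roi)
        if len(found) > 0:                               # HOG+SVM agrees: keep detection
            confirmed.append((x, y, w, h))

    print("candidates:", len(candidates), "confirmed:", len(confirmed))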
Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.
Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning
2017-01-01
Differentiation of glioblastoma multiforme (GBM) and lymphoma using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is a challenge because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from the 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build the feature vector, and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was 96.84%, which indicates that it is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis.
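A minimal sketch of this three-step pipeline on synthetic data follows: candidate features are filtered with a t-test and the survivors feed an SVM evaluated with leave-one-out cross-validation; the feature values are random stand-ins for the real ROI measurements.

    # Sketch: t-test feature filtering followed by SVM with leave-one-out CV.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(6)
    n_gbm, n_lym, n_feat = 58, 37, 20
    X = np.vstack([rng.normal(0.0, 1.0, (n_gbm, n_feat)),
                   rng.normal(0.5, 1.0, (n_lym, n_feat))])   # stand-in ROI features
    y = np.array([0] * n_gbm + [1] * n_lym)

    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    X_sel = X[:, p < 0.05]                                   # keep discriminative features
    acc = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=LeaveOneOut()).mean()
    print(f"{X_sel.shape[1]} features kept, LOO accuracy = {acc:.2f}")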
Quantum key distribution: vulnerable if imperfectly implemented
NASA Astrophysics Data System (ADS)
Leuchs, G.
2013-10-01
We report several vulnerabilities found in Clavis2, the flagship quantum key distribution (QKD) system from ID Quantique. We show the hacking of a calibration sequence run by Clavis2 to synchronize the Alice and Bob devices before performing the secret key exchange. This hack induces a temporal detection efficiency mismatch in Bob that can allow Eve to break the security of the cryptosystem using faked states. We also experimentally investigate the superlinear behaviour of the single-photon detectors (SPDs) used by Bob. Due to this superlinearity, the SPDs feature an actual multi-photon detection probability which is generally higher than the theoretically modelled value. We show how this increases the risk of detector control attacks on QKD systems (including Clavis2) employing such SPDs. Finally, we review the experimental feasibility of Trojan-horse attacks. In the case of Clavis2, the objective is to read Bob's phase modulator to acquire knowledge of his basis choice, as this information suffices for constructing the raw key in the Scarani-Acin-Ribordy-Gisin 2004 (SARG04) protocol. We work in close collaboration with ID Quantique, and we notified them of all these loopholes in advance. Wherever possible, we or ID Quantique proposed countermeasures, and they implemented suitable patches and upgraded their systems.
PANTHER. Pattern ANalytics To support High-performance Exploitation and Reasoning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czuchlewski, Kristina Rodriguez; Hart, William E.
Sandia has approached the analysis of big datasets with an integrated methodology that uses computer science, image processing, and human factors to exploit critical patterns and relationships in large datasets despite the variety and rapidity of information. The work is part of a three-year LDRD Grand Challenge called PANTHER (Pattern ANalytics To support High-performance Exploitation and Reasoning). To maximize data analysis capability, Sandia pursued scientific advances across three key technical domains: (1) geospatial-temporal feature extraction via image segmentation and classification; (2) geospatial-temporal analysis capabilities tailored to identify and process new signatures more efficiently; and (3) domain-relevant models of human perception and cognition informing the design of analytic systems. Our integrated results include advances in geographical information systems (GIS) in which we discover activity patterns in noisy, spatial-temporal datasets using geospatial-temporal semantic graphs. We employed computational geometry and machine learning to allow us to extract and predict spatial-temporal patterns and outliers from large aircraft and maritime trajectory datasets. We automatically extracted static and ephemeral features from real, noisy synthetic aperture radar imagery for ingestion into a geospatial-temporal semantic graph. We worked with analysts and investigated analytic workflows to (1) determine how experiential knowledge evolves and is deployed in high-demand, high-throughput visual search workflows, and (2) better understand visual search performance and attention. Through PANTHER, Sandia's fundamental rethinking of key aspects of geospatial data analysis permits the extraction of much richer information from large amounts of data. The project results enable analysts to examine mountains of historical and current data that would otherwise go untouched, while also gaining meaningful, measurable, and defensible insights into overlooked relationships and patterns. The capability is directly relevant to the nation's nonproliferation remote-sensing activities and has broad national security applications for military and intelligence-gathering organizations.
NASA Astrophysics Data System (ADS)
Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish
2018-02-01
The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant in the context of MRI data, where signal intensity values lack a tissue-specific, quantitative meaning and depend on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting, specifically for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: the multivariate coefficient of variation and the instability score. Our results demonstrated that Haralick features were the most reproducible between all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as performing highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but only for certain parameter combinations. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
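As a simplified stand-in for the reproducibility measures used here, the sketch below computes a per-feature cross-site coefficient of variation on synthetic feature values; the paper's multivariate coefficient of variation and instability score are more involved.

    # Sketch of a per-feature cross-site reproducibility check (synthetic values).
    import numpy as np

    rng = np.random.default_rng(7)
    n_sites, n_cases, n_features = 4, 30, 10
    # feature_values[site, case, feature] within a homogeneous non-tumor region
    feature_values = rng.normal(loc=1.0, scale=0.1, size=(n_sites, n_cases, n_features))
    feature_values[2] *= 1.4                                 # one site drifts

    site_means = feature_values.mean(axis=1)                 # shape (sites, features)
    cv = site_means.std(axis=0) / np.abs(site_means.mean(axis=0))
    for f, v in enumerate(cv):
        flag = "reproducible" if v < 0.1 else "unstable"
        print(f"feature {f}: cross-site CV = {v:.2f} ({flag})")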
Arooj, Mahreen; Sakkiah, Sugunadevi; Cao, Guang ping; Lee, Keun Woo
2013-01-01
Due to the inherent redundancy and robustness of many biological networks and pathways, multitarget inhibitors present a new prospect in the pharmaceutical industry for the treatment of complex diseases. Nevertheless, designing multitarget inhibitors is at the same time a great challenge for medicinal chemists. We have developed a novel computational approach that integrates the affinity predictions from structure-based virtual screening with a dual ligand-based pharmacophore to discover potential dual inhibitors of human thymidylate synthase (hTS) and human dihydrofolate reductase (hDHFR). These are key enzymes in the folate metabolic pathway, which is necessary for the biosynthesis of RNA, DNA, and protein. Their inhibition has found clinical utility in antitumor, antimicrobial, and antiprotozoal agents. A druglike database was utilized to perform dual-target docking studies. Hits identified through the docking experiments were mapped onto a dual pharmacophore which was developed from experimentally known dual inhibitors of hTS and hDHFR. The pharmacophore mapping procedure helped us eliminate compounds that do not possess the basic chemical features necessary for dual inhibition. Finally, three structurally diverse hit compounds that showed key interactions at both active sites, mapped well onto the dual pharmacophore, and exhibited the lowest binding energies were regarded as possible dual inhibitors of hTS and hDHFR. Furthermore, optimization studies were performed for the final dual hit compound, and eight optimized dual hits demonstrating excellent binding features at the target systems were also regarded as possible dual inhibitors of hTS and hDHFR. In general, the strategy used in the current study could be a promising computational approach and may be generally applicable to other dual-target drug designs.
Oceans 2.0: Interactive tools for the Visualization of Multi-dimensional Ocean Sensor Data
NASA Astrophysics Data System (ADS)
Biffard, B.; Valenzuela, M.; Conley, P.; MacArthur, M.; Tredger, S.; Guillemot, E.; Pirenne, B.
2016-12-01
Ocean Networks Canada (ONC) operates ocean observatories on all three of Canada's coasts. The instruments produce 280 gigabytes of data per day, with half a petabyte archived so far. In 2015, 13 terabytes were downloaded by over 500 users from across the world. ONC's data management system is referred to as "Oceans 2.0" owing to its interactive, participative features. A key element of Oceans 2.0 is real-time data acquisition and processing: custom device drivers implement the input-output protocol of each instrument. Automatic parsing and calibration take place on the fly, followed by event detection and quality control. All raw data are stored in a file archive, while the processed data are copied to fast databases. Interactive access to processed data is provided through data download and visualization/quick-look features that are adapted to diverse data types (scalar, acoustic, video, multi-dimensional, etc.). Data may be post-processed or re-processed to add features or analyses, correct errors, update calibrations, etc. A robust storage structure has been developed, consisting of an extensive file system and a NoSQL database (Cassandra). Cassandra is a node-based, open-source, distributed database management system. It is scalable and offers improved performance for big data. A key feature is data summarization. The system has also been integrated with web services and an ERDDAP OPeNDAP server capable of serving scalar and multidimensional data from Cassandra to fixed or mobile devices. A complex data viewer has been developed that makes use of the big-data capability to interactively display live or historic echo sounder and acoustic Doppler current profiler data, where users can scroll, apply processing filters, and zoom through gigabytes of data with simple interactions. This new technology brings scientists one step closer to a comprehensive, web-based data analysis environment in which visual assessment, filtering, event detection, and annotation can be integrated.
Botsis, T.; Woo, E. J.; Ball, R.
2013-01-01
Background: We previously demonstrated that a general purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. Objective: To evaluate the ability of VaeTM to classify reports to the Vaccine Adverse Event Reporting System (VAERS) of possible Guillain-Barré Syndrome (GBS). Methods: We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. Then, we applied the Brighton Collaboration (BC) case definition for GBS, and an information retrieval strategy (i.e. the vector space model) to quantify the specific information that is included in the key features extracted by VaeTM and compared it with the encoded information that is already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. Results: MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to the lack of extraction by VaeTM of specific laboratory results that are included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. Conclusion: For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but are preferred if specificity is the priority. PMID:23650490
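The vector-space-model step can be pictured with a few lines of scikit-learn: reports and a case-definition text are embedded as TF-IDF vectors and ranked by cosine similarity. The texts below are invented placeholders, not VAERS reports or the Brighton Collaboration definition.

    # Sketch of vector-space-model scoring of report texts against a definition.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    case_definition = "bilateral flaccid limb weakness decreased reflexes monophasic"
    reports = [
        "patient developed bilateral leg weakness and absent reflexes",
        "fever and local injection site swelling resolved in two days",
    ]

    vec = TfidfVectorizer()
    tfidf = vec.fit_transform([case_definition] + reports)
    scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    for text, s in zip(reports, scores):
        print(f"{s:.2f}  {text}")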
Cost-effectiveness of Rotavirus vaccination in Vietnam
Kim, Sun-Young; Goldie, Sue J; Salomon, Joshua A
2009-01-01
Background Rotavirus is the most common cause of severe diarrhea leading to hospitalization or disease-specific death among young children. New rotavirus vaccines have recently been approved. Some previous studies have provided broad qualitative insights into the health and economic consequences of introducing the vaccines into low-income countries, representing several features of rotavirus infection, such as varying degrees of severity and age-dependency of clinical manifestation, in their model-based analyses. We extend this work to reflect additional features of rotavirus (e.g., the possibility of reinfection and varying degrees of partial immunity conferred by natural infection), and assess the influence of the features on the cost-effectiveness of rotavirus vaccination. Methods We developed a Markov model that reflects key features of rotavirus infection, using the most recent data available. We applied the model to the 2004 Vietnamese birth cohort and re-evaluated the cost-effectiveness (2004 US dollars per disability-adjusted life year [DALY]) of rotavirus vaccination (Rotarix®) compared to no vaccination, from both societal and health care system perspectives. We conducted univariate sensitivity analyses and also performed a probabilistic sensitivity analysis, based on Monte Carlo simulations drawing parameter values from the distributions assigned to key uncertain parameters. Results Rotavirus vaccination would not completely protect young children against rotavirus infection due to the partial nature of vaccine immunity, but would effectively reduce severe cases of rotavirus gastroenteritis (outpatient visits, hospitalizations, or deaths) by about 67% over the first 5 years of life. Under base-case assumptions (94% coverage and $5 per dose), the incremental cost per DALY averted from vaccination compared to no vaccination would be $540 from the societal perspective and $550 from the health care system perspective. Conclusion Introducing rotavirus vaccines would be a cost-effective public health intervention in Vietnam. However, given the uncertainty about vaccine efficacy and potential changes in rotavirus epidemiology in local settings, further clinical research and re-evaluation of rotavirus vaccination programs may be necessary as new information emerges. PMID:19159483
Optimal Dynamic Sub-Threshold Technique for Extreme Low Power Consumption for VLSI
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2012-01-01
For the miniaturization of electronics systems, power consumption plays a key role among the constraints. From the very large scale integration (VLSI) design perspective, as transistor feature size decreases to 50 nm and below, there is a sizable increase in the number of transistors as more functional building blocks are embedded in the same chip. However, the consequent increase in power consumption (dynamic and leakage) serves as a key constraint that inhibits the advantages of transistor feature size reduction. Power consumption can be reduced by minimizing the supply voltage (for dynamic power consumption) and/or increasing the threshold voltage (V(sub th), for reducing leakage power). When the feature size of the transistor is reduced, the supply voltage (V(sub dd)) and threshold voltage (V(sub th)) are also reduced accordingly; the leakage current then becomes a bigger factor in the total power consumption. To maintain low power consumption, operating electronics at sub-threshold levels can be a potentially strong contender; however, two obstacles must be faced: more leakage current per transistor causes more leakage power consumption, and the response time is slow when the transistor is operated in the weak inversion region. To enable low power consumption and yet obtain high performance, the CMOS (complementary metal oxide semiconductor) transistor as a basic element is viewed and controlled as a four-terminal device: source, drain, gate, and body, as differentiated from the traditional approach with three terminals: source and body, drain, and gate. This technique features multiple voltage sources to supply the dynamic control, and uses dynamic control to enable a low threshold voltage when the channel (N or P) is active, for speed enhancement, and a high threshold voltage when the channel (N or P) is inactive, to reduce the leakage current for low leakage power consumption.
Towards an international taxonomy of integrated primary care: a Delphi consensus approach.
Valentijn, Pim P; Vrijhoef, Hubertus J M; Ruwaard, Dirk; Boesveld, Inge; Arends, Rosa Y; Bruijnzeels, Marc A
2015-05-22
Developing integrated service models in a primary care setting is considered an essential strategy for establishing a sustainable and affordable health care system. The Rainbow Model of Integrated Care (RMIC) describes the theoretical foundations of integrated primary care. The aim of this study is to refine the RMIC by developing a consensus-based taxonomy of key features. First, the appropriateness of previously identified key features was retested by conducting an international Delphi study that was built on the results of a previous national Delphi study. Second, categorisation of the features among the RMIC integrated care domains was assessed in a second international Delphi study. Finally, a taxonomy was constructed by the researchers based on the results of the three Delphi studies. The final taxonomy consists of 21 key features distributed over eight integration domains which are organised into three main categories: scope (person-focused vs. population-based), type (clinical, professional, organisational and system) and enablers (functional vs. normative) of an integrated primary care service model. The taxonomy provides a crucial differentiation that clarifies and supports implementation, policy formulation and research regarding the organisation of integrated primary care. Further research is needed to develop instruments based on the taxonomy that can reveal the realm of integrated primary care in practice.
Image feature extraction based on the camouflage effectiveness evaluation
NASA Astrophysics Data System (ADS)
Yuan, Xin; Lv, Xuliang; Li, Ling; Wang, Xinzhu; Zhang, Zhi
2018-04-01
The key step in camouflage effectiveness evaluation is combining human visual physiological and psychological features to select effective evaluation indexes. Building on previous comprehensive camouflage evaluation methods, this paper selects suitable indexes by incorporating image quality awareness and optimizes those indexes according to human subjective perception, thereby refining the theory of index extraction.
Broughton, Mary C.; Davidson, Jane W.
2016-01-01
Musicians' expressive bodily movements can influence observers' perception of performance. Furthermore, individual differences in observers' music and motor expertise can shape how they perceive and respond to music performance. However, few studies have investigated the bodily movements that different observers of music performance perceive as expressive, in order to understand how they might relate to the music being produced, and the particular instrument type. In this paper, we focus on marimba performance through two case studies—one solo and one collaborative context. This study aims to investigate the existence of a core repertoire of marimba performance expressive bodily movements, identify key music-related features associated with the core repertoire, and explore how observers' perception of expressive bodily movements might vary according to individual differences in their music and motor expertise. Of the six professional musicians who observed and analyzed the marimba performances, three were percussionists and experienced marimba players. Following training, observers implemented the Laban effort-shape movement analysis system to analyze marimba players' bodily movements that they perceived as expressive in audio-visual recordings of performance. Observations that were agreed by all participants as being the same type of action at the same location in the performance recording were examined in each case study, then across the two studies. A small repertoire of bodily movements emerged that the observers perceived as being expressive. Movements were primarily allied to elements of the music structure, technique, and expressive interpretation, however, these elements appeared to be interactive. A type of body sway movement and more localized sound generating actions were perceived as expressive. These movements co-occurred and also appeared separately. Individual participant data revealed slightly more variety in the types and locations of actions observed, with judges revealing preferences for observing particular types of expressive bodily movements. The particular expressive bodily movements that are produced and perceived in marimba performance appear to be shaped by music-related and sound generating features, musical context, and observer music and motor expertise. With an understanding of bodily movements that are generated and perceived as expressive, embodied music performance training programs might be developed to enhance expressive performer-audience communication. PMID:27630585
MonoSLAM: real-time single camera SLAM.
Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier
2007-06-01
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
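MonoSLAM maintains the camera and landmark states in an extended Kalman filter. The fragment below is a heavily simplified sketch, assuming a 1-D constant-velocity camera state and a single directly observed landmark; the measurement model here is linear, so the update reduces to the standard Kalman form, whereas the real system uses a full 3-D pose, quaternion orientation, and projective measurements.

```python
import numpy as np

dt = 1.0 / 30.0                      # 30 Hz frame rate
# State: [camera position, camera velocity, landmark position] (1-D toy example)
x = np.array([0.0, 0.0, 5.0])
P = np.diag([0.1, 0.1, 1.0])         # state covariance

F = np.array([[1, dt, 0],            # constant-velocity (smooth motion) model
              [0, 1,  0],
              [0, 0,  1]])
Q = np.diag([1e-4, 1e-3, 0.0])       # process noise

H = np.array([[-1.0, 0.0, 1.0]])     # measurement: landmark position relative to camera
R = np.array([[1e-2]])               # measurement noise

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z of (landmark - camera)
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

for z in [4.9, 4.95, 4.8, 4.85]:     # noisy relative observations
    x, P = kf_step(x, P, np.array([z]))
print("state estimate:", x)
```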
Williams, A Mark; Hodges, Nicola J; North, Jamie S; Barton, Gabor
2006-01-01
The perceptual-cognitive information used to support pattern-recognition skill in soccer was examined. In experiment 1, skilled players were quicker and more accurate than less-skilled players at recognising familiar and unfamiliar soccer action sequences presented on film. In experiment 2, these action sequences were converted into point-light displays, with superficial display features removed and the positions of players and the relational information between them made more salient. Skilled players were more accurate than less-skilled players in recognising sequences presented in point-light form, implying that each pattern of play can be defined by the unique relations between players. In experiment 3, various offensive and defensive players were occluded for the duration of each trial in an attempt to identify the most important sources of information underpinning successful performance. A decrease in response accuracy was observed under occluded compared with non-occluded conditions and the expertise effect was no longer observed. The relational information between certain key players, team-mates and their defensive counterparts may provide the essential information for effective pattern-recognition skill in soccer. Structural feature analysis, temporal phase relations, and knowledge-based information are effectively integrated to facilitate pattern recognition in dynamic sport tasks.
Enhancing business intelligence by means of suggestive reviews.
Qazi, Atika; Raj, Ram Gopal; Tahir, Muhammad; Cambria, Erik; Syed, Karim Bux Shah
2014-01-01
Appropriate identification and classification of online reviews to satisfy the needs of current and potential users pose a critical challenge for the business environment. This paper focuses on a specific kind of reviews: the suggestive type. Suggestions have a significant influence on both consumers' choices and designers' understanding and, hence, they are key for tasks such as brand positioning and social media marketing. The proposed approach consists of three main steps: (1) classify comparative and suggestive sentences; (2) categorize suggestive sentences into different types, either explicit or implicit locutions; (3) perform sentiment analysis on the classified reviews. A range of supervised machine learning approaches and feature sets are evaluated to tackle the problem of suggestive opinion mining. Experimental results for all three tasks are obtained on a dataset of mobile phone reviews and demonstrate that extending a bag-of-words representation with suggestive and comparative patterns is ideal for distinguishing suggestive sentences. In particular, it is observed that classifying suggestive sentences into implicit and explicit locutions works best when using a mixed sequential rule feature representation. Sentiment analysis achieves maximum performance when employing additional preprocessing in the form of negation handling and target masking, combined with sentiment lexicons.
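A minimal sketch of step (1), classifying sentences as suggestive or not by augmenting a bag-of-words representation with hand-crafted suggestive/comparative pattern flags. The tiny training set, pattern list and choice of logistic regression are illustrative assumptions, not the authors' dataset or feature set.

```python
import re
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled sentences (1 = suggestive, 0 = not); illustrative only
sentences = [
    "I suggest adding a better camera to this phone",
    "The battery should last longer than one day",
    "Why not include a faster charger in the box",
    "The screen is bright and the sound is clear",
    "I bought this phone last week for my daughter",
    "Battery life is better than my old phone",
]
labels = [1, 1, 1, 0, 0, 0]

# Hand-crafted suggestive/comparative patterns (illustrative placeholders)
patterns = [r"\bsuggest\w*\b", r"\bshould\b", r"\bwhy not\b", r"\bbetter than\b"]

def pattern_flags(texts):
    """One binary feature per pattern, appended to the bag-of-words."""
    return csr_matrix([[int(bool(re.search(p, t, re.I))) for p in patterns]
                       for t in texts])

vec = CountVectorizer()
X = hstack([vec.fit_transform(sentences), pattern_flags(sentences)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = ["You should add a second SIM slot"]
X_test = hstack([vec.transform(test), pattern_flags(test)])
print(clf.predict(X_test))   # likely [1] on this toy data
```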
Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana
2018-01-01
A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for the detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for investigating how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the knowledge of the time structure is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information improves the baseline detection performance by more than 60%.
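A minimal sketch of the two stages described above, assuming a single alarm class with a known tonal frequency: per-frame spectral energy around that frequency serves as the feature, and a temporal post-processing rule requires a minimum run of consecutive detected frames. The alarm frequency, thresholds, and synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

FS = 16000                      # sampling rate (Hz)
ALARM_F = 2000.0                # assumed alarm tone frequency (illustrative)
MIN_RUN = 5                     # post-processing: minimum consecutive frames

# Synthetic test signal: noise with an alarm burst in the middle
t = np.arange(0, 3.0, 1 / FS)
x = 0.05 * np.random.randn(t.size)
burst = (t > 1.0) & (t < 1.6)
x[burst] += 0.5 * np.sin(2 * np.pi * ALARM_F * t[burst])

# Feature extraction: relative energy in a narrow band around the alarm frequency
f, frames_t, Z = stft(x, fs=FS, nperseg=512, noverlap=256)
band = (f > ALARM_F - 100) & (f < ALARM_F + 100)
band_ratio = np.abs(Z[band]).sum(axis=0) / (np.abs(Z).sum(axis=0) + 1e-12)
frame_detect = band_ratio > 0.3          # frame-level decisions

# Post-processing: keep only runs of at least MIN_RUN consecutive frames
detect = np.zeros_like(frame_detect)
run_start = None
for i, d in enumerate(np.append(frame_detect, False)):
    if d and run_start is None:
        run_start = i
    elif not d and run_start is not None:
        if i - run_start >= MIN_RUN:
            detect[run_start:i] = True
        run_start = None

print("alarm present between %.2f s and %.2f s"
      % (frames_t[detect].min(), frames_t[detect].max()))
```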
Britton, P N; Eastwood, K; Paterson, B; Durrheim, D N; Dale, R C; Cheng, A C; Kenedi, C; Brew, B J; Burrow, J; Nagree, Y; Leman, P; Smith, D W; Read, K; Booy, R; Jones, C A
2015-05-01
Encephalitis is a complex neurological syndrome caused by inflammation of the brain parenchyma. The management of encephalitis is challenging because: the differential diagnosis of encephalopathy is broad; there is often rapid disease progression; it often requires intensive supportive management; and there are many aetiologic agents for which there is no definitive treatment. Patients with possible meningoencephalitis are often encountered in the emergency care environment where clinicians must consider differential diagnoses, perform appropriate investigations and initiate empiric antimicrobials. For patients who require admission to hospital and in whom encephalitis is likely, a staged approach to investigation and management is preferred with the potential involvement of multiple medical specialties. Key considerations in the investigation and management of patients with encephalitis addressed in this guideline include: Which first-line investigations should be performed?; Which aetiologies should be considered possible based on clinical features, risk factors and radiological features?; What tests should be arranged in order to diagnose the common causes of encephalitis?; When to consider empiric antimicrobials and immune modulatory therapies?; and What is the role of brain biopsy? © 2015 Royal Australasian College of Physicians.
A Projection and Density Estimation Method for Knowledge Discovery
Stanski, Adam; Hellwich, Olaf
2012-01-01
A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
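A minimal sketch of the general idea of sidestepping the curse of dimensionality by working in 1-D: project the data onto each coordinate, estimate a 1-D kernel density per projection, and combine the estimates under an independence assumption. This only illustrates 1-D decomposition; it is not the authors' specific model.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Toy 5-D data: independent components with different scales
X = rng.normal(loc=0.0, scale=[1.0, 0.5, 2.0, 1.5, 0.2], size=(2000, 5))

# Fit one 1-D KDE per dimension (all estimation happens in 1-D space)
kdes = [gaussian_kde(X[:, d]) for d in range(X.shape[1])]

def log_density(x):
    """Combine the 1-D estimates; independence is the (flexible) assumption here."""
    return sum(np.log(kde(np.atleast_1d(xd)))[0] for kde, xd in zip(kdes, x))

print("log p at a training sample: %.3f" % log_density(X[0]))
print("log p at the origin:        %.3f" % log_density(np.zeros(5)))
```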
NASA Astrophysics Data System (ADS)
Judd, Jeffrey S.
Changes to the global workforce and technological advancements require graduating high school students to be more autonomous, self-directed, and critical in their thinking. To reflect societal changes, current educational reform has focused on developing more problem-based, collaborative, and student-centered classrooms to promote effective self-regulatory learning strategies, with the goal of helping students adapt to future learning situations and become life-long learners. This study identifies key features that may characterize these "powerful learning environments", which I term "high self-regulating learning environments" for ease of discussion, and examines the environment's role in students' motivation and self-regulatory processes. Using direct observation, surveys, and formal and informal interviews, I identified perceptions, motivations, and self-regulatory strategies of 67 students in my high school chemistry classes as they completed academic tasks in both high and low self-regulating learning environments. With social cognitive theory as a theoretical framework, I then examined how students' beliefs and processes changed after they moved from a low to a high self-regulating learning environment. Analyses revealed that key features such as task meaning, utility, complexity, and control appeared to play a role in promoting positive changes in students' motivation and self-regulation. As embedded cases, I also included four students identified as high self-regulating and four students identified as low self-regulating to examine whether the key features of high and low self-regulating learning environments played a similar role in both groups. Analysis of findings indicates that key features did play a significant role in promoting positive changes in both groups, with high self-regulating students' motivation and self-regulatory strategies generally remaining higher than those of the low self-regulating students; this was the case in both environments. Findings suggest that classroom learning environments and instruction can be modified using variations of these key features to promote specific or various levels of motivation and self-regulatory skill. In this way, educators may tailor their lessons or design their classrooms to better match and develop students' current level of motivation and self-regulation in order to maximize engagement in an academic task.
Sjekavica, Mariela; Haller, Herman; Cerić, Anita
2015-01-01
Building usage is the phase of the building life cycle that is the most time-consuming, the most functional, the most significant with respect to the building's purpose, and often systematically ignored. Maintenance is the set of activities that ensures the planned duration of the facility exploitation phase, in accordance with the requirements for quality maintenance of a large number of important building features as well as other elements inherent to the nature of a facility's life. The aim of the study is to analyse the current state of an organized, planned and comprehensive managerial approach to hospital utilization and maintenance in the Republic of Croatia, based on the case study of the Clinical Hospital Center in Rijeka. The methodology consists of a review of the relevant literature on the theory of facility utilization, maintenance and management in general and of hospital buildings in particular, a presentation of practice in the case study, and a comparison of key performance indicator values obtained through interviews with those that Igal M. Shohet defined in his study through field surveys and statistical analyses. Despite many positive indicators of Clinical Hospital Center Rijeka maintenance, additional research is needed in order to define a more complete national hospital maintenance strategy.
Observation duration analysis for Earth surface features from a Moon-based platform
NASA Astrophysics Data System (ADS)
Ye, Hanlin; Guo, Huadong; Liu, Guang; Ren, Yuanzhen
2018-07-01
Earth System Science is a discipline that performs holistic and comprehensive research on the various components of the Earth. A key issue for Earth monitoring and observation is to enhance the observation duration, i.e., the time intervals during which Earth surface features can be observed by sensors. In this work, we propose to utilise the Moon as an Earth observation platform. Thanks to the long distance between the Earth and the Moon, and the vast space on the lunar surface suitable for sensor installation, this Earth observation platform could have large spatial coverage and long temporal duration, and could perform multi-layer detection of the Earth. The line of sight between a proposed Moon-based platform and the Earth changes with the position on the lunar surface; therefore, in this work, the lunar surface was divided into four regions: one full observation region and three incomplete observation regions. As existing methods are not able to perform global-scale observations, a Boolean matrix method was established to calculate the necessary observation durations from a Moon-based platform. Based on Jet Propulsion Laboratory (JPL) ephemerides and Earth Orientation Parameters (EOP), a formula was developed to describe the geometrical relationship between the Moon-based platform and Earth surface features in a unified spatial coordinate system and a unified time system. In addition, we compared the observation geometries at different positions on the lunar surface, and two parameters that are vital to observation duration calculations were considered. Finally, an analysis method was developed. We found that the observation duration of a given Earth surface feature shows little difference regardless of sensor position within the full observation region. However, the observation duration for sensors in the incomplete observation regions is reduced by at least half. In summary, our results demonstrate the suitability of a Moon-based platform located in the full observation region.
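A minimal geometric sketch of the visibility test underlying such duration calculations: a surface point is observable when the Moon is above its local horizon, i.e. the angle between the point's position vector (the surface normal on a spherical Earth) and the point-to-Moon vector is less than 90 degrees. A circular equatorial lunar orbit replaces the JPL ephemerides and EOP corrections used in the paper; all numbers are illustrative simplifications.

```python
import numpy as np

R_EARTH = 6371.0            # km
D_MOON = 384400.0           # mean Earth-Moon distance, km (circular orbit assumed)
T_MOON = 27.32 * 86400.0    # sidereal lunar period, s
T_EARTH = 86164.0           # sidereal day, s

def surface_point(lat_deg, lon_deg, t):
    """Earth-fixed point expressed in an inertial frame (Earth rotation only)."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg) + 2 * np.pi * t / T_EARTH
    return R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                               np.cos(lat) * np.sin(lon),
                               np.sin(lat)])

def moon_position(t):
    """Moon on a circular equatorial orbit (stand-in for a real ephemeris)."""
    ang = 2 * np.pi * t / T_MOON
    return D_MOON * np.array([np.cos(ang), np.sin(ang), 0.0])

def visible(lat_deg, lon_deg, t):
    p = surface_point(lat_deg, lon_deg, t)
    los = moon_position(t) - p
    return np.dot(p, los) > 0.0        # Moon above the local horizon

# Fraction of one lunar month during which a mid-latitude site can be observed
times = np.arange(0.0, T_MOON, 600.0)  # 10-minute sampling
frac = np.mean([visible(45.0, 0.0, t) for t in times])
print(f"observable fraction of the month: {frac:.2f}")
```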
Stereo-vision system for finger tracking in breast self-examination
NASA Astrophysics Data System (ADS)
Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.
1997-05-01
Early detection of breast cancer, one of the leading causes of death by cancer for women in the US, is key to any strategy designed to reduce breast cancer mortality. Breast self-examination (BSE) is considered the most cost-effective approach available for early breast cancer detection because it is simple and non-invasive, and a large fraction of breast cancers are actually found by patients using this technique today. In BSE, the patient should use a proper search strategy to cover the whole breast region in order to detect all possible tumors. At present there is no objective approach or clinical data to evaluate the effectiveness of a particular BSE strategy. Even if a particular strategy is determined to be the most effective, training women to use it is still difficult because there is no objective way for them to know whether they are doing it correctly. We have developed a system using vision-based motion tracking technology to gather quantitative data about the breast palpation process for analysis of the BSE technique. By tracking the position of the fingers, the system can provide the first objective quantitative data about the BSE process, and thus can improve our knowledge of the technique and help analyze its effectiveness. By visually displaying all the touched-position information to the patient as the BSE is being conducted, the system can provide interactive feedback to the patient and create a prototype for a computer-based BSE training system. We propose to use color features, place them on the fingernails and track these features, because in breast palpation the background is the breast itself, which is similar to the hand in color; this situation can hinder the effectiveness of other features if real-time performance is required. To simplify the feature extraction process, a color transform is utilized instead of RGB values. Although the clinical environment will be well illuminated, normalization of color attributes is applied to compensate for minor changes in illumination. Neighbor search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for the extracted features to avoid any possible false features. After detecting the features in the images, 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 x 120 and an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of the palpation and its documentation. With real-time visual feedback, it can be used to train both patients and new physicians to improve their performance of palpation and thus improve the rate of breast tumor detection.
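A minimal sketch of the two ingredients described above: detecting a colored nail marker via normalized (chromaticity) color rather than raw RGB, and recovering depth from a rectified stereo pair by triangulation from horizontal disparity. The marker color, focal length and baseline are illustrative assumptions, and a real system would add the neighbor search and three-finger topology check.

```python
import numpy as np

def chromaticity(img):
    """Normalize RGB to r,g chromaticity to reduce sensitivity to illumination."""
    s = img.sum(axis=2, keepdims=True) + 1e-6
    return img[..., :2] / s

def find_marker(img, target_rg=(0.55, 0.25), tol=0.08):
    """Return centroid (x, y) of pixels whose chromaticity matches the marker color."""
    rg = chromaticity(img.astype(float))
    mask = np.all(np.abs(rg - np.array(target_rg)) < tol, axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def triangulate(x_left, x_right, focal_px=800.0, baseline_mm=60.0):
    """Depth from horizontal disparity of rectified stereo images: Z = f*B/d."""
    return focal_px * baseline_mm / (x_left - x_right)

# Synthetic stereo pair with a reddish marker patch shifted by a known disparity
left = np.zeros((120, 160, 3)); right = np.zeros((120, 160, 3))
left[50:60, 80:90] = (200, 80, 60)      # marker in left image
right[50:60, 70:80] = (200, 80, 60)     # same marker, 10 px disparity

xl, yl = find_marker(left)
xr, yr = find_marker(right)
print("depth estimate: %.1f mm" % triangulate(xl, xr))   # 800*60/10 = 4800 mm
```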
Global conservation outcomes depend on marine protected areas with five key features.
Edgar, Graham J; Stuart-Smith, Rick D; Willis, Trevor J; Kininmonth, Stuart; Baker, Susan C; Banks, Stuart; Barrett, Neville S; Becerro, Mikel A; Bernard, Anthony T F; Berkhout, Just; Buxton, Colin D; Campbell, Stuart J; Cooper, Antonia T; Davey, Marlene; Edgar, Sophie C; Försterra, Günter; Galván, David E; Irigoyen, Alejo J; Kushner, David J; Moura, Rodrigo; Parnell, P Ed; Shears, Nick T; Soler, German; Strain, Elisabeth M A; Thomson, Russell J
2014-02-13
In line with global targets agreed under the Convention on Biological Diversity, the number of marine protected areas (MPAs) is increasing rapidly, yet socio-economic benefits generated by MPAs remain difficult to predict and under debate. MPAs often fail to reach their full potential as a consequence of factors such as illegal harvesting, regulations that legally allow detrimental harvesting, or emigration of animals outside boundaries because of continuous habitat or inadequate size of reserve. Here we show that the conservation benefits of 87 MPAs investigated worldwide increase exponentially with the accumulation of five key features: no take, well enforced, old (>10 years), large (>100 km(2)), and isolated by deep water or sand. Using effective MPAs with four or five key features as an unfished standard, comparisons of underwater survey data from effective MPAs with predictions based on survey data from fished coasts indicate that total fish biomass has declined about two-thirds from historical baselines as a result of fishing. Effective MPAs also had twice as many large (>250 mm total length) fish species per transect, five times more large fish biomass, and fourteen times more shark biomass than fished areas. Most (59%) of the MPAs studied had only one or two key features and were not ecologically distinguishable from fished sites. Our results show that global conservation targets based on area alone will not optimize protection of marine biodiversity. More emphasis is needed on better MPA design, durable management and compliance to ensure that MPAs achieve their desired conservation value.
NASA Astrophysics Data System (ADS)
Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti
2016-05-01
Pedestrian detection is a key problem in night vision processing, with a dozen applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenges faced by thermal image pedestrian detectors that employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failure to detect cloth-insulated parts. To overcome this setback, this paper employs an algorithm based on the principle of region-growing pursuit tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while also extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but has as yet no reported results in thermal images. It has been used here to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of curvelet transform computation. The classification task is realized through the well-known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit probing and informative comparisons across state-of-the-art features, including deep learning methods, with six standard and in-house databases. With reference to deep learning, our algorithm exhibits comparable performance. More importantly, it has significantly lower requirements in terms of compute power and memory, making it more relevant for deployment in resource-constrained platforms with significant size, weight and power constraints.
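A minimal sketch of the two-stage structure described above (intensity-based ROI extraction followed by feature-based validation), assuming an adaptive mean-plus-k-sigma threshold for candidate regions and a HOG-plus-linear-SVM validator standing in for the curvelet features used in the paper; the synthetic data and all thresholds are illustrative.

```python
import numpy as np
from skimage.feature import hog
from skimage.measure import label, regionprops
from skimage.transform import resize
from sklearn.svm import LinearSVC

def extract_rois(thermal, k=1.5, min_area=50):
    """Candidate ROIs: connected regions warmer than an adaptive mean + k*std threshold."""
    mask = thermal > thermal.mean() + k * thermal.std()
    rois = []
    for region in regionprops(label(mask)):
        if region.area >= min_area:
            r0, c0, r1, c1 = region.bbox
            rois.append(thermal[r0:r1, c0:c1])
    return rois

def roi_features(roi, size=(64, 32)):
    """Fixed-size HOG descriptor (a stand-in here for the paper's curvelet features)."""
    return hog(resize(roi, size), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Toy training windows for the validator (illustrative only; a real validator would
# be trained on labelled pedestrian vs. clutter windows cropped from thermal imagery)
rng = np.random.default_rng(0)
pos = [rng.normal(0.6, 0.1, (64, 32)) for _ in range(20)]
neg = [rng.normal(0.2, 0.1, (64, 32)) for _ in range(20)]
X = np.array([roi_features(w) for w in pos + neg])
y = np.array([1] * 20 + [0] * 20)
clf = LinearSVC().fit(X, y)

# Stage 1 (ROI extraction) + stage 2 (feature-based validation) on a synthetic frame
frame = rng.normal(0.2, 0.05, (240, 320))
frame[100:180, 150:190] += 0.5          # warm, roughly person-sized blob
for roi in extract_rois(frame):
    print("candidate ROI", roi.shape, "->", clf.predict([roi_features(roi)]))
```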
Augmented Reality to Preserve Hidden Vestiges in Historical Cities. a Case Study
NASA Astrophysics Data System (ADS)
Martínez, J. L.; Álvareza, S.; Finat, J.; Delgado, F. J.; Finat, J.
2015-02-01
Mobile devices provide increasingly sophisticated support for enhanced experiences and for understanding the remote past in an interactive way. The use of augmented reality technologies allows the development of mobile applications for indoor exploration of virtually reconstructed archaeological sites. In our work we have built a virtual reconstruction of a Roman Villa from data arising from an urgent partial excavation, which was performed in order to build a car park in the historical city of Valladolid (Spain). In its current state, the archaeological site is covered by an urban garden. Localization and tracking are performed using a combination of GPS and the inertial sensors of the mobile device. In this work we show how to perform an interactive navigation around the 3D virtual model, presenting an interpretation of the way it was. The user experience is enhanced by answering some simple questions and performing minor tasks and puzzles, which are presented with multimedia contents linked to key features of the archaeological site.
Challenges of Future High-End Computing
NASA Technical Reports Server (NTRS)
Bailey, David; Kutler, Paul (Technical Monitor)
1998-01-01
The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.
2013-10-15
Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
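A minimal sketch of why including non-arrival events matters when photon counts per laser shot are modeled as Poisson: averaging only over shots in which a photon was detected biases the estimated signal upward, whereas the maximum-likelihood estimate over all shots (zeros included) recovers the true rate. The numbers and the use of a generic optimizer are illustrative; the paper's estimator also has to separate the Thomson signal from background light.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
TRUE_RATE = 0.3                          # mean photons per laser shot (weak signal)
counts = rng.poisson(TRUE_RATE, size=20000)

# Naive estimate: ignore shots with no detected photon (non-arrival events)
naive = counts[counts > 0].mean()

# Maximum-likelihood estimate including non-arrival events:
# negative log-likelihood of a Poisson model over *all* shots, zeros included
def nll(lam):
    return -(counts * np.log(lam) - lam).sum()   # constant log(k!) terms dropped

mle = minimize_scalar(nll, bounds=(1e-6, 5.0), method="bounded").x

print(f"true rate        : {TRUE_RATE:.3f}")
print(f"naive (k>0 only) : {naive:.3f}")     # biased high
print(f"MLE (all shots)  : {mle:.3f}")       # close to the true rate
```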
Berisha, Visar; Wang, Shuai; LaCross, Amy; Liss, Julie
2015-01-01
Changes in some lexical features of language have been associated with the onset and progression of Alzheimer's disease. Here we describe a method to extract key features from discourse transcripts, which we evaluated on non-scripted news conferences from President Ronald Reagan, who was diagnosed with Alzheimer's disease in 1994, and President George Herbert Walker Bush, who has no known diagnosis of Alzheimer's disease. Key word counts previously associated with cognitive decline in Alzheimer's disease were extracted and regression analyses were conducted. President Reagan showed a significant reduction in the number of unique words over time and a significant increase in conversational fillers and non-specific nouns over time. There was no significant trend in these features for President Bush.
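A minimal sketch of the kind of feature extraction and trend analysis described: per-transcript counts of unique words, conversational fillers and non-specific nouns, followed by an ordinary least-squares trend over time. The word lists and toy transcripts are illustrative placeholders, not the study's actual lexicon or data.

```python
import re
import numpy as np

FILLERS = {"well", "um", "uh", "basically"}
NONSPECIFIC_NOUNS = {"thing", "things", "something", "anything", "stuff"}

def lexical_features(text):
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "unique_per_100": 100 * len(set(words)) / n,
        "fillers_per_100": 100 * sum(w in FILLERS for w in words) / n,
        "nonspecific_per_100": 100 * sum(w in NONSPECIFIC_NOUNS for w in words) / n,
    }

# Toy "news conference" transcripts ordered by year (illustrative only)
transcripts = {
    1981: "We will reduce taxes and we will restore the economy this year.",
    1984: "Well, the thing is, our policy on that issue remains firm and clear.",
    1987: "Well, um, there is something about that thing I cannot recall right now.",
}

years = sorted(transcripts)
for key in ["unique_per_100", "fillers_per_100", "nonspecific_per_100"]:
    values = [lexical_features(transcripts[y])[key] for y in years]
    slope = np.polyfit(years, values, 1)[0]      # simple linear trend per year
    print(f"{key:22s} trend: {slope:+.2f} per year")
```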
Li, Jie; Li, Lei; Liu, Rui; Lin, Hong-sheng
2012-10-01
The features and advantages of Chinese medicine (CM) in comprehensive cancer treatment have been in the spotlight of experts both at home and abroad. However, how to evaluate the effect of CM more objectively, scientifically and systematically remains a key problem for clinical trials, and also a limitation on the development and internationalization of CM oncology. The evolution of the tumor response evaluation system in conventional medicine is becoming increasingly consistent with the features of CM clinical effects, in that both focus on a combination of soft endpoints (e.g., quality of life, clinical benefit) and hard endpoints (e.g., tumor remission rate, time to progression). Although experts have proposed protocols for CM tumor response evaluation criteria and have come to general agreement, divergences still exist regarding the importance, quantification and CM-specific features of the potential endpoints. Thus, establishing a CM-characteristic and widely accepted tumor response evaluation system is the key to promoting the internationalization of CM oncology, and also provides a more convenient and scientific platform for CM international cooperation and communication.
ERIC Educational Resources Information Center
Campbell, Ruth; And Others
1995-01-01
Studied 4- to 10-year-olds' familiarity judgments of peers. Found that, contrary to adults, external facial features were key. Also found that the switch to adult recognition pattern takes place after the ninth year. (ETB)
ERIC Educational Resources Information Center
March, James G.; Weiner, Stephen S.
2003-01-01
Discusses the complex nature of college leadership especially in terms of community colleges. Claims that the central feature of leadership problems is a deep mismatch between the conceptions of individual leaders and key features of the organizations they lead. Concludes that civilization will not survive without civil leaders. (JS)
Analysis of the Source Physics Experiment SPE4 Prime Using State-of-the-Art Parallel Numerical Tools.
NASA Astrophysics Data System (ADS)
Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.
2015-12-01
This work describes a methodology used for large-scale modeling of wave propagation from underground chemical explosions conducted at the Nevada National Security Site (NNSS) in fractured granitic rock. We show that the discrete nature of rock masses as well as the spatial variability in the fabric of rock properties are very important for understanding ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface, we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NNSS as well as historical data from the characterization during the underground nuclear tests conducted at the NNSS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key geologic features specific to fractured media, mainly the joints characterized at the NNSS. We have also explored key features common to both geological environments, such as saturation and topography, and assessed which characteristics most affect the ground motion in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in LLNL's Geodyn-L hydrocode. Simulations were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges for the recently executed SPE4 prime experiment. We have also conducted a comparative study between SPE4 prime and the previous experiments SPE1 and SPE3 to assess similarities and differences and to draw conclusions for designing SPE5.
Toshiba TDF-500 High Resolution Viewing And Analysis System
NASA Astrophysics Data System (ADS)
Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.
1988-06-01
A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40Hz frame and 80Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewer and array processors' instantaneous high memory bandwidth requirement, an ultra fast memory system is used. This memory system has a bandwidth capability of 400MB/sec and a total capacity of 256MB. This bandwidth is more than adequate to support several high resolution CRT's and also the fast processing unit. This fully integrated approach allows effective real time image processing. The integrated design of viewing system, memory system and array processor are key to the imaging system. It is the intention to describe the architecture of the image system in this paper.
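A quick back-of-the-envelope check of the bandwidth claim, assuming 8-bit pixels (an assumption; the article does not state the pixel depth): refreshing a 1024 x 1536 frame buffer at the 40 Hz frame rate needs roughly 63 MB/s, so a 400 MB/s memory system can indeed feed several such displays while leaving headroom for the array processor.

```python
width, height, frame_rate = 1024, 1536, 40   # pixels, pixels, Hz
bytes_per_pixel = 1                          # assumed 8-bit grayscale

per_display = width * height * bytes_per_pixel * frame_rate / 1e6   # MB/s
print(f"per display: {per_display:.1f} MB/s")            # ~62.9 MB/s
print(f"displays within 400 MB/s: {int(400 // per_display)}")
```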
You, Sungyong; Yoo, Seung-Ah; Choi, Susanna; Kim, Ji-Young; Park, Su-Jung; Ji, Jong Dae; Kim, Tae-Hwan; Kim, Ki-Jo; Cho, Chul-Soo; Hwang, Daehee; Kim, Wan-Uk
2014-01-01
Rheumatoid synoviocytes, which consist of fibroblast-like synoviocytes (FLSs) and synovial macrophages (SMs), are crucial for the progression of rheumatoid arthritis (RA). Particularly, FLSs of RA patients (RA-FLSs) exhibit invasive characteristics reminiscent of cancer cells, destroying cartilage and bone. RA-FLSs and SMs originate differently from mesenchymal and myeloid cells, respectively, but share many pathologic functions. However, the molecular signatures and biological networks representing the distinct and shared features of the two cell types are unknown. We performed global transcriptome profiling of FLSs and SMs obtained from RA and osteoarthritis patients. By comparing the transcriptomes, we identified distinct molecular signatures and cellular processes defining invasiveness of RA-FLSs and proinflammatory properties of RA-SMs, respectively. Interestingly, under the interleukin-1β (IL-1β)–stimulated condition, the RA-FLSs newly acquired proinflammatory signature dominant in RA-SMs without losing invasive properties. We next reconstructed a network model that delineates the shared, RA-FLS–dominant (invasive), and RA-SM–dominant (inflammatory) processes. From the network model, we selected 13 genes, including periostin, osteoblast-specific factor (POSTN) and twist basic helix–loop–helix transcription factor 1 (TWIST1), as key regulator candidates responsible for FLS invasiveness. Of note, POSTN and TWIST1 expressions were elevated in independent RA-FLSs and further instigated by IL-1β. Functional assays demonstrated the requirement of POSTN and TWIST1 for migration and invasion of RA-FLSs stimulated with IL-1β. Together, our systems approach to rheumatoid synovitis provides a basis for identifying key regulators responsible for pathological features of RA-FLSs and -SMs, demonstrating how a certain type of cells acquires functional redundancy under chronic inflammatory conditions. PMID:24374632
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D’Amore, Antonio; Feaver, Kristen R.; ...
2014-10-13
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. This paper addresses these issues in two ways. First, using methods of geometric probability, we develop theoretical estimates for the mean linear and areal fiber intersection densities for 2-D fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of 2-D fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of scanning electron microscope images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. Finally, the methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data.
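A minimal sketch of how such estimates might be checked by simulation: generate random line segments (fibers) in a unit square with a prescribed orientation distribution, then count pairwise intersections to obtain an empirical areal intersection density. The segment length, fiber count and Gaussian orientation spread are illustrative assumptions, and no claim is made about matching the paper's closed-form expressions or its random walk generator.

```python
import numpy as np

rng = np.random.default_rng(3)
N_FIBERS, LENGTH = 300, 0.2          # fibers in a unit square, fiber length

# Prescribed orientation distribution: Gaussian around a preferred direction
theta = rng.normal(loc=np.pi / 4, scale=0.3, size=N_FIBERS)
mid = rng.uniform(0, 1, size=(N_FIBERS, 2))
d = 0.5 * LENGTH * np.column_stack([np.cos(theta), np.sin(theta)])
p1, p2 = mid - d, mid + d            # fiber endpoints

def segments_intersect(a1, a2, b1, b2):
    """Standard orientation (cross-product sign) test for 2-D segment intersection."""
    def cross(o, p, q):
        return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    d1, d2 = cross(b1, b2, a1), cross(b1, b2, a2)
    d3, d4 = cross(a1, a2, b1), cross(a1, a2, b2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

hits = sum(segments_intersect(p1[i], p2[i], p1[j], p2[j])
           for i in range(N_FIBERS) for j in range(i + 1, N_FIBERS))

print(f"fiber density (total length per unit area): {N_FIBERS * LENGTH:.1f}")
print(f"empirical areal intersection density      : {hits} per unit area")
```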
Automated visual inspection of brake shoe wear
NASA Astrophysics Data System (ADS)
Lu, Shengfang; Liu, Zhen; Nan, Guo; Zhang, Guangjun
2015-10-01
With the rapid development of high-speed railways, automated fault inspection is necessary to ensure a train's operational safety. Visual technology is receiving more attention in fault detection and maintenance. For a linear CCD camera, image alignment is the first step in fault detection. To increase the speed of image processing, an improved scale invariant feature transform (SIFT) method is presented. The image is divided into multiple levels of different resolution, and features are extracted from the lowest resolution upward until sufficient SIFT key points are obtained; at that level, the image is registered and aligned quickly. In the inspection stage, we focus on detecting faults of the brake shoe, which is one of the key components of the brake system on an electrical multiple unit (EMU) train, and early warning of wear beyond its limit is very important in fault detection. In this paper, we propose an automatic inspection approach to detect brake shoe faults. First, we use multi-resolution pyramid template matching to quickly locate the brake shoe. Then, we employ the Hough transform to detect the circles of the bolts in the brake region. Owing to the rigid structure, we can identify whether the brake shoe has a fault. Experiments demonstrate that the proposed approach performs well and can meet the needs of practical applications.
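A minimal sketch of the localisation-and-bolt-detection part of such a pipeline using OpenCV: coarse matching on a downsampled pyramid level to locate the brake-shoe region, followed by a Hough circle transform to find the bolt circles inside it. The synthetic image, template size, and Hough parameters are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Synthetic grayscale frame: a bright rectangular "brake shoe" with two dark "bolts"
frame = np.full((480, 640), 40, np.uint8)
cv2.rectangle(frame, (300, 200), (420, 300), 180, -1)
cv2.circle(frame, (330, 250), 12, 60, -1)
cv2.circle(frame, (390, 250), 12, 60, -1)
template = frame[200:300, 300:420].copy()

# Coarse-to-fine localisation: match on a downsampled pyramid level first
small_frame, small_tmpl = cv2.pyrDown(frame), cv2.pyrDown(template)
res = cv2.matchTemplate(small_frame, small_tmpl, cv2.TM_CCOEFF_NORMED)
_, _, _, loc = cv2.minMaxLoc(res)
x, y = loc[0] * 2, loc[1] * 2                     # map back to full resolution
h, w = template.shape
roi = frame[y:y + h, x:x + w]

# Bolt detection inside the located region via the Hough circle transform
circles = cv2.HoughCircles(cv2.medianBlur(roi, 5), cv2.HOUGH_GRADIENT,
                           dp=1, minDist=30, param1=80, param2=15,
                           minRadius=8, maxRadius=20)
n_bolts = 0 if circles is None else circles.shape[1]
print(f"brake shoe located at ({x}, {y}); bolt circles detected: {n_bolts}")
```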