Modeling Operations Other Than War: Non-Combatants in Combat Modeling
1994-09-01
supposition that non-combatants are an essential feature in OOTW. The model proposal includes a methodology for civilian unit decision making. This model also includes... A numerical example demonstrated that the model appeared to perform in an acceptable manner, in that it produced output within a reasonable range.
Interactive classification and content-based retrieval of tissue images
NASA Astrophysics Data System (ADS)
Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof
2002-11-01
We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region, and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.
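As a rough sketch of the clustering step described above (not the authors' code), the snippet below learns prototype-like regions with a Gaussian mixture, one common form of model-based clustering, selects the number of prototypes by BIC, and scores a new region by its posterior memberships; the feature matrix and every name in it are synthetic stand-ins.

```python
# Sketch: learn "prototype regions" by model-based clustering of region
# feature vectors, then score a new region against each prototype.
# Assumes region features (shape + pixel-feature statistics) are already
# extracted; all data here are synthetic stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
region_features = rng.normal(size=(500, 8))  # stand-in for real region features

# Model-based clustering: pick the number of prototypes by BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(region_features)
          for k in range(2, 8)]
best = min(models, key=lambda m: m.bic(region_features))

# Each mixture component acts as one prototype region; a new region gets
# a fuzzy membership via posterior probabilities.
new_region = rng.normal(size=(1, 8))
memberships = best.predict_proba(new_region)
print("prototypes:", best.n_components, "memberships:", memberships.round(3))
```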
Image-Based 3D Face Modeling System
NASA Astrophysics Data System (ADS)
Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir
2005-12-01
This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2-3 minutes.
Numerous features have been included to facilitate the modeling process, from model setup and data input, presentation and analysis of results, to easy export of results to spreadsheet programs for additional analysis.
Examining, Documenting, and Modeling the Problem Space of a Variable Domain
2002-06-14
Feature-Oriented Domain Analysis (FODA) ... development of this proposed process include: Feature-Oriented Domain Analysis (FODA) [3,4], Organization Domain Modeling (ODM) [2,5,6], Family-Oriented... configuration knowledge using generators [2]. Existing Methods of Domain Engineering: Feature-Oriented Domain Analysis (FODA). FODA is a domain
Characterizing mammographic images by using generic texture features
2012-01-01
Introduction Although mammographic density is an established risk factor for breast cancer, its use is limited in clinical practice because of a lack of automated and standardized measurement methods. The aims of this study were to evaluate a variety of automated texture features in mammograms as risk factors for breast cancer and to compare them with the percentage mammographic density (PMD) by using a case-control study design. Methods A case-control study including 864 cases and 418 controls was analyzed automatically. Four hundred seventy features were explored as possible risk factors for breast cancer. These included statistical features, moment-based features, spectral-energy features, and form-based features. An elaborate variable selection process using logistic regression analyses was performed to identify those features that were associated with case-control status. In addition, PMD was assessed and included in the regression model. Results Of the 470 image-analysis features explored, 46 remained in the final logistic regression model. An area under the curve of 0.79, with an odds ratio per standard deviation change of 2.88 (95% CI, 2.28 to 3.65), was obtained with validation data. Adding the PMD did not improve the final model. Conclusions Using texture features to predict the risk of breast cancer appears feasible. PMD did not show any additional value in this study. With regard to the features assessed, most of the analysis tools appeared to reflect mammographic density, although some features did not correlate with PMD. It remains to be investigated in larger case-control studies whether these features can contribute to increased prediction accuracy. PMID:22490545
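The elaborate stepwise selection used in the study is not reproduced here, but the sketch below illustrates the same pattern under simplifying assumptions: L1-penalized logistic regression over standardized texture features, a held-out validation AUC, and an odds ratio per standard deviation of the resulting risk score. All data are synthetic.

```python
# Sketch of a variable-selection + validation loop of the kind described
# above; data and the toy outcome are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(1282, 470))   # 864 cases + 418 controls, 470 features
y = (X[:, :5].sum(axis=1) + rng.normal(size=1282)) > 0   # toy outcome

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
clf.fit(scaler.transform(X_tr), y_tr)

score = clf.decision_function(scaler.transform(X_val))
print("selected features:", int((clf.coef_ != 0).sum()))
print("validation AUC:", round(roc_auc_score(y_val, score), 3))

# Odds ratio per SD of the risk score, from a univariate refit:
z = (score[:, None] - score.mean()) / score.std()
refit = LogisticRegression().fit(z, y_val)
print("OR per SD:", round(float(np.exp(refit.coef_[0, 0])), 2))
```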
Saliency image of feature building for image quality assessment
NASA Astrophysics Data System (ADS)
Ju, Xinuo; Sun, Jiyin; Wang, Peng
2011-11-01
The purpose and method of image quality assessment are quite different for automatic target recognition (ATR) and traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors, and region detectors, are widely applied for ATR. A saliency model of features was proposed in this paper to evaluate the feasibility of ATR. The first step consists of computing the first-order derivatives in the horizontal and vertical orientations, and computing DoG maps at different scales. Next, feature-saliency images are built from the auto-correlation matrix at each scale. Then, the feature-saliency images of the different scales are amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.
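A minimal sketch of the described pipeline, assuming a grayscale input: first-order derivatives plus difference-of-Gaussian (DoG) maps at several scales, fused into one feature-saliency image. The scale list and fusion-by-summation are illustrative choices, not taken from the paper.

```python
# Multi-scale feature saliency: gradients + DoG maps, fused by summation.
import numpy as np
from scipy import ndimage

def feature_saliency(img, sigmas=(1, 2, 4, 8)):
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal first-order derivative
    gy = ndimage.sobel(img, axis=0)   # vertical first-order derivative
    saliency = np.hypot(gx, gy)
    for s in sigmas:                  # DoG maps at several scales
        dog = ndimage.gaussian_filter(img, s) - ndimage.gaussian_filter(img, 1.6 * s)
        saliency += np.abs(dog)      # amalgamate scales by summation
    return saliency / saliency.max()

demo = np.zeros((64, 64)); demo[24:40, 24:40] = 1.0   # toy "feature region"
print(feature_saliency(demo).shape)
```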
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hao; Tan, Shan; Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan
2014-01-01
Purpose: To construct predictive models using comprehensive tumor features for the evaluation of tumor response to neoadjuvant chemoradiation therapy (CRT) in patients with esophageal cancer. Methods and Materials: This study included 20 patients who underwent trimodality therapy (CRT + surgery) and underwent 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) both before and after CRT. Four groups of tumor features were examined: (1) conventional PET/CT response measures (eg, standardized uptake value [SUVmax], tumor diameter); (2) clinical parameters (eg, TNM stage, histology) and demographics; (3) spatial-temporal PET features, which characterize tumor SUV intensity distribution, spatial patterns, geometry, and associated changes resulting from CRT; and (4) all features combined. An optimal feature set was identified with recursive feature selection and cross-validations. Support vector machine (SVM) and logistic regression (LR) models were constructed for prediction of pathologic tumor response to CRT, cross-validations being used to avoid model overfitting. Prediction accuracy was assessed by area under the receiver operating characteristic curve (AUC), and precision was evaluated by confidence intervals (CIs) of AUC. Results: When applied to the 4 groups of tumor features, the LR model achieved AUCs (95% CI) of 0.57 (0.10), 0.73 (0.07), 0.90 (0.06), and 0.90 (0.06). The SVM model achieved AUCs (95% CI) of 0.56 (0.07), 0.60 (0.06), 0.94 (0.02), and 1.00 (no misclassifications). With the use of spatial-temporal PET features combined with conventional PET/CT measures and clinical parameters, the SVM model achieved very high accuracy (AUC 1.00) and precision (no misclassifications), results that were significantly better than when conventional PET/CT measures or clinical parameters and demographics alone were used. For groups with many tumor features (groups 3 and 4), the SVM model achieved significantly higher accuracy than did the LR model. Conclusions: The SVM model that used all features including spatial-temporal PET features accurately and precisely predicted pathologic tumor response to CRT in esophageal cancer.
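As an illustration of the feature-selection and validation machinery named above, the sketch below runs recursive feature elimination with cross-validation (RFECV) around a linear SVM and reports a cross-validated AUC; the 20-patient feature matrix is a synthetic stand-in.

```python
# Recursive feature selection + cross-validated AUC for a linear SVM.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 40))           # 20 patients, 40 candidate features
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=20)) > 0

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
selector = RFECV(SVC(kernel="linear"), step=1, cv=cv, scoring="roc_auc")
selector.fit(X, y)
print("optimal feature count:", selector.n_features_)

auc = cross_val_score(SVC(kernel="linear"), X[:, selector.support_], y,
                      cv=cv, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
```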
Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements
NASA Astrophysics Data System (ADS)
Bakker, M.
2017-12-01
Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady state and transient flow in layered aquifers and unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage, horizontal wells of poly-line shape including skin effect, sharp changes in subsurface properties, and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
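A toy flavor of the gridless idea, assuming steady confined flow: the head field below is a superposition of closed-form elements (uniform flow plus Thiem wells), evaluable anywhere without a mesh. All parameter values are invented.

```python
# Analytic-element-style superposition: uniform flow + two Thiem wells.
import numpy as np

T = 100.0                                 # transmissivity [m^2/d]
wells = [(0.0, 0.0, 300.0), (200.0, 50.0, -150.0)]  # (x, y, discharge Q)
R = 1000.0                                # reference radius where drawdown = 0
h0 = 20.0

def head(x, y):
    h = h0 - 0.002 * x                    # uniform background flow element
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        h -= Q / (2 * np.pi * T) * np.log(R / np.maximum(r, 0.1))
    return h

# Evaluate anywhere, at any resolution, directly from the vector elements:
print(round(float(head(50.0, 25.0)), 3))
```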
Jacob, Gitta A; Ower, Nicole; Buchholz, Angela
2013-03-01
Experiential avoidance (EA) is an important factor in maintaining different forms of psychopathology, including borderline personality pathology (BPD). So far, little is known about the functions of EA, BPD features, and general psychopathology for positive emotions. In this study we investigated three different anticipated pathways of their influence on positive emotions. A total of 334 subjects varying in general psychopathology and/or BPD features completed an online survey including self-ratings of BPD features, psychopathology, negative and positive emotions, and EA. Measures of positive emotions included both a general self-rating (PANAS) and emotional changes induced by two positive movie clips. Data were analyzed by means of path analysis. In comparing the three path models, one model was found clearly superior: in this model, EA acts as a mediator of the influence of psychopathology, BPD features, and negative emotions in the prediction of both measures of positive emotions. EA plays a central role in maintaining a lack of positive emotions. Therapeutic implications and study limitations are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Urban topography for flood modeling by fusion of OpenStreetMap, SRTM and local knowledge
NASA Astrophysics Data System (ADS)
Winsemius, Hessel; Donchyts, Gennadii; Eilander, Dirk; Chen, Jorik; Leskens, Anne; Coughlan, Erin; Mawanda, Shaban; Ward, Philip; Diaz Loaiza, Andres; Luo, Tianyi; Iceland, Charles
2016-04-01
Topography data is essential for understanding and modeling urban flood hazard. Within urban areas, much of the topography is defined by highly localized man-made features such as roads, channels, ditches, culverts and buildings. As a result, urban flood models require high-resolution topography in which water-conveying connections are represented. In recent years, more and more topography information has been collected through LIDAR surveys; however, there are still many cities in the world where high-resolution topography data is not available. Furthermore, information on connectivity is required for flood modelling even when LIDAR data are used. In this contribution, we demonstrate how high-resolution terrain data can be synthesized by fusing features in OpenStreetMap (OSM) data (including roads, culverts, channels and buildings) with existing low-resolution and noisy SRTM elevation data on the Google Earth Engine platform. Our method uses typical existing OSM properties to estimate heights and topology associated with the features, and uses these to correct noise and burn features on top of the existing low-resolution SRTM elevation data. The method has been set up in the Google Earth Engine platform so that local stakeholders and mapping teams can on-the-fly propose, include and visualize the effect of additional features and properties of features which are deemed important for topography and water conveyance. These features can be included in a workshop environment. We pilot our tool over Dar es Salaam.
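The actual workflow runs on Google Earth Engine and is not reproduced here; the sketch below only illustrates the fusion concept on a toy raster, assuming OSM features have already been rasterized to masks with estimated heights.

```python
# Conceptual DEM fusion: denoise coarse SRTM, raise buildings, burn drains.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
srtm = 10.0 + rng.normal(scale=2.0, size=(100, 100))   # noisy, coarse DEM

# Denoise the raw SRTM surface before burning features.
dem = ndimage.median_filter(srtm, size=5)

buildings = np.zeros_like(dem, bool); buildings[20:30, 20:35] = True
drains = np.zeros_like(dem, bool);    drains[:, 50] = True

dem = np.where(buildings, dem + 6.0, dem)   # raise buildings (est. height)
dem = np.where(drains, dem - 1.5, dem)      # burn in drainage connectivity
print(dem.shape, round(float(dem[25, 25] - dem[25, 50]), 2))
```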
Feature Modeling of HfO2 Atomic Layer Deposition Using HfCl4/H2O
NASA Astrophysics Data System (ADS)
Stout, Phillip J.; Adams, Vance; Ventzek, Peter L. G.
2003-03-01
A Monte Carlo based feature scale model (Papaya) has been applied to atomic layer deposition (ALD) of HfO2 using HfCl4/H2O. The model includes the physical effects of transport to the surface, specular and diffusive reflection within the feature, adsorption, surface diffusion, deposition and etching. Discussed will be the 3D feature modeling of HfO2 deposition in assorted features (vias and trenches). The effect of feature aspect ratios, pulse times, cycle number, and temperature on film thickness, feature coverage, and film Cl fraction (surface/bulk) will be discussed. Differences between HfO2 ALD on blanket wafers and in features will be highlighted. For instance, the minimum pulse times sufficient for surface reaction saturation on blanket wafers need to be increased when depositing on features. Also, HCl products created during the HfCl4 and H2O pulses are more likely to react within a feature than at the field, reducing OH coverage within the feature (vs. blanket wafer) and thus limiting the maximum coverage attainable for a pulse over a feature.
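A deliberately crude exposure model of the pulse-time effect described above, not the Papaya Monte Carlo model itself: self-limiting adsorption with the precursor flux into a feature reduced by aspect ratio, so deeper features need longer pulses to saturate. All constants are illustrative, not fitted to HfO2 data.

```python
# Toy saturation model: coverage = 1 - exp(-s0 * flux * t), with the flux
# into a feature lumped as flux / (1 + aspect_ratio).
import numpy as np

def coverage(pulse_time, aspect_ratio, s0=0.05, flux=50.0):
    eff_flux = flux / (1.0 + aspect_ratio)
    return 1.0 - np.exp(-s0 * eff_flux * pulse_time)

for ar in (0, 5, 20):   # blanket wafer, shallow via, deep trench
    t = np.arange(0.0, 80.0, 0.5)
    t_sat = t[np.argmax(coverage(t, ar) > 0.99)]
    print(f"aspect ratio {ar:2d}: ~{t_sat:.1f} time units to 99% saturation")
```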
Xie, Tian; Chen, Xiao; Fang, Jingqin; Kang, Houyi; Xue, Wei; Tong, Haipeng; Cao, Peng; Wang, Sumei; Yang, Yizeng; Zhang, Weiguo
2018-04-01
Presurgical glioma grading by dynamic contrast-enhanced MRI (DCE-MRI) has unresolved issues. The aim of this study was to investigate the ability of textural features derived from pharmacokinetic model-based or model-free parameter maps of DCE-MRI to discriminate between different grades of gliomas, and their correlation with pathological indices. Retrospective. Forty-two adults with brain gliomas. 3.0T, including conventional anatomic sequences and DCE-MRI sequences (variable flip angle T1-weighted imaging and three-dimensional gradient echo volumetric imaging). Regions of interest on the cross-sectional images with maximal tumor lesion. Five commonly used textural features, including Energy, Entropy, Inertia, Correlation, and Inverse Difference Moment (IDM), were generated. All textural features of model-free parameters (initial area under curve [IAUC], maximal signal intensity [Max SI], maximal up-slope [Max Slope]) could effectively differentiate between grade II (n = 15), grade III (n = 13), and grade IV (n = 14) gliomas (P < 0.05). Two textural features, Entropy and IDM, of four DCE-MRI parameters, including Max SI, Max Slope (model-free parameters), vp (Extended Tofts), and vp (Patlak), could differentiate grade III and IV gliomas (P < 0.01) in four measurements. Both Entropy and IDM of Patlak-based Ktrans and vp could differentiate grade II (n = 15) from grade III (n = 13) gliomas (P < 0.01) in four measurements. No textural features of any DCE-MRI parameter maps could discriminate between subtypes of grade II and III gliomas (P < 0.05). Both Entropy and IDM of Extended Tofts- and Patlak-based vp showed the highest area under curve in discriminating between grade III and IV gliomas. However, the intraclass correlation coefficients (ICC) of these features revealed relatively low inter-observer agreement. No significant correlation was found between microvascular density and textural features, in contrast to a moderate correlation found between the cellular proliferation index and those features. Textural features of DCE-MRI parameter maps displayed a good ability in glioma grading. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1099-1111. © 2017 International Society for Magnetic Resonance in Medicine.
Modeling of the ground-to-SSFMB link networking features using SPW
NASA Technical Reports Server (NTRS)
Watson, John C.
1993-01-01
This report describes the modeling and simulation of the networking features of the ground-to-Space Station Freedom manned base (SSFMB) link using the COMDISCO Signal Processing Worksystem (SPW). The networking features modeled include the implementation of Consultative Committee for Space Data Systems (CCSDS) protocols in the multiplexing of digitized audio and core data into virtual channel data units (VCDU's) in the control center complex and the demultiplexing of VCDU's in the onboard baseband signal processor. The emphasis of this work has been placed on techniques for modeling the CCSDS networking features using SPW. The objectives for developing the SPW models are to test the suitability of SPW for modeling networking features and to develop SPW simulation models of the control center complex and space station baseband signal processor for use in end-to-end testing of the ground-to-SSFMB S-band single access forward (SSAF) link.
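A minimal sketch of the multiplexing idea only (not the CCSDS frame formats or the SPW models themselves): byte streams for different data types are segmented into fixed-length frames tagged with a virtual channel id and a per-channel sequence count.

```python
# Toy virtual-channel multiplexer; field sizes are deliberately simplified.
import itertools

FRAME_DATA_LEN = 8          # toy payload size (real VCDUs are much larger)

def multiplex(streams):
    """streams: dict of virtual-channel id -> bytes. Yields (vcid, seq, payload)."""
    counters = {vcid: itertools.count() for vcid in streams}
    for vcid, data in streams.items():
        for i in range(0, len(data), FRAME_DATA_LEN):
            payload = data[i:i + FRAME_DATA_LEN].ljust(FRAME_DATA_LEN, b"\x00")
            yield vcid, next(counters[vcid]), payload

frames = list(multiplex({1: b"digitized audio.....", 2: b"core data"}))
for vcid, seq, payload in frames:
    print(f"VC{vcid} frame {seq}: {payload}")
# A demultiplexer would route frames back to per-channel buffers by vcid.
```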
SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Iyengar, P
2016-06-15
Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as the objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary computation algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical features are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance can be obtained by combining all features.
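The NSGA-II search itself is not reproduced here; the sketch below conveys the multi-objective idea by brute force on tiny feature subsets, scoring each by cross-validated (sensitivity, specificity) and keeping the nondominated set. Data are synthetic.

```python
# Multi-objective model selection: keep the Pareto front over
# (sensitivity, specificity) instead of a single accuracy number.
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(52, 6)); y = rng.random(52) < 0.25   # imbalanced labels

def sens_spec(cols):
    pred = cross_val_predict(SVC(kernel="linear"), X[:, cols], y, cv=4)
    tp = np.sum(pred & y); tn = np.sum(~pred & ~y)
    return tp / y.sum(), tn / (~y).sum()

scores = {c: sens_spec(c) for c in combinations(range(6), 2)}
pareto = [c for c, s in scores.items()
          if not any(o[0] >= s[0] and o[1] >= s[1] and o != s
                     for o in scores.values())]
print("nondominated feature subsets:", pareto)
```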
Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R
2014-02-01
The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray level co-occurrence texture features, 81 two-dimensional fast Fourier transform texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d, obtained using a digital camera and additional lighting. Steaks were designated degraded or not-degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical analysis (STEPWISE regression model) and artificial neural network (support vector machine model, SVM) methods were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71% accurate, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat. © 2013.
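As a sketch of one feature family named above, the snippet below extracts gray level co-occurrence (GLCM) statistics with scikit-image from a stand-in grayscale patch; the FFT and Gabor features would follow the same extract-then-classify pattern.

```python
# GLCM texture features from a grayscale patch (synthetic stand-in image).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
vector = np.concatenate(list(features.values()))
print("GLCM feature vector length:", vector.size)  # feeds STEPWISE/SVM models
```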
ERIC Educational Resources Information Center
Wu, Pai-Hsing; Wu, Hsin-Kai; Kuo, Che-Yu; Hsu, Ying-Shao
2015-01-01
Computer-based learning tools include design features to enhance learning, but learners may not always perceive the existence of these features and use them in desirable ways. There might be a gap between what the tool features are designed to offer (intended affordance) and how they are actually used (actual affordance). This study thus aims at…
What is a 'good' job? Modelling job quality for blue collar workers.
Jones, Wendy; Haslam, Roger; Haslam, Cheryl
2017-01-01
This paper proposes a model of job quality, developed from interviews with blue collar workers: bus drivers, manufacturing operatives and cleaners (n = 80). The model distinguishes between core features, important for almost all workers, and 'job fit' features, important to some but not others, or where individuals might have different preferences. Core job features found important for almost all interviewees included job security, personal safety and having enough pay to meet their needs. 'Job fit' features included autonomy and the opportunity to form close relationships. These showed more variation between participants; priorities were influenced by family commitments, stage of life and personal preference. The resulting theoretical perspective indicates the features necessary for a job to be considered 'good' by the person doing it, whilst not adversely affecting their health. The model should have utility as a basis for measuring and improving job quality and the laudable goal of creating 'good jobs'. Practitioner Summary: Good work can contribute positively to health and well-being, but there is a lack of agreement regarding the concept of a 'good' job. A model of job quality has been constructed based on semi-structured worker interviews (n = 80). The model emphasises the need to take into account variation between individuals in their preferred work characteristics.
Nuclear Engine System Simulation (NESS). Version 2.0: Program user's guide
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman
1993-01-01
This Program User's Guide discusses the Nuclear Thermal Propulsion (NTP) engine system design features and capabilities modeled in the Nuclear Engine System Simulation (NESS): Version 2.0 program (referred to as NESS throughout the remainder of this document), as well as its operation. NESS was upgraded to include many new modeling capabilities not available in the original version delivered to NASA LeRC in Dec. 1991. NESS's new features include the following: (1) an improved input format; (2) an advanced solid-core NERVA-type reactor system model (ENABLER 2); (3) a bleed-cycle engine system option; (4) an axial-turbopump design option; (5) an automated pump-out turbopump assembly sizing option; (6) an off-design gas generator engine cycle design option; (7) updated hydrogen properties; (8) an improved output format; and (9) personal computer operation capability. Sample design cases are presented in the user's guide that demonstrate many of the new features associated with this upgraded version of NESS, as well as design modeling features associated with the original version of NESS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph C.; McLendon, William Clarence,
2013-01-01
Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
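A toy version of such a graph, assuming networkx as the container: nodes are extracted features, edges carry a semantic relation plus a validity interval, and one helper answers a combined spatial-temporal query. Labels and times are invented.

```python
# Geospatial-temporal semantic graph sketch with a joint query.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("building_17", kind="building")
g.add_node("road_4", kind="road")
g.add_node("vehicle_9", kind="vehicle")
g.add_edge("building_17", "road_4", rel="adjacent_to", t_start=0, t_end=99)
g.add_edge("vehicle_9", "road_4", rel="on", t_start=40, t_end=45)

def query(graph, rel, t):
    """All (u, v) pairs holding relation `rel` at time t."""
    return [(u, v) for u, v, d in graph.edges(data=True)
            if d["rel"] == rel and d["t_start"] <= t <= d["t_end"]]

print(query(g, "on", 42))      # -> [('vehicle_9', 'road_4')]
print(query(g, "on", 80))      # -> [] (edge no longer valid)
```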
Economic Modeling and Analysis of Educational Vouchers
ERIC Educational Resources Information Center
Epple, Dennis; Romano, Richard
2012-01-01
The analysis of educational vouchers has evolved from market-based analogies to models that incorporate distinctive features of the educational environment. These distinctive features include peer effects, scope for private school pricing and admissions based on student characteristics, the linkage of household residential and school choices in…
Salient object detection method based on multiple semantic features
NASA Astrophysics Data System (ADS)
Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei
2018-04-01
Existing salient object detection models can only detect the approximate location of a salient object or mistakenly highlight the background. To resolve this problem, a salient object detection method based on image semantic features was proposed. First, three novel salient features were presented in this paper: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, the multiple salient features were trained with random detection windows. Third, a naive Bayesian model was used to combine these features for salient object detection. Results on public datasets showed that our method performed well: the location of the salient object can be fixed, and the salient object can be accurately detected and marked by the specific window.
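A minimal sketch of the fusion step, assuming the three scalar features (EF, CF, LF) have already been computed per candidate window: a Gaussian naive Bayes classifier turns them into a salience probability. All feature values here are random stand-ins.

```python
# Combine per-window saliency features with a naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(6)
# Rows: candidate detection windows; columns: (EF, CF, LF).
X_train = rng.random((200, 3))
y_train = (X_train.sum(axis=1) + 0.3 * rng.normal(size=200)) > 1.5  # salient?

model = GaussianNB().fit(X_train, y_train)
windows = rng.random((4, 3))
print(model.predict_proba(windows)[:, 1].round(2))  # salience per window
```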
Korakianitis, Theodosios; Shi, Yubing
2006-09-01
Numerical modeling of the human cardiovascular system has been an active research direction since the 19th century. In the past, various simulation models of different complexities were proposed for different research purposes. In this paper, an improved numerical model to study the dynamic function of the human circulation system is proposed. In the development of the mathematical model, the heart chambers are described with a variable elastance model. The systemic and pulmonary loops are described based on the resistance-compliance-inertia concept by considering local effects of flow friction, elasticity of blood vessels, and inertia of blood in different segments of the blood vessels. As an advancement over previous models, heart valve dynamics and atrioventricular interaction, including atrial contraction and motion of the annulus fibrosus, are specifically modeled. With these improvements the developed model can predict several important features that were missing in previous numerical models, including regurgitant flow on heart valve closure, the value of the E/A velocity ratio in mitral flow, and the motion of the annulus fibrosus (the KG diaphragm pumping action). These features have important clinical meaning and their changes are often related to cardiovascular diseases. Successful simulation of these features enhances the accuracy of simulations of cardiovascular dynamics, and helps in clinical studies of cardiac function.
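For orientation, the sketch below integrates a far simpler relative of such lumped-parameter models, a three-element windkessel driven by a prescribed ejection flow; the paper's variable-elastance chambers, valve dynamics, and atrioventricular interaction are not modeled. Parameter values are illustrative.

```python
# 3-element windkessel (R1 - C || R2) driven by a pulsatile inflow.
import numpy as np
from scipy.integrate import solve_ivp

R1, R2, C = 0.05, 1.0, 1.5        # proximal/peripheral resistance, compliance

def q_in(t):                       # ejection flow, one beat per second
    tc = t % 1.0
    return 400.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def dpdt(t, p):                    # pressure stored on the compliance
    return (q_in(t) - p[0] / R2) / C

sol = solve_ivp(dpdt, (0.0, 5.0), [80.0], max_step=0.002)
p_arterial = sol.y[0] + R1 * np.array([q_in(t) for t in sol.t])
print("systolic/diastolic ~ %.0f/%.0f" % (p_arterial.max(), sol.y[0].min()))
```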
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-17
... for transport category airplanes. These design features include an electronic flight control system... Design Features: The GVI has an electronic flight control system and no direct coupling from the cockpit... Gulfstream Model GVI Airplane; Electronic Flight Control System: Control Surface Position Awareness. AGENCY...
Spatial features register: toward standardization of spatial features
Cascio, Janette
1994-01-01
As the need to share spatial data increases, more than agreement on a common format is needed to ensure that the data is meaningful to both the importer and the exporter. Effective data transfer also requires common definitions of spatial features. To achieve this, part 2 of the Spatial Data Transfer Standard (SDTS) provides a model for a spatial features data content specification and a glossary of features and attributes that fit this model. The model provides a foundation for standardizing spatial features. The glossary now contains only a limited subset of hydrographic and topographic features. For it to be useful, terms and definitions must be included for other categories, such as base cartographic, bathymetric, cadastral, cultural and demographic, geodetic, geologic, ground transportation, international boundaries, soils, vegetation, water, and wetlands, and the set of hydrographic and topographic features must be expanded. This paper will review the philosophy of the SDTS part 2 and the current plans for creating a national spatial features register as one mechanism for maintaining part 2.
Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images
NASA Astrophysics Data System (ADS)
Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.
2018-04-01
A novel ship detection method that aims to make full use of both the spatial and spectral information from hyperspectral images is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared spectrum is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal components analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing single-feature and different multiple-feature combinations. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method can stably achieve ship detection under complex backgrounds and can effectively improve detection accuracy.
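A condensed sketch of the detection chain under stated assumptions (synthetic pixel spectra, stand-in texture features): PCA for the spectral part, a stacked feature matrix, and a class-weighted random forest.

```python
# PCA spectral features + texture features -> random forest detector.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_pixels, n_bands = 2000, 120
cube = rng.normal(size=(n_pixels, n_bands))       # sea/ship pixel spectra
texture = rng.random((n_pixels, 8))               # stand-in GLCM features
labels = rng.random(n_pixels) < 0.05              # ships are rare

spectral = PCA(n_components=10).fit_transform(cube)
X = np.hstack([spectral, texture])                # multiple-feature stack

rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            random_state=0).fit(X, labels)
print("ship prob. of first 5 pixels:", rf.predict_proba(X[:5])[:, 1].round(2))
```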
NASA Technical Reports Server (NTRS)
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H; Chen, W; Kligerman, S
2014-06-15
Purpose: To develop predictive models using quantitative PET/CT features for the evaluation of tumor response to neoadjuvant chemo-radiotherapy (CRT) in patients with locally advanced esophageal cancer. Methods: This study included 20 patients who underwent tri-modality therapy (CRT + surgery) and had 18F-FDG PET/CT scans before initiation of CRT and 4-6 weeks after completion of CRT but prior to surgery. Four groups of tumor features were examined: (1) conventional PET/CT response measures (SUVmax, tumor diameter, etc.); (2) clinical parameters (TNM stage, histology, etc.) and demographics; (3) spatial-temporal PET features, which characterize tumor SUV intensity distribution, spatial patterns, geometry, and associated changes resulting from CRT; and (4) all features combined. An optimal feature set was identified with recursive feature selection and cross-validations. Support vector machine (SVM) and logistic regression (LR) models were constructed for prediction of pathologic tumor response to CRT, using cross-validations to avoid model over-fitting. Prediction accuracy was assessed via area under the receiver operating characteristic curve (AUC), and precision was evaluated via confidence intervals (CIs) of AUC. Results: When applied to the 4 groups of tumor features, the LR model achieved AUCs (95% CI) of 0.57 (0.10), 0.73 (0.07), 0.90 (0.06), and 0.90 (0.06). The SVM model achieved AUCs (95% CI) of 0.56 (0.07), 0.60 (0.06), 0.94 (0.02), and 1.00 (no misclassifications). Using spatial-temporal PET features combined with conventional PET/CT measures and clinical parameters, the SVM model achieved very high accuracy (AUC 1.00) and precision (no misclassifications), significantly better than using conventional PET/CT measures or clinical parameters and demographics alone. For groups with a large number of tumor features (groups 3 and 4), the SVM model achieved significantly higher accuracy than the LR model. Conclusion: The SVM model using all features including quantitative PET/CT features accurately and precisely predicted pathologic tumor response to CRT in esophageal cancer. This work was supported in part by National Cancer Institute Grants R21 CA131979 and R01 CA172638. Shan Tan was supported in part by the National Natural Science Foundation of China 60971112 and 61375018, and by Fundamental Research Funds for the Central Universities 2012QN086.
The future of primordial features with large-scale structure surveys
NASA Astrophysics Data System (ADS)
Chen, Xingang; Dvorkin, Cora; Huang, Zhiqi; Namjoo, Mohammad Hossein; Verde, Licia
2016-11-01
Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys on the detection and constraints of these features. We classify primordial feature models into several classes, and for each class we present a simple template of power spectrum that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity on features signals that are oscillatory in scales, due to the 3D information. For a broad range of models, these surveys will be able to reduce the errors of the amplitudes of the features by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. Therefore, LSS surveys offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of both types of surveys.
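For concreteness, templates of the kind referred to above typically multiply the smooth spectrum by a small oscillation; representative forms (illustrative of this literature, not necessarily the exact templates used in the paper) are:

```latex
% Resonance-type feature: oscillation periodic in ln k
P(k) = P_0(k)\left[1 + A\,\sin\!\left(\Omega \ln\frac{k}{k_*} + \phi\right)\right]
% Sharp-feature type: oscillation periodic in k
P(k) = P_0(k)\left[1 + A\,\sin\!\left(\omega k + \phi\right)\right]
```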
Modeling formalisms in Systems Biology
2011-01-01
Systems Biology has taken advantage of computational tools and high-throughput experimental data to model several biological processes. These include signaling, gene regulatory, and metabolic networks. However, most of these models are specific to each kind of network. Their interconnection demands a whole-cell modeling framework for a complete understanding of cellular systems. We describe the features required by an integrated framework for modeling, analyzing and simulating biological processes, and review several modeling formalisms that have been used in Systems Biology including Boolean networks, Bayesian networks, Petri nets, process algebras, constraint-based models, differential equations, rule-based models, interacting state machines, cellular automata, and agent-based models. We compare the features provided by different formalisms, and discuss recent approaches in the integration of these formalisms, as well as possible directions for the future. PMID:22141422
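As a tiny concrete instance of one formalism from this list, the sketch below iterates a synchronous Boolean network for an invented three-gene circuit until its state repeats:

```python
# Synchronous Boolean network for a toy three-gene circuit.
def step(state):
    a, b, c = state
    return (not c,        # A is repressed by C
            a,            # B follows A
            a and b)      # C needs both A and B

state = (True, False, False)
seen = []
while state not in seen:
    seen.append(state)
    state = step(state)
print("trajectory:", seen, "-> cycle re-entry at", state)
```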
Cellular-based modeling of oscillatory dynamics in brain networks.
Skinner, Frances K
2012-08-01
Oscillatory, population activities have long been known to occur in our brains during different behavioral states. We know that many different cell types exist and that they contribute in distinct ways to the generation of these activities. I review recent papers that involve cellular-based models of brain networks, most of which include theta, gamma and sharp wave-ripple activities. To help organize the modeling work, I present it from a perspective of three different types of cellular-based modeling: 'Generic', 'Biophysical' and 'Linking'. Cellular-based modeling is taken to encompass the four features of experiment, model development, theory/analyses, and model usage/computation. The three modeling types are shown to include these features and interactions in different ways. Copyright © 2012 Elsevier Ltd. All rights reserved.
Modeling asthma: Pitfalls, promises, and the road ahead.
Rosenberg, Helene F; Druey, Kirk M
2018-02-16
Asthma is a chronic, heterogeneous, and recurring inflammatory disease of the lower airways, with exacerbations that feature airway inflammation and bronchial hyperresponsiveness. Asthma has been modeled extensively via disease induction in both wild-type and genetically manipulated laboratory mice (Mus musculus). Antigen sensitization and challenge strategies have reproduced numerous important features of airway inflammation characteristic of human asthma, notably the critical roles of type 2 T helper cell cytokines. Recent models of disease induction have advanced to include physiologic aeroallergens with prolonged respiratory challenge without systemic sensitization; others incorporate tobacco, respiratory viruses, or bacteria as exacerbants. Nonetheless, differences in lung size, structure, and physiologic responses limit the degree to which airway dynamics measured in mice can be compared to human subjects. Other rodent allergic airways models, including those featuring the guinea pig (Cavia porcellus) might be considered for lung function studies. Finally, domestic cats (Feline catus) and horses (Equus caballus) develop spontaneous obstructive airway disorders with clinical and pathologic features that parallel human asthma. Information on pathogenesis and treatment of these disorders is an important resource. ©2018 Society for Leukocyte Biology.
Ontology patterns for complex topographic feature types
Varanka, Dalia E.
2011-01-01
Complex feature types are defined as integrated relations between basic features for a shared meaning or concept. The shared semantic concept is difficult to define in commonly used geographic information systems (GIS) and remote sensing technologies. The role of spatial relations between complex feature parts was recognized in early GIS literature, but had limited representation in the feature or coverage data models of GIS. Spatial relations are more explicitly specified in semantic technology. In this paper, semantics for topographic feature ontology design patterns (ODP) are developed as data models for the representation of complex features. In the context of topographic processes, component assemblages are supported by resource systems and are found on local landscapes. The topographic ontology is organized across six thematic modules that can account for basic feature types, resource systems, and landscape types. Types of complex feature attributes include location, generative processes and physical description. Node/edge networks model standard spatial relations and relations specific to topographic science to represent complex features. To demonstrate these concepts, data from The National Map of the U. S. Geological Survey was converted and assembled into ODP.
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
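A sketch of the correlation-based stability baseline discussed above, under the assumption that each stimulus is presented several times: a feature's stability is its mean pairwise correlation across repeats, and the most stable features are kept. Data are synthetic.

```python
# Correlation-based feature stability selection on synthetic responses.
import numpy as np

rng = np.random.default_rng(8)
n_stimuli, n_repeats, n_feat = 30, 4, 500
signal = rng.normal(size=(n_stimuli, 1, n_feat))
data = signal + rng.normal(scale=2.0, size=(n_stimuli, n_repeats, n_feat))

def stability(feature):                       # mean pairwise correlation
    reps = feature.T                          # (n_repeats, n_stimuli)
    cors = np.corrcoef(reps)
    return cors[np.triu_indices(n_repeats, k=1)].mean()

scores = np.array([stability(data[:, :, f]) for f in range(n_feat)])
top = np.argsort(scores)[::-1][:50]           # 50 most stable features
print("mean stability of selected:", round(scores[top].mean(), 3))
```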
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fave, X; Court, L; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX
Purpose: To determine how radiomics features change during radiation therapy and whether those changes (delta-radiomics features) can improve prognostic models built with clinical factors. Methods: 62 radiomics features, including histogram, co-occurrence, run-length, gray-tone difference, and shape features, were calculated from pretreatment and weekly intra-treatment CTs for 107 stage III NSCLC patients (5-9 images per patient). Image preprocessing for each feature was determined using the set of pretreatment images: bit-depth resample and/or a smoothing filter were tested for their impact on volume-correlation and significance of each feature in univariate cox regression models to maximize their information content. Next, the optimized features were calculated from the intratreatment images and tested in linear mixed-effects models to determine which features changed significantly with dose-fraction. The slopes in these significant features were defined as delta-radiomics features. To test their prognostic potential multivariate cox regression models were fitted, first using only clinical features and then clinical+delta-radiomics features for overall-survival, local-recurrence, and distant-metastases. Leave-one-out cross validation was used for model-fitting and patient predictions. Concordance indices(c-index) and p-values for the log-rank test with patients stratified at the median were calculated. Results: Approximately one-half of the 62 optimized features required no preprocessing, one-fourth required smoothing, and one-fourth required smoothing and resampling. From these, 54 changed significantly during treatment. For overall-survival, the c-index improved from 0.52 for clinical factors alone to 0.62 for clinical+delta-radiomics features. For distant-metastases, the c-index improved from 0.53 to 0.58, while for local-recurrence it did not improve. Patient stratification significantly improved (p-value<0.05) for overallsurvival and distant-metastases when delta-radiomics features were included. The delta-radiomics versions of autocorrelation, kurtosis, and compactness were selected most frequently in leave-one-out iterations. Conclusion: Weekly changes in radiomics features can potentially be used to evaluate treatment response and predict patient outcomes. High-risk patients could be recommended for dose escalation or consolidation chemotherapy. This project was funded in part by grants from the National Cancer Institute (NCI) and the Cancer Prevention Research Institute of Texas (CPRIT).
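The mixed-effects testing and Cox modeling are not reproduced here; the sketch below shows only the "delta" step under simplifying assumptions, fitting a per-patient line to a feature's weekly values and keeping the slope.

```python
# Delta-radiomics: per-patient slope of a feature over weekly imaging.
import numpy as np

rng = np.random.default_rng(9)
weeks = np.arange(7)                              # pretreatment + 6 weekly CTs
patients = rng.normal(size=(107, 1)) * 0.3        # per-patient trend
feature = 5.0 + patients * weeks + rng.normal(scale=0.2, size=(107, 7))

slopes = np.array([np.polyfit(weeks, f, deg=1)[0] for f in feature])
print("delta-feature (slope) for first 3 patients:", slopes[:3].round(3))
```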
Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin
2018-01-01
Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolution dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80%, while 20% were set aside for validation testing. Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, Hunt-Hess, and Glasgow Coma Scales. An unsupervised approach using convolution dictionary learning was used to extract features from physiological time series (systolic blood pressure and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on derivation dataset and 0.78 on validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.
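A rough sketch of the unsupervised feature step, approximating convolution dictionary learning with windowed (patch-based) dictionary learning on one synthetic heart-rate trace; the mean absolute sparse code stands in for the physiologic features fed to the classifiers.

```python
# Windowed dictionary learning over a physiologic time series.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(10)
hr = 70 + 5 * np.sin(np.arange(2000) / 30.0) + rng.normal(size=2000)

win = 50
windows = np.lib.stride_tricks.sliding_window_view(hr, win)[::25]  # 50% overlap
windows = windows - windows.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
codes = dico.fit_transform(windows)
patient_feature = np.abs(codes).mean(axis=0)      # one vector per patient
print("physiologic feature vector:", patient_feature.round(2))
```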
The amphibian egg as a model system for analyzing gravity effects
NASA Technical Reports Server (NTRS)
Malacinski, G. M.; Neff, A. W.
1989-01-01
Amphibian eggs provide several advantageous features as a model system for analyzing the effects of gravity on single cells. Those features include large size, readily tracked intracellular inclusions, and ease of experimental manipulation. Employing novel gravity orientation as a tool, a substantial data base is being developed. That information is being used to construct a three-dimensional model of the frog (Xenopus laevis) egg. Internal cytoplasmic organization (rather than surface features) are being emphasized. Several cytoplasmic compartments (domains) have been elucidated, and their behavior in inverted eggs monitored. They have been incorporated into the model, and serve as a point of departure for further inquiry and speculation.
Use of Animal Models to Develop Antiaddiction Medications
Gardner, Eliot L.
2008-01-01
Although addiction is a uniquely human phenomenon, some of its pathognomonic features can be modeled at the animal level. Such features include the euphoric “high” produced by acute administration of addictive drugs; the dysphoric “crash” produced by acute withdrawal, drug-seeking, and drug-taking behaviors; and relapse to drug-seeking behavior after achieving successful abstinence. Animal models exist for each of these features. In this review, I focus on various animal models of addiction and how they can be used to search for clinically effective antiaddiction medications. I conclude by noting some of the new and novel medications that have been developed preclinically using such models and the hope for further developments along such lines. PMID:18803910
Features in visual search combine linearly
Pramod, R. T.; Arun, S. P.
2014-01-01
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features, in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
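In numbers, the winning co-activation rule says reciprocal reaction times add; a minimal worked version, with invented RTs and weights, looks like this:

```python
# Linear co-activation of single-feature search rates (1/RT).
import numpy as np

rt_intensity, rt_length = 1.20, 1.50          # single-feature search RTs (s)
w = np.array([0.6, 0.4])                      # fitted combination weights

rate = w @ np.array([1 / rt_intensity, 1 / rt_length])   # co-activation
rt_conjunction_pred = 1 / rate
print(f"predicted multi-feature RT: {rt_conjunction_pred:.2f} s")
# A race model would instead predict from the RTs themselves (e.g., their
# minimum), which fit these data worse in the study above.
```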
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? Also, how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers developing a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to initially obtain a distance metric in different visual spaces, and an MDML method is used to assign optimal weights for different modalities. Next, we conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model to use multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
SketchBio: a scientist's 3D interface for molecular modeling and animation.
Waldon, Shawn M; Thompson, Peter M; Hahn, Patrick J; Taylor, Russell M
2014-10-30
Because of the difficulties involved in learning and using 3D modeling and rendering software, many scientists hire programmers or animators to create models and animations. This both slows the discovery process and provides opportunities for miscommunication. Working with multiple collaborators, a tool was developed (based on a set of design goals) to enable them to directly construct models and animations. SketchBio is presented, a tool that incorporates state-of-the-art bimanual interaction and drop shadows to enable rapid construction of molecular structures and animations. It includes three novel features: crystal-by-example, pose-mode physics, and spring-based layout that accelerate operations common in the formation of molecular models. Design decisions and their consequences are presented, including cases where iterative design was required to produce effective approaches. The design decisions, novel features, and inclusion of state-of-the-art techniques enabled SketchBio to meet all of its design goals. These features and decisions can be incorporated into existing and new tools to improve their effectiveness.
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.
Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick
2017-10-01
In this paper, we investigate visual attention modeling for stereoscopic video from two aspects. First, we build a large-scale eye tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated from the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye tracking database and on one other database (DML-ITRACK-3D).
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From a theoretical point of view, the main contribution is that the framework achieves unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From a performance point of view, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly comprising the key components, their spatial relations and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can achieve classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and a good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.
NASA Astrophysics Data System (ADS)
Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei
2011-04-01
Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. However, obtaining detection probability in practice expends a great deal of time and manpower, and because personnel differ in practical knowledge and experience, the data obtained often vary widely. By studying the relationship between image features and perception quantity through psychological experiments, a probability model has been established as follows. First, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Second, the relationship between each single image feature similarity degree and perception quantity was established on psychological principles, and target-interpretation experiments were designed involving about five hundred interpreters and two hundred images. To reduce correlations among the image features, a large number of synthetic images were produced, each differing from its background in only a single feature: brightness, chromaticity, texture, or shape. By analyzing and fitting a large body of experimental data, the model quantities were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between perception quantity and target detection probability was found. Verified against a great deal of practical target interpretation, the model yields target detection probability quickly and objectively.
TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krafft, S; The University of Texas Graduate School of Biomedical Sciences, Houston, TX; Briere, T
2015-06-15
Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data were collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥ 3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC ~ 0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC = 0.648). Analysis of the full set of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC = 0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP modeling is warranted. This work was supported by the Rosalie B. Hite Fellowship in Cancer Research awarded to SPK.
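The LASSO step described above can be sketched with scikit-learn's L1-penalized logistic regression; the data below are random stand-ins with the cohort's dimensions (198 patients, 3888 candidate predictors), so the printed AUC is near 0.5 by construction, and the penalty strength C is an arbitrary choice, not the paper's setting.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Random stand-in data with the cohort's dimensions: 198 patients and
    # 3888 candidate predictors (clinical + dosimetric + image features).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(198, 3888))
    y = (rng.uniform(size=198) < 0.15).astype(int)  # RP grade >= 3 labels

    # The L1 (LASSO) penalty drives most coefficients to exactly zero,
    # performing feature selection inside the logistic fit.
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    )
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.3f}")  # ~0.5 on random data, by design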
Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.
Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang
2016-01-01
Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease.
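A minimal sketch of the optimal pipeline above (feature selection down to 45 of 129 features, then an SVM) might look as follows; since ReliefF is not part of scikit-learn, mutual information is substituted as the selection score, and the data are synthetic stand-ins rather than lesion images.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in: 129 lesion features, 4 disease classes.
    X, y = make_classification(n_samples=400, n_features=129, n_informative=45,
                               n_classes=4, n_clusters_per_class=1,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # The study used ReliefF; mutual information stands in for it here.
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=45),  # keep 45 of 129 features
        SVC(kernel="rbf"),
    )
    model.fit(X_tr, y_tr)
    print(f"test accuracy: {model.score(X_te, y_te):.3f}")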
ERIC Educational Resources Information Center
Dunst, Carl J.
2015-01-01
A model for designing and implementing evidence-based in-service professional development in early childhood intervention as well as the key features of the model are described. The key features include professional development specialist (PDS) description and demonstration of an intervention practice, active and authentic job-embedded…
NASA Technical Reports Server (NTRS)
Hubbard, R.
1974-01-01
The radially-streaming particle model for broad quasar and Seyfert galaxy emission features is modified to include sources of time dependence. The results are suggestive of reported observations of multiple components, variability, and transient features in the wings of Seyfert and quasi-stellar emission lines.
Complex Topographic Feature Ontology Patterns
Varanka, Dalia E.; Jerris, Thomas J.
2015-01-01
Semantic ontologies are examined as effective data models for the representation of complex topographic feature types. Complex feature types are viewed as integrated relations between basic features for a basic purpose. In the context of topographic science, such component assemblages are supported by resource systems and found on the local landscape. Ontologies are organized within six thematic modules of a domain ontology called Topography that includes within its sphere basic feature types, resource systems, and landscape types. Context is constructed not only as a spatial and temporal setting, but also as a setting based on environmental processes. Types of spatial relations that exist between components include location, generative processes, and description. An example is offered for the complex feature type ‘mine.’ The identification and extraction of complex feature types are an area for future research.
Yoo, Kwangsun; Rosenberg, Monica D; Hsu, Wei-Ting; Zhang, Sheng; Li, Chiang-Shan R; Scheinost, Dustin; Constable, R Todd; Chun, Marvin M
2018-02-15
Connectome-based predictive modeling (CPM; Finn et al., 2015; Shen et al., 2017) was recently developed to predict individual differences in traits and behaviors, including fluid intelligence (Finn et al., 2015) and sustained attention (Rosenberg et al., 2016a), from functional brain connectivity (FC) measured with fMRI. Here, using the CPM framework, we compared the predictive power of three different measures of FC (Pearson's correlation, accordance, and discordance) and two different prediction algorithms (linear and partial least square [PLS] regression) for attention function. Accordance and discordance are recently proposed FC measures that respectively track in-phase synchronization and out-of-phase anti-correlation (Meskaldji et al., 2015). We defined connectome-based models using task-based or resting-state FC data, and tested the effects of (1) functional connectivity measure and (2) feature-selection/prediction algorithm on individualized attention predictions. Models were internally validated in a training dataset using leave-one-subject-out cross-validation, and externally validated with three independent datasets. The training dataset included fMRI data collected while participants performed a sustained attention task and rested (N = 25; Rosenberg et al., 2016a). The validation datasets included: 1) data collected during performance of a stop-signal task and at rest (N = 83, including 19 participants who were administered methylphenidate prior to scanning; Farr et al., 2014a; Rosenberg et al., 2016b), 2) data collected during Attention Network Task performance and rest (N = 41, Rosenberg et al., in press), and 3) resting-state data and ADHD symptom severity from the ADHD-200 Consortium (N = 113; Rosenberg et al., 2016a). Models defined using all combinations of functional connectivity measure (Pearson's correlation, accordance, and discordance) and prediction algorithm (linear and PLS regression) predicted attentional abilities, with correlations between predicted and observed measures of attention as high as 0.9 for internal validation, and 0.6 for external validation (all p's < 0.05). Models trained on task data outperformed models trained on rest data. Pearson's correlation and accordance features generally showed a small numerical advantage over discordance features, while PLS regression models were usually better than linear regression models. Overall, in addition to correlation features combined with linear models (Rosenberg et al., 2016a), it is useful to consider accordance features and PLS regression for CPM.
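A minimal sketch of the CPM procedure, under the simplifying assumption that only the positively correlated network is used (the published pipeline also considers negative correlations and other FC measures): edges are selected by their training-set correlation with behavior, each held-in subject is summarized by summed strength over the selected edges, and a linear model is fit and applied to the held-out subject. All data here are synthetic.

    import numpy as np
    from scipy import stats

    def cpm_loso(fc, behavior, p_thresh=0.01):
        """Minimal CPM with leave-one-subject-out cross-validation.
        fc: (n_subjects, n_edges) connectivity; behavior: (n_subjects,)."""
        n = len(behavior)
        preds = np.empty(n)
        for i in range(n):
            train = np.arange(n) != i
            X, y = fc[train], behavior[train]
            # Select edges whose FC correlates with behavior in training data.
            xz = (X - X.mean(0)) / X.std(0)
            yz = (y - y.mean()) / y.std()
            r = xz.T @ yz / len(y)
            t = r * np.sqrt((len(y) - 2) / (1 - r ** 2))
            p = 2 * stats.t.sf(np.abs(t), df=len(y) - 2)
            edges = (r > 0) & (p < p_thresh)
            # Summed strength over selected edges -> linear model -> prediction.
            slope, intercept = np.polyfit(X[:, edges].sum(1), y, 1)
            preds[i] = intercept + slope * fc[i, edges].sum()
        return stats.pearsonr(preds, behavior)[0]

    rng = np.random.default_rng(0)
    fc = rng.normal(size=(25, 1000))                    # 25 subjects, 1000 edges
    behavior = fc[:, :3].sum(1) + rng.normal(0, 0.5, 25)  # driven by 3 edges
    print(f"predicted vs. observed r = {cpm_loso(fc, behavior):.2f}")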
Liu, Zhiya; Song, Xiaohong; Seger, Carol A
2015-01-01
We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting.
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlational design. The construction of the computational model is divided into four separate phases, each with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and on the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of listeners' self-reports of the emotions expressed by music, and they show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed.
NASA Technical Reports Server (NTRS)
Choo, Y. K.; Staiger, P. J.
1982-01-01
The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.
On the effect of model parameters on forecast objects
NASA Astrophysics Data System (ADS)
Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott
2018-04-01
Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
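Ingredients (1) and (3) of the methodology can be sketched in a few lines of Python; the object features below are generated from an assumed toy relationship rather than a real forecast model, and the clustering step (2) is stood in for by that assumption.

    import numpy as np
    from scipy.stats import qmc

    # (1) Latin hypercube sample of 3 model parameters for 50 model runs.
    sampler = qmc.LatinHypercube(d=3, seed=0)
    params = qmc.scale(sampler.random(50),
                       l_bounds=[0.1, 0.5, 1.0], u_bounds=[1.0, 2.0, 5.0])

    # (2) In place of running a forecast model and clustering its fields,
    # assume each run yields two object features via a toy relationship.
    rng = np.random.default_rng(0)
    size = 2.0 * params[:, 0] + 0.5 * params[:, 2] + rng.normal(0, 0.1, 50)
    intensity = 1.5 * params[:, 1] + rng.normal(0, 0.1, 50)
    features = np.column_stack([size, intensity])

    # (3) Multivariate multiple regression of all features on all parameters.
    X = np.column_stack([np.ones(50), params])
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    print("sensitivities (rows: intercept + 3 params; cols: size, intensity):")
    print(np.round(beta, 2))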
78 FR 41684 - Special Conditions: Embraer S.A. Model EMB-550 Airplanes, Sudden Engine Stoppage
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-11
... airplane has novel or unusual design features as compared to the state of technology envisioned in the airworthiness standards for transport-category airplanes. These design features include engine size and the... contain adequate or appropriate safety standards for this design feature. These special conditions contain...
Interactions between hyporheic flow produced by stream meanders, bars, and dunes
Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.
2013-01-01
Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance and interactions between flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to, but not linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts at three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, the appearance context features and the interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at the different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level can improve event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.
Feature Selection Methods for Zero-Shot Learning of Neural Activity
Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional magnetic resonance imaging and electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
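The correlation-based stability criterion mentioned above can be sketched as follows, under the assumption (ours, not necessarily the paper's exact formulation) that a feature's stability is the mean pairwise correlation of its stimulus-response profile across repeated presentations; the data are synthetic.

    import numpy as np

    def stability_scores(responses):
        """responses: (n_repetitions, n_stimuli, n_features) neural data.
        Scores each feature by the mean pairwise correlation of its stimulus
        response profile across repetitions (the 'stability' criterion)."""
        n_rep, n_stim, n_feat = responses.shape
        scores = np.empty(n_feat)
        for f in range(n_feat):
            profiles = responses[:, :, f]        # (n_rep, n_stim)
            c = np.corrcoef(profiles)            # repetition-by-repetition
            scores[f] = c[np.triu_indices(n_rep, 1)].mean()
        return scores

    rng = np.random.default_rng(0)
    signal = rng.normal(size=(1, 60, 500))               # stable stimulus tuning
    data = 0.5 * signal + rng.normal(size=(6, 60, 500))  # 6 noisy repetitions
    top = np.argsort(stability_scores(data))[::-1][:50]  # keep 50 stable features
    print("selected feature indices:", top[:10])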
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, A; Veeraraghavan, H; Oh, J
Purpose: To present an open source and free platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities, such as CT, PET, MR, SPECT and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and readability of code for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of this toolbox, for example, the Java-based DCM4CHE for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source software under the GNU license. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC; that analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
Zhai, Binxu; Chen, Jianguo
2018-04-18
A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including simplification, polynomial, transformation and combination procedures, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Among meteorological factors, local extreme wind speeds and maximal wind speeds exert the greatest effect on the cross-regional transport of contaminants. Pollutants from the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m³. For single-pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of the stacked ensemble model.
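A compact sketch of the stacked generalization scheme, using scikit-learn's StackingRegressor: the paper's XGBoost and GA-optimized MLP are approximated here by sklearn's gradient boosting and a default MLP, and the PM2.5 data are replaced by a synthetic regression problem, so the printed score is illustrative only.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                                  StackingRegressor)
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    # Synthetic stand-in for daily PM2.5 with engineered features.
    X, y = make_regression(n_samples=500, n_features=30, noise=10.0,
                           random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Level-0 learners blended at level 1 by SVR via stacked generalization.
    stack = StackingRegressor(
        estimators=[
            ("lasso", Lasso(alpha=1.0)),
            ("ada", AdaBoostRegressor(random_state=0)),
            ("gbr", GradientBoostingRegressor(random_state=0)),
            ("mlp", MLPRegressor(max_iter=2000, random_state=0)),
        ],
        final_estimator=SVR(kernel="rbf"),
    )
    stack.fit(X_tr, y_tr)
    print(f"test R^2: {stack.score(X_te, y_te):.2f}")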
A feature-based approach to modeling protein-protein interaction hot spots.
Cho, Kyu-il; Kim, Dongsup; Lee, Doheon
2009-05-01
Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using a support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the type of protein complex, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top three, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to pi-related interactions, especially π-π interactions.
An international standard for observation data
NASA Astrophysics Data System (ADS)
Cox, Simon
2010-05-01
A generic information model for observations and related features supports data exchange both within and between different scientific and technical communities. Observations and Measurements (O&M) formalizes a neutral terminology for observation data and metadata. It was based on a model developed for medical observations, and draws on experience from geology and mineral exploration, in-situ monitoring, remote sensing, intelligence, biodiversity studies, ocean observations and climate simulations. Hundreds of current deployments of Sensor Observation Services (SOS), covering multiple disciplines, provide validation of the O&M model. A W3C Incubator group on 'Semantic Sensor Networks' is now using O&M as one of the bases for development of a formal ontology for sensor networks. O&M defines the information describing observation acts and their results, including the following key terms: observation, result, observed-property, feature-of-interest, procedure, phenomenon-time, and result-time. The model separates the (meta-)data associated with the observation procedure, the observed feature, and the observation event itself. Observation results may take various forms, including scalar quantities, categories, vectors, grids, or any data structure required to represent the value of some property of some observed feature. O&M follows the ISO/TC 211 General Feature Model, so non-geometric properties must be associated with typed feature instances. This requires formalization of information that may be trivial when working within some earth-science sub-disciplines (e.g. temperature, pressure etc. are associated with the atmosphere or ocean, and not just a location) but is critical to cross-disciplinary applications. It also allows the same structure and terminology to be used for in-situ, ex-situ and remote sensing observations, as well as for simulations. For example: a stream level observation is an in-situ monitoring application where the feature-of-interest is a reach, the observed property is water-level, and the result is a time-series of heights; stream quality is usually determined by ex-situ observation where the feature-of-interest is a specimen that is recovered from the stream, the observed property is water-quality, and the result is a set of measures of various parameters, or an assessment derived from these; on the other hand, distribution of surface temperature of a water body is typically determined through remote sensing, where at observation time the procedure is located distant from the feature-of-interest, and the result is an image or grid. Observations usually involve sampling of an ultimate feature-of-interest. In the environmental sciences common sampling strategies are used. Spatial sampling is classified primarily by topological dimension (point, curve, surface, volume) and is supported by standard processing and visualisation tools. Specimens are used for ex-situ processing in most disciplines. Sampling features are often part of complexes (e.g. specimens are sub-divided; specimens are retrieved from points along a transect; sections are taken across tracts), so relationships between instances must be recorded. Observational campaigns likewise involve collections of sampling features. The sampling feature model is a core part of O&M, and application experience has shown that describing the relationships between sampling features and observations is generally critical to successful use of the model.
O&M was developed through the Open Geospatial Consortium (OGC) as part of the Sensor Web Enablement (SWE) initiative. Other SWE standards include SensorML, SOS, and the Sensor Planning Service (SPS). The OGC O&M standard (Version 1) had two parts: part 1 describes observation events, and part 2 provides a schema for sampling features. A revised version of O&M (Version 2) is to be published in a single document as ISO 19156. O&M Version 1 included an XML encoding for data exchange, which is used as the payload for SOS responses. The new version will provide a UML model only. Since an XML encoding may be generated following a rule, such as that presented in ISO 19136 (GML 3.2), it is not included in the standard directly. O&M Version 2 thus supports multiple physical implementations and versions.
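To make the key O&M terms concrete, here is a minimal and unofficial Python rendering of an observation record; the standard itself defines a UML model, not this class, and all field values below are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Any

    @dataclass
    class Observation:
        """Unofficial sketch mirroring the O&M key terms listed above."""
        observed_property: str      # e.g. "water-level"
        feature_of_interest: str    # e.g. a stream reach or a specimen
        procedure: str              # sensor, observer, or simulation used
        phenomenon_time: datetime   # when the property value applies
        result_time: datetime       # when the result became available
        result: Any                 # scalar, category, time series, grid, ...

    obs = Observation(
        observed_property="water-level",
        feature_of_interest="reach-42A",
        procedure="pressure-transducer-07",
        phenomenon_time=datetime(2010, 5, 1, 12, 0),
        result_time=datetime(2010, 5, 1, 12, 5),
        result=[(datetime(2010, 5, 1, 12, 0), 1.82)],  # (time, height m)
    )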
Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En
2015-06-01
Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
Formal Language Design in the Context of Domain Engineering
2000-03-28
Related work covers feature-oriented domain analysis (FODA), organizational domain modeling (ODM), and domain-specific software… Only a few domain analysis methods are well defined and used repeatedly in practice; these include feature-oriented domain analysis (FODA) and organizational domain modeling (ODM). Feature-oriented domain analysis (FODA) is a domain analysis method being researched and applied by the SEI.
Jozwik, Kamila M; Kriegeskorte, Nikolaus; Mur, Marieke
2016-03-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as "human", "mammal", and "animal"). The feature-based model includes both object parts (such as "eye", "tail", and "handle") and other descriptive features (such as "circular", "green", and "stubbly"). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation.
NASA Technical Reports Server (NTRS)
Bergan, Andrew C.; Leone, Frank A., Jr.
2016-01-01
A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges, including feature redundancy, unbalanced data, and small sample sizes, have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving the predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of the prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were the optimal predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, the Synthetic Minority Over-sampling Technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
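One plausible reading of the optimal configuration above, sketched with scikit-learn and the imbalanced-learn package (a tooling assumption; the paper does not prescribe an implementation): PCA addresses feature redundancy, SMOTE addresses class imbalance inside the training folds, and a random forest classifies. The data are random stand-ins with the study's approximate dimensions, so the printed AUC is near chance by design.

    import numpy as np
    from imblearn.over_sampling import SMOTE      # imbalanced-learn package
    from imblearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical cohort: 112 patients, many redundant radiomic features,
    # unbalanced recurrence labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(112, 400))
    y = (rng.uniform(size=112) < 0.2).astype(int)

    # PCA for redundancy, SMOTE for imbalance (applied only within training
    # folds by the imblearn pipeline), random forest as the classifier.
    model = make_pipeline(PCA(n_components=20), SMOTE(random_state=0),
                          RandomForestClassifier(random_state=0))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.2f}")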
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z; MD Anderson Cancer Center, Houston, TX; Ho, A
Purpose: To develop and validate a prediction model using radiomics features extracted from MR images to distinguish radiation necrosis from tumor progression for brain metastases treated with Gamma Knife radiosurgery. Methods: The images used to develop the model were T1 post-contrast MR scans from 71 patients who had pathologic confirmation of necrosis or progression; 1 lesion was identified per patient (17 necrosis and 54 progression). Radiomics features were extracted from 2 images at 2 time points per patient, both obtained prior to resection. Each lesion was manually contoured on each image, and 282 radiomics features were calculated for each lesion. The correlation for each radiomics feature between the two time points was calculated within each group to identify a subset of features with distinct values between the two groups. The delta of this subset of radiomics features, characterizing changes from the earlier time point to the later one, was included as a covariate to build a prediction model using support vector machines with a cubic polynomial kernel function. The model was evaluated with a 10-fold cross-validation. Results: Forty radiomics features were selected based on consistent correlation values of approximately 0 for the necrosis group and >0.2 for the progression group. In performing the 10-fold cross-validation, we narrowed this number down to 11 delta radiomics features for the model. This 11-delta-feature model showed an overall prediction accuracy of 83.1%, with a true positive rate of 58.8% in predicting necrosis and 90.7% for predicting tumor progression. The area under the curve for the prediction model was 0.79. Conclusion: These delta radiomics features extracted from MR scans showed potential for distinguishing radiation necrosis from tumor progression. This tool may be a useful, noninvasive means of determining the status of an enlarging lesion after radiosurgery, aiding decision-making regarding surgical resection versus conservative medical management.
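A sketch of the modeling step, assuming standardized delta features feeding an SVM with a cubic polynomial kernel under 10-fold cross-validation; the 71-lesion, 11-feature data below are simulated, not the study's MR measurements.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Simulated inputs: 11 selected features at two pre-resection time points
    # for 71 lesions (17 necrosis, 54 progression).
    rng = np.random.default_rng(0)
    feat_t1 = rng.normal(size=(71, 11))
    feat_t2 = feat_t1 + rng.normal(0.0, 0.3, size=(71, 11))
    delta = feat_t2 - feat_t1                          # the delta features
    y = np.r_[np.zeros(17), np.ones(54)].astype(int)   # 0 necrosis, 1 progression

    # SVM with a cubic polynomial kernel, evaluated by 10-fold CV.
    model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
    acc = cross_val_score(model, delta, y, cv=10).mean()
    print(f"10-fold accuracy: {acc:.2f}")  # ~majority rate on simulated data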
Modeling Autistic Features in Animals
Patterson, Paul H.
2011-01-01
A variety of features of autism can be simulated in rodents, including the core behavioral hallmarks of stereotyped and repetitive behaviors, and deficits in social interaction and communication. Other behaviors frequently found in autism spectrum disorders (ASD) such as neophobia, enhanced anxiety, abnormal pain sensitivity and eye blink conditioning, disturbed sleep patterns, seizures, and deficits in sensorimotor gating are also present in some of the animal models. Neuropathology and some characteristic neurochemical changes that are frequently seen in autism, as well as alterations in the immune status in the brain and periphery are also found in some of the models. Several known environmental risk factors for autism have been successfully established in rodents, including maternal infection and maternal valproate administration. Also under investigation are a number of mouse models based on genetic variants associated with autism or on syndromic disorders with autistic features. This review briefly summarizes recent developments in this field, highlighting models with face and/or construct validity, and noting the potential for investigation of pathogenesis and early progress towards clinical testing of potential therapeutics. Wherever possible, reference is made to reviews rather than primary articles.
Predictive information processing in music cognition. A critical review.
Rohrmeier, Martin A; Koelsch, Stefan
2012-02-01
Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, and connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations at different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, combinations of neural and computational modelling methodologies are at early stages and require further research.
Imaging genetics approach to predict progression of Parkinson's diseases.
Mansu Kim; Seong-Jin Son; Hyunjin Park
2017-07-01
Imaging genetics is a tool to extract genetic variants associated with both clinical phenotypes and imaging information. The approach can extract additional genetic variants compared to conventional approaches and thus better investigate various disease conditions. Here, we applied imaging genetics to study Parkinson's disease (PD). We aimed to extract significant features derived from imaging genetics and neuroimaging. We built a regression model based on the extracted significant features, combining genetics and neuroimaging, to better predict clinical scores of PD progression (i.e., MDS-UPDRS). Our model yielded a high correlation (r = 0.697, p < 0.001) and a low root mean squared error (8.36) between predicted and actual MDS-UPDRS scores. Neuroimaging predictors for the regression model were computed from 123I-Ioflupane SPECT using an independent component analysis approach. Genetic features were computed using an imaging genetics approach with the identified neuroimaging features as intermediate phenotypes. Joint modeling of neuroimaging and genetics could provide complementary information and thus has the potential to provide further insight into the pathophysiology of PD. Our model included newly found neuroimaging features and genetic variants that need further investigation.
Feature-oriented regional modeling and simulations in the Gulf of Maine and Georges Bank
NASA Astrophysics Data System (ADS)
Gangopadhyay, Avijit; Robinson, Allan R.; Haley, Patrick J.; Leslie, Wayne G.; Lozano, Carlos J.; Bisagni, James J.; Yu, Zhitao
2003-03-01
The multiscale synoptic circulation system in the Gulf of Maine and Georges Bank (GOMGB) region is presented using a feature-oriented approach. Prevalent synoptic circulation structures, or 'features', are identified from previous observational studies. These features include the buoyancy-driven Maine Coastal Current, the Georges Bank anticyclonic frontal circulation system, the basin-scale cyclonic gyres (Jordan, Georges and Wilkinson), the deep inflow through the Northeast Channel (NEC), the shallow outflow via the Great South Channel (GSC), and the shelf-slope front (SSF). Their synoptic water-mass (T-S) structures are characterized and parameterized in a generalized formulation to develop temperature-salinity feature models. A synoptic initialization scheme for feature-oriented regional modeling and simulation (FORMS) of the circulation in the coastal-to-deep region of the GOMGB system is then developed. First, the temperature and salinity feature-model profiles are placed on a regional circulation template and then objectively analyzed with appropriate background climatology in the coastal region. Furthermore, these fields are melded with adjacent deep-ocean regional circulation (Gulf Stream Meander and Ring region) along and across the SSF. These initialization fields are then used for dynamical simulations via the primitive equation model. Simulation results are analyzed to calibrate the multiparameter feature-oriented modeling system. Experimental short-term synoptic simulations are presented for multiple resolutions in different regions with and without atmospheric forcing. The presented 'generic and portable' methodology demonstrates the potential of applying similar FORMS in many other regions of the Global Coastal Ocean.
NASA Astrophysics Data System (ADS)
Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2017-03-01
Reducing the overdiagnosis and overtreatment associated with ductal carcinoma in situ (DCIS) requires accurate prediction of invasive potential at cancer screening. In this work, we investigated the utility of pre-operative histologic and mammographic features to predict upstaging of DCIS. The goal was to provide an intentionally conservative baseline performance using readily available data from radiologists and pathologists and only linear models. We conducted a retrospective analysis of 99 patients with DCIS, of whom 25 were upstaged to invasive cancer at the time of definitive surgery. Pre-operative factors, including histologic features extracted from stereotactic core needle biopsy (SCNB) reports and mammographic features annotated by an expert breast radiologist, were investigated with statistical analysis. Furthermore, we built classification models based on those features in an attempt to predict the presence of an occult invasive component in DCIS, with generalization performance assessed by receiver operating characteristic (ROC) curve analysis. Histologic features, including nuclear grade and DCIS subtype, did not show statistically significant differences between cases with pure DCIS and those with DCIS plus invasive disease. However, three mammographic features, i.e., the major axis length of the DCIS lesion, the BI-RADS level of suspicion, and the radiologist's assessment, did achieve statistical significance. Using those three features as input, a linear discriminant model was able to distinguish patients with DCIS plus invasive disease from those with pure DCIS, with an AUC-ROC of 0.62. Overall, mammograms used for breast screening contain useful information that can be perceived by radiologists and help predict occult invasive components in DCIS.
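A minimal sketch of the classification step, assuming the three significant mammographic features are already tabulated; the synthetic data below only mirror the cohort's size and upstaging rate, not its actual values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 99
y = np.zeros(n, dtype=int); y[:25] = 1   # 25 of 99 upstaged, as in the cohort
rng.shuffle(y)
# Stand-ins for lesion major axis length, BI-RADS level, radiologist assessment
X = rng.normal(size=(n, 3))
X[:, 0] += 0.8 * y                        # inject a weak signal for the demo

prob = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                         cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, prob), 3))
```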
3D for Geosciences: Interactive Tangibles and Virtual Models
NASA Astrophysics Data System (ADS)
Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.
2016-12-01
Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open-source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. A numeric process should be more powerful and efficient than the manual method, although it may lack useful features that GUIs provide. The digital models have applications in mining as an efficient means of performing topographic tasks such as measuring distances and areas. Additionally, it is possible to build simulation models, such as drilling templates, and to perform calculations in 3D space. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, precise 3D images of large surfaces, georeferenced to interactive maps, would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible 3D-printed models based on scientific information, as well as digital "worlds" that can be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of professionals and audiences.
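The manual .ply-to-numbers step can be as simple as the reader below, which pulls the vertex coordinates of an ASCII PLY file into a NumPy array for subsequent region matching; the file name is hypothetical and binary PLY variants are not handled.

```python
import numpy as np

def read_ply_vertices(path):
    """Read x, y, z vertex coordinates from an ASCII .ply file."""
    with open(path) as f:
        n_verts = 0
        for line in f:                       # scan the header
            if line.startswith("element vertex"):
                n_verts = int(line.split()[-1])
            if line.strip() == "end_header":
                break
        data = [next(f).split()[:3] for _ in range(n_verts)]
    return np.asarray(data, dtype=float)     # shape (n_verts, 3)

# pts = read_ply_vertices("scan.ply")        # hypothetical scan file
```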
A feature-based approach to modeling protein–protein interaction hot spots
Cho, Kyu-il; Kim, Dongsup; Lee, Doheon
2009-01-01
Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, drawn from different levels of information including structure, sequence and molecular interactions, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using a support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the type of protein complex, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top three, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π-related interactions, especially π···π interactions. PMID:19273533
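The selection-then-classification pipeline reads roughly as follows in scikit-learn terms; this is an illustrative sketch on synthetic data, not the authors' code, and the top-10 importance cutoff is an arbitrary stand-in for their tree-based subset search.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 54))   # 54 multifaceted interface-residue features
y = (X[:, 0] + X[:, 3] + rng.normal(size=300) > 0).astype(int)  # hot spot labels

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
selected = np.argsort(tree.feature_importances_)[::-1][:10]  # top-10 subset

svm = SVC(kernel="rbf", gamma="scale")
print("CV accuracy:", cross_val_score(svm, X[:, selected], y, cv=5).mean())
```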
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Rule, Michael E.; Vargas-Irwin, Carlos; Donoghue, John P.; Truccolo, Wilson
2015-01-01
Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ, θ, α, β) LFPs, and analytic-signal and amplitude-envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1 ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted for by movement parameters. PMID:26157365
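At 1 ms resolution, a discrete-time point-process GLM is essentially a logistic regression of the binary spike indicator on the covariates, so the comparison of kinematics-only against kinematics-plus-LFP models can be sketched as below. The data are synthetic, and in-sample AUC is shown for brevity, whereas the study used proper held-out evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
T = 20000                                  # 1 ms bins
lfp = rng.normal(size=(T, 8))              # band magnitude/phase features
kin = rng.normal(size=(T, 6))              # reach/grasp kinematic covariates

lam = 1 / (1 + np.exp(-(-4 + kin @ rng.normal(scale=0.5, size=6))))
spikes = (rng.random(T) < lam).astype(int) # draw from the conditional intensity

for name, X in [("kinematics", kin), ("kin + LFP", np.hstack([kin, lfp]))]:
    p = LogisticRegression(max_iter=1000).fit(X, spikes).predict_proba(X)[:, 1]
    print(name, "AUC:", round(roc_auc_score(spikes, p), 3))
```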
Extension of Companion Modeling Using Classification Learning
NASA Astrophysics Data System (ADS)
Torii, Daisuke; Bousquet, François; Ishida, Toru
Companion Modeling is a methodology for refining initial models to understand reality through a role-playing game (RPG) and a multiagent simulation. In this research, we propose a novel agent model construction methodology in which classification learning is applied to the RPG log data in Companion Modeling. This methodology enables systematic model construction that handles multiple parameters, independent of the modeler's ability. There are three problems in applying classification learning to the RPG log data: 1) it is difficult to gather enough data for the number of features because the cost of gathering data is high; 2) noise can affect the learning results because the amount of data may be insufficient; 3) the learning results should be explainable as a human decision-making model and should be recognized by the expert as reflecting reality. We realized an agent model construction system using the following two approaches: 1) using a feature selection method, the feature subset with the best prediction accuracy is identified, and the important features chosen by the expert are always included in this process; 2) the expert eliminates irrelevant features from the learning results after evaluating the learning model through a visualization of the results. Finally, using the RPG log data from the Companion Modeling of agricultural economics in northeastern Thailand, we confirm the capability of this methodology.
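The wrapper-style search with expert-mandated features can be sketched as follows; the data, the decision-tree classifier choice, and the added-subset size k are illustrative assumptions.

```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def best_subset(X, y, must_keep, candidates, k=2):
    """Search candidate features to add to expert-chosen ones (wrapper style)."""
    best, best_acc = None, -1.0
    for extra in combinations(candidates, k):
        cols = list(must_keep) + list(extra)
        acc = cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0),
                              X[:, cols], y, cv=5).mean()
        if acc > best_acc:
            best, best_acc = cols, acc
    return best, best_acc

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + X[:, 7] > 0).astype(int)
print(best_subset(X, y, must_keep=[0, 1], candidates=range(2, 10)))
```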
Nuclear physics with antiprotons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dover, C.B.
1984-01-01
Transparencies of an invited talk presented at the Nashville meeting of the American Physical Society, October 18-20, 1984, are included. Topics include: (1) salient features of two-body N anti N interactions (N anti N reversible NN, annihilation mechanisms (quark models), and optical model phenomenology); (2) anti N-nucleus interactions - elastic, inelastic, etc. (new cross section data, optical potentials, signatures of spin-isospin dependence of the N anti N force, and (anti p, p) reactions); and (3) anti N-nucleus annihilation processes (features of cascade or fluid dynamics calculations, searches for baryonium and other exotics, meson interferometry, and (anti p, NN) reactions). (WHK)
Improved pulmonary nodule classification utilizing quantitative lung parenchyma features.
Dilger, Samantha K N; Uthoff, Johanna; Judisch, Alexandra; Hammond, Emily; Mott, Sarah L; Smith, Brian J; Newell, John D; Hoffman, Eric A; Sieren, Jessica C
2015-10-01
Current computer-aided diagnosis (CAD) models for determining pulmonary nodule malignancy characterize nodule shape, density, and border in computed tomography (CT) data. Analyzing the lung parenchyma surrounding the nodule has been minimally explored. We hypothesize that improved nodule classification is achievable by including features quantified from the surrounding lung tissue. To explore this hypothesis, we have developed expanded quantitative CT feature extraction techniques, including volumetric Laws texture energy measures for the parenchyma and nodule, border descriptors using ray-casting and rubber-band straightening, histogram features characterizing densities, and global lung measurements. Using stepwise forward selection and leave-one-case-out cross-validation, a neural network was used for classification. When applied to 50 nodules (22 malignant and 28 benign) from high-resolution CT scans, 52 features (8 nodule, 39 parenchymal, and 5 global) were statistically significant. Nodule-only features yielded an area under the ROC curve of 0.918 (including nodule size) and 0.872 (excluding nodule size). Performance was improved through inclusion of parenchymal (0.938) and global features (0.932). These results show a trend toward increased performance when the parenchyma is included, coupled with the large number of significant parenchymal features that support our hypothesis: the pulmonary parenchyma is influenced differentially by malignant versus benign nodules, assisting CAD-based nodule characterizations.
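Laws texture energy measures, one of the parenchymal feature families above, are built by convolving with separable kernels and then averaging local response magnitude. The sketch below shows the classic 2D version on a synthetic patch (the study used volumetric 3D variants); the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([1, 4, 6, 4, 1], float)     # level
E5 = np.array([-1, -2, 0, 2, 1], float)   # edge
S5 = np.array([-1, 0, 2, 0, -1], float)   # spot

def laws_energy(img, k1, k2, win=15):
    kernel = np.outer(k1, k2)             # separable 2D Laws kernel
    resp = convolve(img.astype(float), kernel, mode="reflect")
    return uniform_filter(np.abs(resp), size=win)  # local texture energy map

img = np.random.default_rng(5).normal(size=(64, 64))   # stand-in CT patch
e5l5 = laws_energy(img, E5, L5)
print(e5l5.mean(), e5l5.std())            # summary statistics as features
```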
Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A
2014-01-01
Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a wide range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables, or dimensions, that measure distinct aspects of the emotion. One of the most common such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions of these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach that includes features from the electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a stage of feature selection developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVMs. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for arousal and valence discrimination are the EEG, Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
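The RFE stage can be sketched with a linear-kernel support vector regressor, whose coefficients drive the recursive elimination. scikit-learn's SVR is single-output, so this shows one emotional dimension at a time rather than the paper's multiple-output regressor; data and subset size are synthetic stand-ins.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 30))   # stand-ins for EEG/EOG/EMG/GSR features
arousal = X[:, 0] - X[:, 5] + rng.normal(scale=0.3, size=200)

rfe = RFE(SVR(kernel="linear"), n_features_to_select=8).fit(X, arousal)
print("kept feature indices:", np.flatnonzero(rfe.support_))
```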
Bremigan, M.T.; Soranno, P.A.; Gonzalez, M.J.; Bunnell, D.B.; Arend, K.K.; Renwick, W.H.; Stein, R.A.; Vanni, M.J.
2008-01-01
Although effects of land use/cover on nutrient concentrations in aquatic systems are well known, half or more of the variation in nutrient concentration remains unexplained by land use/cover alone. Hydrogeomorphic (HGM) landscape features can explain much remaining variation and influence food web interactions. To explore complex linkages among land use/cover, HGM features, reservoir productivity, and food webs, we sampled 11 Ohio reservoirs, ranging broadly in agricultural catchment land use/cover, for 3 years. We hypothesized that HGM features mediate the bottom-up effects of land use/cover on reservoir productivity, chlorophyll a, zooplankton, and recruitment of gizzard shad, an omnivorous fish species common throughout southeastern U.S. reservoirs and capable of exerting strong effects on food web and nutrient dynamics. We tested specific hypotheses using a model selection approach. Percent variation explained was highest for total nitrogen (R2 = 0.92), moderately high for total phosphorus, chlorophyll a, and rotifer biomass (R2 = 0.57 to 0.67), relatively low for crustacean zooplankton biomass and larval gizzard shad hatch abundance (R2 = 0.43 and 0.42), and high for larval gizzard shad survivor abundance (R2 = 0.79). The trophic status models included agricultural land use/cover and an HGM predictor, whereas the zooplankton models had few HGM predictors. The larval gizzard shad models had the highest complexity, including more than one HGM feature and food web components. We demonstrate the importance of integrating land use/cover, HGM features, and food web interactions to investigate critical interactions and feedbacks among physical, chemical, and biological components of linked land-water ecosystems.
THE U.S. ENVIRONMENTAL PROTECTION AGENCY VERSION OF POSITIVE MATRIX FACTORIZATION
The abstract describes some of the special features of the EPA's version of Positive Matrix Factorization that is freely distributed. Features include descriptions of the Graphical User Interface, an approach for estimating errors in the modeled solutions, and future development...
WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING
A combination of lectures, demonstrations, and hands-on excercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...
Consistency relations for sharp features in the primordial spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris
We study the generation of sharp features in the primordial spectra within the framework of effective field theory of inflation, wherein curvature perturbations are the consequence of the dynamics of a single scalar degree of freedom. We identify two sources in the generation of features: rapid variations of the sound speed c_s (at which curvature fluctuations propagate) and rapid variations of the expansion rate H during inflation. With this in mind, we propose a non-trivial relation linking these two quantities that allows us to study the generation of sharp features in realistic scenarios where features are the result of the simultaneous occurrence of these two sources. This relation depends on a single parameter with a value determined by the particular model (and its numerical input) responsible for the rapidly varying background. As a consequence, we find a one-parameter consistency relation between the shape and size of features in the bispectrum and features in the power spectrum. To substantiate this result, we discuss several examples of models for which this one-parameter relation (between c_s and H) holds, including models in which features in the spectra are both sudden and resonant.
A Physiologically-based Model for Methylmercury Uptake and Accumulation in Female American Kestrels
A physiologically-based model was developed to describe the uptake, distribution, and elimination of methylmercury in female American Kestrels (Falco sparverius). The model was adapted from established models for methylmercury in rodents. Features unique to the model include meth...
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for registering 3D maxillofacial models, including facial surface and skull models. Our proposed registration algorithm achieves a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, and has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
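A close analogue of the three-step pipeline can be assembled with the Open3D library (assumed available; exact signatures vary slightly across versions): voxel downsampling stands in for 3D-SIFT keypoint detection, and Open3D's RANSAC-based feature matching stands in for SAC-IA. File paths and the voxel size are hypothetical.

```python
import open3d as o3d

reg = o3d.pipelines.registration

def register(source_path, target_path, voxel=2.0):
    src = o3d.io.read_point_cloud(source_path)   # hypothetical scan files
    tgt = o3d.io.read_point_cloud(target_path)
    src, tgt = src.voxel_down_sample(voxel), tgt.voxel_down_sample(voxel)
    for pcd in (src, tgt):                        # normals needed for FPFH
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = [reg.compute_fpfh_feature(
                p, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel,
                                                        max_nn=100))
            for p in (src, tgt)]
    coarse = reg.registration_ransac_based_on_feature_matching(
        src, tgt, fpfh[0], fpfh[1], True, 1.5 * voxel,
        reg.TransformationEstimationPointToPoint(False), 3, [],
        reg.RANSACConvergenceCriteria(100000, 0.999))
    fine = reg.registration_icp(src, tgt, 0.8 * voxel, coarse.transformation,
                                reg.TransformationEstimationPointToPoint())
    return fine.transformation
```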
Meeting in Turkey: WASP Transport Modeling and WASP Ecological Modeling
A combination of lectures, demonstrations, and hands-on excercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...
Meeting in Korea: WASP Transport Modeling and WASP Ecological Modeling
A combination of lectures, demonstrations, and hands-on excercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...
A review of multimodel superensemble forecasting for weather, seasonal climate, and hurricanes
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Kumar, V.; Simon, A.; Bhardwaj, A.; Ghosh, T.; Ross, R.
2016-06-01
This review provides a summary of work in the area of ensemble forecasts for weather, climate, oceans, and hurricanes. This includes a combination of multiple forecast model results that does not dwell on the ensemble mean but uses a unique collective bias-reduction procedure. A theoretical framework for this procedure is provided, utilizing a suite of models constructed from the well-known Lorenz low-order nonlinear system. A tutorial that includes a walk-through table and illustrates the inner workings of the multimodel superensemble's principle is provided. Systematic errors in a single deterministic model arise from a host of features that range from the model's initial state (data assimilation), resolution, representation of physics, dynamics, and ocean processes, to local aspects of orography, water bodies, and details of the land surface. Models, in their diversity of representation of such features, end up leaving unique signatures of systematic errors. The multimodel superensemble utilizes as many as 10 million weights to take into account the bias errors arising from these diverse features of multimodels. The design of a single deterministic forecast model that utilizes multiple features from the large volume of weights is provided here. This has led to a better understanding of error growth and collective bias reduction for several of the physical parameterizations within diverse models, such as cumulus convection, planetary boundary layer physics, and radiative transfer. A number of examples of weather, seasonal climate, hurricane, and subsurface oceanic forecast skills of member models, the ensemble mean, and the superensemble are provided.
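The collective bias-reduction idea, computing regression weights for member-model anomalies against observed anomalies over a training phase and applying them in the forecast phase, can be sketched as below with synthetic members; the single weight per model is a simplification of the full pointwise-weight scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
T_train, T_fcst, M = 200, 50, 4
truth = np.sin(np.linspace(0, 20, T_train + T_fcst))
members = (truth[None, :]
           + rng.normal(scale=0.3, size=(M, T_train + T_fcst))
           + rng.normal(scale=0.5, size=(M, 1)))     # biased member models

obs_mean = truth[:T_train].mean()
mem_mean = members[:, :T_train].mean(axis=1, keepdims=True)

A = (members[:, :T_train] - mem_mean).T              # member anomalies
b = truth[:T_train] - obs_mean
w, *_ = np.linalg.lstsq(A, b, rcond=None)            # regression weights

fcst = obs_mean + (members[:, T_train:] - mem_mean).T @ w
rmse = lambda x: np.sqrt(np.mean((x - truth[T_train:]) ** 2))
print("superensemble RMSE:", rmse(fcst))
print("ensemble-mean RMSE:", rmse(members[:, T_train:].mean(axis=0)))
```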
Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin
2014-04-28
It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5 Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637 Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
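One common convention for collapsing per-residue deviations into a single global quality score is the S-score transform shown below; this is an assumed stand-in, as SMOQ's exact conversion formula is the one defined in the paper.

```python
import numpy as np

def local_to_global(deviations, d0=3.8):
    """Collapse per-residue predicted deviations (in Angstroms) into one
    global score via the S-score transform (SMOQ's exact formula may differ)."""
    s = 1.0 / (1.0 + (np.asarray(deviations, dtype=float) / d0) ** 2)
    return float(s.mean())

print(local_to_global([1.2, 2.6, 8.0, 0.9]))  # closer to 1.0 = better model
```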
EEG-based Affect and Workload Recognition in a Virtual Driving Environment for ASD Intervention
Wade, Joshua W.; Key, Alexandra P.; Warren, Zachary E.; Sarkar, Nilanjan
2017-01-01
Objective: To build group-level classification models capable of recognizing affective states and mental workload of individuals with autism spectrum disorder (ASD) during driving skill training. Methods: Twenty adolescents with ASD participated in a six-session experiment based on a virtual reality driving simulator, during which their electroencephalogram (EEG) data were recorded alongside driving events and a therapist's rating of their affective states and mental workload. Five feature generation approaches, including statistical features, fractal dimension features, higher order crossings (HOC)-based features, power features from frequency bands, and power features from bins (Δf = 2 Hz), were applied to extract relevant features. Individual differences were removed with a two-step feature calibration method. Finally, binary classification results based on the k-nearest neighbors algorithm and a univariate feature selection method were evaluated by leave-one-subject-out nested cross-validation to compare feature types and identify discriminative features. Results: The best classification results were achieved using power features from bins for engagement (0.95) and boredom (0.78), and HOC-based features for enjoyment (0.90), frustration (0.88), and workload (0.86). Conclusion: Offline EEG-based group-level classification models are feasible for recognizing binary low and high intensity of affect and workload of individuals with ASD in the context of driving. However, while promising, the applicability of the models in an online adaptive driving task requires further development. Significance: The developed models provide a basis for an EEG-based passive brain-computer interface system that has the potential to benefit individuals with ASD through an affect- and workload-based individualized driving skill training intervention. PMID:28422647
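The best-performing feature type, power in 2 Hz bins, followed by subject-grouped cross-validation, can be sketched as follows; signal parameters, labels, and the session layout are synthetic stand-ins.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def bin_powers(eeg, fs=256, df=2.0, fmax=40.0):
    """Mean spectral power in consecutive df-wide bins up to fmax."""
    f, p = welch(eeg, fs=fs, nperseg=fs * 2)
    edges = np.arange(0, fmax + df, df)
    return np.array([p[(f >= lo) & (f < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(8)
X = np.vstack([bin_powers(rng.normal(size=2560)) for _ in range(120)])
y = rng.integers(0, 2, 120)                 # low/high engagement labels
subjects = np.repeat(np.arange(20), 6)      # 20 participants, 6 sessions each

acc = cross_val_score(KNeighborsClassifier(5), X, y,
                      cv=LeaveOneGroupOut(), groups=subjects)
print("LOSO accuracy:", acc.mean())
```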
Evaluation of Sexual Communication Message Strategies
2011-01-01
Parent-child communication about sex is an important proximal reproductive health outcome. But while campaigns to promote it, such as the Parents Speak Up National Campaign (PSUNC), have been effective, little is known about how messages influence parental cognitions and behavior. This study examines which message features explain responses to sexual communication messages. We content-analyzed four PSUNC ads to identify specific, measurable message and advertising execution features. We then developed quantitative measures of those features, including message strategies, marketing strategies, and voice and other stylistic features, and merged the resulting data into a dataset drawn from a national media tracking survey of the campaign. Finally, we fit multivariable logistic regression models to identify relationships between message content and ad reactions/receptivity, and between ad reactions/receptivity and parents' cognitions related to sexual communication included in the campaign's conceptual model. We found that overall parents were highly receptive to the PSUNC ads. We did not find significant associations between message content and ad reactions/receptivity. However, we found that reactions/receptivity to specific PSUNC ads were associated with increased norms, self-efficacy, and short- and long-term expectations about parent-child sexual communication, as theorized in the conceptual model. This study extends previous research and methods to analyze message content and reactions/receptivity. The results confirm and extend previous PSUNC campaign evaluation and provide further evidence for the conceptual model. Future research should examine additional message content features and the effects of reactions/receptivity. PMID:21599875
Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment
NASA Astrophysics Data System (ADS)
Bhagawati, Dwipen
2001-07-01
Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software, and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications among them. In Geometronics, photogrammetry and remote sensing are generally used for managing spatial data inventories; VR technology is also well suited to this task. This research demonstrates the usefulness of VR technology for inventory management, taking roadside features as a case study. Managing a roadside feature inventory involves positioning and visualizing the features. This research developed a methodology demonstrating how photogrammetric principles can be used to position features from video-logging images and GPS camera positions, and how image analysis can help produce appropriate textures for building the VR scene, which can then be visualized in a Cave Automatic Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate different approaches to modeling the VR scene. A simulated highway scene was implemented with a brute-force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing features inside the CAVE display, navigating the scene, and performing limited geographic analysis. The second approach demonstrates the creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic, with textures from the real site mapped onto the geometry of the scene. Remote sensing and digital image processing techniques were used to texture the roadway features in this scene.
A two component model for thermal emission from organic grains in Comet Halley
NASA Technical Reports Server (NTRS)
Chyba, Christopher; Sagan, Carl
1988-01-01
Observations of Comet Halley in the near infrared reveal a triple-peaked emission feature near 3.4 micrometers, characteristic of C-H stretching in hydrocarbons. A variety of plausible cometary materials exhibit these features, including the organic residues of irradiated candidate cometary ices (such as the residue of irradiated methane ice clathrate) and polycyclic aromatic hydrocarbons. Indeed, any molecule containing -CH3 and -CH2 alkanes will emit at 3.4 micrometers under suitable conditions. Therefore, tentative identifications must rest on additional evidence, including a plausible account of the origins of the organic material, a plausible model for the infrared emission of this material, and a demonstration that this conjunction of material and model not only matches the 3 to 4 micrometer spectrum, but also does not yield additional emission features where none is observed. In the case of the residue of irradiated low-occupancy methane ice clathrate, it is argued that the laboratory synthesis of the organic residue closely simulates the radiation processing experienced by Comet Halley.
Patient-derived xenografts as preclinical neuroblastoma models.
Braekeveldt, Noémie; Bexell, Daniel
2018-05-01
The prognosis for children with high-risk neuroblastoma is often poor and survivors can suffer from severe side effects. Predictive preclinical models and novel therapeutic strategies for high-risk disease are therefore a clinical imperative. However, conventional cancer cell line-derived xenografts can deviate substantially from patient tumors in terms of their molecular and phenotypic features. Patient-derived xenografts (PDXs) recapitulate many biologically and clinically relevant features of human cancers. Importantly, PDXs can closely parallel clinical features and outcome and serve as excellent models for biomarker and preclinical drug development. Here, we review progress in and applications of neuroblastoma PDX models. Neuroblastoma orthotopic PDXs share the molecular characteristics, neuroblastoma markers, invasive properties and tumor stroma of aggressive patient tumors and retain spontaneous metastatic capacity to distant organs including bone marrow. The recent identification of genomic changes in relapsed neuroblastomas opens up opportunities to target treatment-resistant tumors in well-characterized neuroblastoma PDXs. We highlight and discuss the features and various sources of neuroblastoma PDXs, methodological considerations when establishing neuroblastoma PDXs, in vitro 3D models, current limitations of PDX models and their application to preclinical drug testing.
Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming
2014-12-01
Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including the genetic algorithm (GA), the simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized to select an important subset from a total of 16 clinical features. These feature selection methods were combined with support vector machines (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation yielded average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can flag patients at high risk of recurrence, who should be closely followed up after complete RFA treatment.
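Of the five selection strategies, the simulated-annealing wrapper is the easiest to sketch: flip one feature at a time and keep the move if cross-validated SVM accuracy improves, or occasionally when it worsens, with temperature-controlled probability. The sizes mirror the cohort (83 patients, 16 features); everything else below is synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def sa_select(X, y, n_iter=200, T0=0.05, seed=0):
    """Simulated-annealing wrapper around a cross-validated SVM score."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[1]) < 0.5
    def score(m):
        return cross_val_score(SVC(), X[:, m], y, cv=5).mean() if m.any() else 0.0
    cur = score(mask)
    for i in range(n_iter):
        T = T0 * (1 - i / n_iter)                # cooling schedule
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= True   # flip one feature
        new = score(cand)
        if new > cur or rng.random() < np.exp((new - cur) / max(T, 1e-9)):
            mask, cur = cand, new
    return mask, cur

rng = np.random.default_rng(9)
X = rng.normal(size=(83, 16))
y = (X[:, 2] + X[:, 7] > 0).astype(int)
mask, acc = sa_select(X, y)
print("kept:", np.flatnonzero(mask), "CV acc:", round(acc, 3))
```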
Liu, Tongtong; Ge, Xifeng; Yu, Jinhua; Guo, Yi; Wang, Yuanyuan; Wang, Wenping; Cui, Ligang
2018-06-21
B-mode ultrasound (B-US) and strain elastography ultrasound (SE-US) images have the potential to distinguish thyroid tumors with different lymph node (LN) status. The purpose of our study was to investigate whether multi-modality images combining B-US and SE-US can improve the discrimination of thyroid tumors with LN metastasis using a radiomics approach. Ultrasound (US) images, including B-US and SE-US images, of 75 papillary thyroid carcinoma (PTC) cases were retrospectively collected. A radiomics approach was developed in this study to estimate the LN status of PTC patients. The approach included image segmentation, quantitative feature extraction, feature selection and classification. Three feature sets were extracted, from B-US, from SE-US, and from the multi-modality combination of B-US and SE-US, to evaluate the contribution of the different modalities. A total of 684 radiomics features were extracted in our study. We used a sparse-representation-coefficient-based feature selection method with 10 bootstrap repetitions to reduce the dimension of the feature sets. A support vector machine with leave-one-out cross-validation was used to build the model for estimating LN status. Using features extracted from both B-US and SE-US, the radiomics-based model produced an area under the receiver operating characteristic curve (AUC) of 0.90, accuracy (ACC) of 0.85, sensitivity (SENS) of 0.77 and specificity (SPEC) of 0.88, which was better than using features extracted from B-US or SE-US separately. Multi-modality images provided more information in this radiomics study. The combined use of B-US and SE-US could improve LN metastasis estimation accuracy for PTC patients.
SU-F-R-14: PET Based Radiomics to Predict Outcomes in Patients with Hodgkin Lymphoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J; Aristophanous, M; Akhtari, M
Purpose: To identify PET-based radiomics features associated with high refractory/relapsed disease risk for Hodgkin lymphoma patients. Methods: A total of 251 Hodgkin lymphoma patients, including 19 primary refractory and 9 relapsed patients, were investigated. All patients underwent an initial pre-treatment diagnostic FDG PET/CT scan. All cancerous lymph node regions (ROIs) were delineated by an experienced physician by thresholding each volume of disease in the anatomical regions to SUV > 2.5. We extracted 122 image features and evaluated the effect of ROI selection (the largest ROI, the ROI with highest mean SUV, merged ROIs, and a single anatomic region [e.g., mediastinum]) on classification accuracy. Random forest was used as a classifier and ROC analysis was used to assess the relationship between selected features and patients' outcome status. Results: Each patient had between 1 and 9 separate ROIs, with much intra-patient variability in PET features. The best model, which used features from a single anatomic region (the mediastinal ROI, only volumes > 5 cc: 169 patients with 12 primary refractory), had a classification accuracy of 80.5% for primary refractory disease. The top five features, based on the Gini index, consist of shape features (max 3D diameter and volume) and texture features (correlation and informational measures of correlation 1 and 2). In the ROC analysis, sensitivity and specificity of the best model were 0.92 and 0.80, respectively. The area under the ROC curve (AUC) and the accuracy were 0.86 and 0.86, respectively. The classification accuracy was less than 60% for the other ROI models or when ROIs smaller than 5 cc were included. Conclusion: This study showed that PET-based radiomics features from the mediastinal lymph region are associated with primary refractory disease and therefore may play an important role in predicting outcomes in Hodgkin lymphoma patients. These features could be additive beyond baseline tumor and clinical characteristics, and may warrant more aggressive treatment.
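The classifier-plus-ROC workflow can be sketched as below; the synthetic matrix only mirrors the reported dimensions (169 patients, 122 features, 12 refractory cases), and class weighting is an assumption for handling the rare outcome.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(10)
n, d = 169, 122                         # mediastinal-ROI cohort dimensions
y = np.zeros(n, dtype=int); y[:12] = 1  # 12 primary refractory cases
rng.shuffle(y)
X = rng.normal(size=(n, d))
X[y == 1] += 0.3                        # inject a weak signal for the demo

rf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                            random_state=0)
prob = cross_val_predict(rf, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, prob), 3))
rf.fit(X, y)
print("top-5 features by Gini importance:",
      np.argsort(rf.feature_importances_)[::-1][:5])
```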
Doshi, Ankur M; Ream, Justin M; Kierans, Andrea S; Bilbily, Matthew; Rusinek, Henry; Huang, William C; Chandarana, Hersh
2016-03-01
The purpose of this study was to determine whether qualitative and quantitative MRI feature analysis is useful for differentiating type 1 from type 2 papillary renal cell carcinoma (PRCC). This retrospective study included 21 type 1 and 17 type 2 PRCCs evaluated with preoperative MRI. Two radiologists independently evaluated various qualitative features, including signal intensity, heterogeneity, and margin. For the quantitative analysis, a radiology fellow and a medical student independently drew 3D volumes of interest over the entire tumor on T2-weighted HASTE images, apparent diffusion coefficient parametric maps, and nephrographic-phase contrast-enhanced MR images to derive first-order texture metrics. Qualitative and quantitative features were compared between the groups. For both readers, qualitative features with greater frequency in type 2 PRCC included heterogeneous enhancement, indistinct margin, and T2 heterogeneity (all p < 0.035). Indistinct margins and heterogeneous enhancement were independent predictors (AUC, 0.822). Quantitative analysis revealed that apparent diffusion coefficient, HASTE, and contrast-enhanced entropy were greater in type 2 PRCC (p < 0.05; AUC, 0.682-0.716). A combined quantitative and qualitative model had an AUC of 0.859. Qualitative features within the model had interreader concordance of 84-95%, and the quantitative data had intraclass correlation coefficients of 0.873-0.961. Qualitative and quantitative features can help discriminate between type 1 and type 2 PRCC. Quantitative analysis may capture useful information that complements the qualitative appearance while benefiting from high interobserver agreement.
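First-order entropy, the texture metric that separated the groups here, is just the Shannon entropy of the voxel-intensity histogram inside the volume of interest; the bin count below is an arbitrary choice and the two synthetic distributions only caricature homogeneous versus heterogeneous tumors.

```python
import numpy as np

def first_order_entropy(roi_values, bins=64):
    """Shannon entropy (bits) of the intensity histogram within a 3D VOI."""
    hist, _ = np.histogram(roi_values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(11)
homogeneous = rng.normal(100, 5, 5000)                 # type 1-like: tight histogram
heterogeneous = np.concatenate([rng.normal(80, 20, 2500),
                                rng.normal(140, 25, 2500)])  # type 2-like
print(first_order_entropy(homogeneous), first_order_entropy(heterogeneous))
```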
Wu, Haifeng; Sun, Tao; Wang, Jingjing; Li, Xia; Wang, Wei; Huo, Da; Lv, Pingxin; He, Wen; Wang, Keyang; Guo, Xiuhua
2013-08-01
The objective of this study was to investigate a method combining radiological and textural features for differentiating malignant from benign solitary pulmonary nodules on computed tomography. Features including 13 gray-level co-occurrence matrix (GLCM) textural features and 12 radiological features were extracted from 2,117 CT slices from 202 patients (116 malignant and 86 benign). Lasso-type regularization applied to a nonlinear regression model was used to select predictive features, and a BP artificial neural network was used to build the diagnostic model. Eight radiological and two textural features were obtained after the Lasso-type regularization procedure. The 12 radiological features alone reached an area under the ROC curve (AUC) of 0.84 in differentiating between malignant and benign lesions. The 10 selected features improved the AUC to 0.91. The evaluation results showed that combining radiological and textural features appears to be more effective in distinguishing malignant from benign solitary pulmonary nodules on computed tomography.
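The GLCM textural features can be computed with scikit-image as sketched below (the functions are spelled greycomatrix/greycoprops in older releases); the quantization level, offsets, and random patch are illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(12)
roi = rng.integers(0, 64, size=(48, 48), dtype=np.uint8)   # quantized CT ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
feats = {p: graycoprops(glcm, p).mean()
         for p in ("contrast", "correlation", "energy", "homogeneity")}
print(feats)   # four of the classic Haralick-style GLCM features
```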
Documentation of a digital spatial data base for hydrologic investigations, Broward County, Florida
Sonenshein, R.S.
1992-01-01
Geographic information systems have become an important tool in planning for the protection and development of natural resources, including ground water and surface water. A digital spatial data base consisting of 18 data layers that can be accessed by a geographic information system was developed for Broward County, Florida. Five computer programs, including one that can be used to create documentation files for each data layer and four that can be used to create data layers from data files not already in geographic information system format, were also developed. Four types of data layers have been developed. Data layers for manmade features include major roads, municipal boundaries, the public land-survey section grid, land use, and underground storage tank facilities. The data layer for topographic features consists of surveyed point land-surface elevations. Data layers for hydrologic features include surface-water and rainfall data-collection stations, surface-water bodies, water-control district boundaries, and water-management basins. Data layers for hydrogeologic features include soil associations, transmissivity polygons, hydrogeologic unit depths, and a finite-difference model grid for south-central Broward County. Each data layer is documented as to the extent of the features, number of features, scale, data sources, and a description of the attribute tables where applicable.
Form drag in rivers due to small-scale natural topographic features: 1. Regular sequences
Kean, J.W.; Smith, J.D.
2006-01-01
Small-scale topographic features are commonly found on the boundaries of natural rivers, streams, and floodplains. A simple method for determining the form drag on these features is presented, and the results of this model are compared to laboratory measurements. The roughness elements are modeled as Gaussian-shaped features defined in terms of three parameters: a protrusion height, H; a streamwise length scale, σ; and a spacing between crests, λ. This shape is shown to be a good approximation to a wide variety of natural topographic bank features. The form drag on an individual roughness element embedded in a series of identical elements is determined using the drag coefficient of the individual element and a reference velocity that includes the effects of roughness elements further upstream. In addition to calculating the drag on each element, the model determines the spatially averaged total stress, skin friction stress, and roughness height of the boundary. The effects of bank roughness on patterns of velocity and boundary shear stress are determined by combining the form drag model with a channel flow model. The combined model shows that drag on small-scale topographic features substantially alters the near-bank flow field. These methods can be used to improve predictions of flow resistance in rivers and to form the basis for fully predictive (no empirically adjusted parameters) channel flow models. They also provide a foundation for calculating the near-bank boundary shear stress fields necessary for determining rates of sediment transport and lateral erosion.
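A minimal sketch of the element geometry and the spatially averaged form-drag stress, assuming the usual bluff-body form in which drag per element scales with frontal height and the square of a reference velocity; the drag coefficient and parameter values are placeholders, not the paper's calibrated quantities.

```python
import numpy as np

def gaussian_profile(x, H=0.5, sigma=1.0):
    """Gaussian-shaped roughness element (heights and x in meters)."""
    return H * np.exp(-x**2 / (2 * sigma**2))

def form_drag_stress(H, lam, u_ref, C_D=0.8, rho=1000.0):
    """Spatially averaged form-drag stress: drag per element spread over the
    crest spacing lam. C_D and the H scaling follow the bluff-body form."""
    drag_per_width = 0.5 * rho * C_D * H * u_ref**2   # N per meter of bank
    return drag_per_width / lam                        # N m^-2

print(form_drag_stress(H=0.3, lam=5.0, u_ref=0.6))
```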
Computational Identification of Genomic Features That Influence 3D Chromatin Domain Formation.
Mourad, Raphaël; Cuvier, Olivier
2016-05-01
Recent advances in long-range Hi-C contact mapping have revealed the importance of the 3D structure of chromosomes in gene expression. A current challenge is to identify the key molecular drivers of this 3D structure. Several genomic features, such as architectural proteins and functional elements, were shown to be enriched at topological domain borders using classical enrichment tests. Here we propose multiple logistic regression to identify those genomic features that positively or negatively influence domain border establishment or maintenance. The model is flexible, and can account for statistical interactions among multiple genomic features. Using both simulated and real data, we show that our model outperforms enrichment tests and non-parametric models, such as random forests, for the identification of genomic features that influence domain borders. Using Drosophila Hi-C data at a very high resolution of 1 kb, our model suggests that, among architectural proteins, BEAF-32 and CP190 are the main positive drivers of 3D domain borders. In humans, our model identifies well-known architectural proteins CTCF and cohesin, as well as ZNF143 and Polycomb group proteins as positive drivers of domain borders. The model also reveals the existence of several negative drivers that counteract the presence of domain borders including P300, RXRA, BCL11A and ELK1.
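The modeling step amounts to a multiple logistic regression of border presence on genomic features, with interaction terms where needed. A minimal sketch with synthetic occupancy signals, using the statsmodels formula interface, follows; the protein names label columns only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n = 2000
df = pd.DataFrame({"BEAF32": rng.random(n),
                   "CP190": rng.random(n),
                   "P300": rng.random(n)})
# Synthetic ground truth: positive drivers with an interaction, one negative driver
logit_p = -2 + 3 * df.BEAF32 + 2 * df.CP190 + 2 * df.BEAF32 * df.CP190 - 1.5 * df.P300
df["border"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("border ~ BEAF32 * CP190 + P300", data=df).fit(disp=0)
print(fit.params)   # signs recover positive/negative drivers and the interaction
```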
Functional constraints on tooth morphology in carnivorous mammals
2012-01-01
Background The range of potential morphologies resulting from evolution is limited by complex interacting processes, ranging from development to function. Quantifying these interactions is important for understanding adaptation and convergent evolution. Using three-dimensional reconstructions of carnivoran and dasyuromorph tooth rows, we compared statistical models of the relationship between tooth row shape and the opposing tooth row, a static feature, as well as measures of mandibular motion during chewing (occlusion), which are kinetic features. This is a new approach to quantifying functional integration because we use measures of movement and displacement, such as the amount the mandible translates laterally during occlusion, as opposed to conventional morphological measures, such as mandible length and geometric landmarks. By sampling two distantly related groups of ecologically similar mammals, we study carnivorous mammals in general rather than a specific group of mammals. Results Statistical model comparisons demonstrate that the best performing models always include some measure of mandibular motion, indicating that functional and statistical models of tooth shape as purely a function of the opposing tooth row are too simple and that increased model complexity provides a better understanding of tooth form. The predictors of the best performing models always included the opposing tooth row shape and a relative linear measure of mandibular motion. Conclusions Our results provide quantitative support of long-standing hypotheses of tooth row shape as being influenced by mandibular motion in addition to the opposing tooth row. Additionally, this study illustrates the utility and necessity of including kinetic features in analyses of morphological integration. PMID:22899809
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, N.; Lawson, M.; Yu, Y. H.
WEC-Sim is a midfidelity numerical tool for modeling wave energy conversion devices. The code uses the MATLAB SimMechanics package to solve multibody dynamics and models wave interactions using hydrodynamic coefficients derived from frequency-domain boundary-element methods. This paper presents the new modeling features introduced in the latest release of WEC-Sim. The first feature discussed is the conversion of the fluid memory kernel to a state-space form. This enhancement offers a substantial computational benefit once hydrodynamic body-to-body coefficients are introduced and the number of interactions increases exponentially with each additional body. Additional features include the ability to calculate the wave-excitation forces based on the instantaneous incident wave angle, allowing the device to weathervane, as well as to import a user-defined wave elevation time series. A review of the hydrodynamic theory for each feature is provided and the successful implementation is verified using test cases.
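The state-space conversion replaces the radiation-force convolution integral with a small linear system. The sketch below assumes a kernel that is a sum of decaying exponentials, each contributing one state, and checks the state-space output against direct convolution; the kernel parameters are illustrative, and WEC-Sim itself performs this step in MATLAB/SimMechanics rather than Python.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Assume a radiation kernel K(t) = sum_i c_i * exp(-a_i * t); each exponential
# maps to one state, turning the convolution F = K * v into x' = Ax + Bv, F = Cx.
a = np.array([1.0, 4.0])
c = np.array([0.8, 0.3])
ss = StateSpace(np.diag(-a), np.ones((2, 1)), c.reshape(1, 2), np.zeros((1, 1)))

t = np.linspace(0, 10, 2001)
v = np.sin(2 * t)                                    # body velocity record
_, F_ss, _ = lsim(ss, v, t)                          # state-space evaluation

K = (c[:, None] * np.exp(-a[:, None] * t)).sum(0)    # kernel on the same grid
F_conv = np.convolve(K, v)[: t.size] * (t[1] - t[0]) # direct (rectangle-rule) convolution
print(np.max(np.abs(F_ss - F_conv)))                 # discretization error should be small
```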
Recent Developments on the Turbulence Modeling Resource Website (Invited)
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2015-01-01
The NASA Langley Turbulence Model Resource (TMR) website has been active for over five years. Its main goal of providing a one-stop, easily accessible internet site for up-to-date information on Reynolds-averaged Navier-Stokes turbulence models remains unchanged. In particular, the site strives to provide an easy way for users to verify their own implementations of widely-used turbulence models, and to compare the results from different models for a variety of simple unit problems covering a range of flow physics. Some new features have been recently added to the website. This paper documents the site's features, including recent developments, future plans, and open questions.
Toward the Institutionalization of Change. Working Paper No. 11.
ERIC Educational Resources Information Center
Wilson, Albert; Wilson, Donna
In connection with plans for the publication of an annual series of reports on the "Future State of the Union," conceptual problems of such an undertaking are explored and some of the features to be included are examined. Philosophical prerequisites discussed include a model of change; a cybernetic model; some social indicators for…
Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity
Beck, Cornelia; Neumann, Heiko
2011-01-01
Background The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraint rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of a MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings including temporally dynamic behaviour. PMID:21814543
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
...) Protection, Limit Engine Torque Loads for Sudden Engine Stoppage, and Design Roll Maneuver Requirement AGENCY... design features when compared to the state of technology envisioned in the airworthiness standards for transport category airplanes. These design features include limit engine torque loads for sudden engine...
Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong
2011-09-01
One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to the features of the sampling distribution. We tested the proposed method using two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test results show that our method outperforms any individual scoring function on both best-model selection and overall correlation between the predicted ranking and the actual ranking of structural quality.
A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.
Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A
2005-11-01
While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
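As a flavor of the Windkessel-type impedance models mentioned above, the three-element Windkessel (a characteristic impedance in series with a parallel compliance and peripheral resistance) reduces to a single ODE. A minimal Python sketch with illustrative parameter values; this is not the paper's LabVIEW implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C, Zc = 1.0, 1.5, 0.05            # peripheral resistance, compliance,
                                     # characteristic impedance (illustrative)

def aortic_flow(t):
    """Half-sine ejection over the first 0.3 s of a 0.8 s beat (stand-in)."""
    tb = t % 0.8
    return 300.0 * np.sin(np.pi * tb / 0.3) if tb < 0.3 else 0.0

def dP(t, p):
    # Distal (windkessel) pressure: C dP/dt = Q(t) - P/R
    return [(aortic_flow(t) - p[0] / R) / C]

sol = solve_ivp(dP, (0.0, 4.0), [80.0], max_step=1e-3)
p_input = sol.y[0] + Zc * np.array([aortic_flow(t) for t in sol.t])
```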
Directionality and volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
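The GARCH step is reproducible with standard tools. A sketch assuming the third-party `arch` package, with a placeholder series standing in for the residuals of a linear AR model fitted to one EEG record:

```python
import numpy as np
from arch import arch_model   # assumes the `arch` package is installed

rng = np.random.default_rng(0)
residuals = rng.standard_normal(4096)   # stand-in for AR-model residuals

# GARCH(1,1) on the residuals: volatility clustering shows up as
# significant alpha/beta terms.
res = arch_model(residuals, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(res.params)                       # omega, alpha[1], beta[1]
```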
Visual Saliency Detection Based on Multiscale Deep CNN Features.
Guanbin Li; Yizhou Yu
2016-11-01
Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. The penultimate layer of our neural network has been confirmed to be a discriminative high-level feature vector for saliency detection, which we call deep contrast feature. To generate a more robust feature, we integrate handcrafted low-level features with our deep contrast feature. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving the state-of-the-art performance on all public benchmarks, improving the F-measure by 6.12% and 10%, respectively, on the DUT-OMRON data set and our new data set (HKU-IS), and lowering the mean absolute error by 9% and 35.3%, respectively, on these two data sets.
Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction
NASA Astrophysics Data System (ADS)
Li, Hong; Luo, Ting; Xu, Haiyong
2017-06-01
Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
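The regional covariance descriptor itself is compact: stack one feature vector per pixel (2D color/texture features plus depth for S3D) and take the covariance across the region; descriptors can then be compared on the matrix manifold. A generic Python sketch, with feature choices that are placeholders rather than the paper's exact set:

```python
import numpy as np
from scipy.linalg import logm

def region_covariance(features):
    # features: (N, d) array, one d-dimensional vector per pixel in the
    # region, e.g. [L, a, b, gradient orientation, contrast, depth] for S3D.
    return np.cov(features, rowvar=False)

rng = np.random.default_rng(1)
C1 = region_covariance(rng.standard_normal((500, 6)))
C2 = region_covariance(rng.standard_normal((400, 6)))

# Covariance descriptors live on the SPD manifold; a log-Euclidean metric
# is one common way to compare them.
d = np.linalg.norm(logm(C1).real - logm(C2).real, ord="fro")
```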
NASA Astrophysics Data System (ADS)
DeForest, Craig; Seaton, Daniel B.; Darnell, John A.
2017-08-01
I present and demonstrate a new, general purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise-gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise. Processing image sequences through 3-D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time. This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.
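The core of the method can be caricatured in a few lines: transform an image cube to Fourier space, keep only components that rise above the expected noise floor (the "coherent" signal), and transform back. A deliberately simplified Python sketch; the published technique works on overlapping, apodized neighborhoods with a calibrated noise model, both omitted here:

```python
import numpy as np

def noise_gate_3d(cube, threshold):
    """Single-block sketch of 3D noise gating on a (t, y, x) image cube.

    Zeroes Fourier components whose magnitude falls below `threshold`
    times a crude estimate of the white-noise floor.
    """
    F = np.fft.fftn(cube)
    mag = np.abs(F)
    noise_floor = np.median(mag)           # rough white-noise amplitude
    gate = mag > threshold * noise_floor   # keep only coherent components
    return np.real(np.fft.ifftn(F * gate))

rng = np.random.default_rng(2)
cube = rng.standard_normal((16, 64, 64))   # placeholder image sequence
cleaned = noise_gate_3d(cube, threshold=3.0)
```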
Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon
2016-12-01
This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
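The modeling pipeline described, bag-of-words text features plus regularized logistic regression, is easy to reproduce in outline. A sketch using scikit-learn as a stand-in for the Bayesian binary regression implementation actually used; texts and labels below are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder corpus: title + abstract + MeSH terms per article, and a label
# marking the top decile of 2014 citation counts.
texts = ["diffusion weighted mri prostate cancer ...",
         "chest radiograph radiation dose audit ..."] * 50
top10 = [1, 0] * 50

X_tr, X_te, y_tr, y_te = train_test_split(texts, top10, random_state=0)

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(C=1.0, max_iter=1000)   # regularized logistic model
clf.fit(vec.fit_transform(X_tr), y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(vec.transform(X_te))[:, 1])
```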
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is becoming increasingly important as the environment changes over time. Imagery provides a lower-cost and efficient way to update such databases. Three-dimensional objects can be measured by space intersection using conjugate image points and camera orientation parameters. However, precise orientation parameters are not always available for lightweight amateur cameras, because precision GPS and IMU units are costly and heavy. To automate database updating, correspondences between object vector data and the image can be established to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part then utilized the matched line features in orientation modeling. The line-based orientation modeling was performed by the integration of line parametric equations into collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.
Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao
2016-02-01
This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.
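The four-segment linear waveform idea, one respiration period assembled from linear inhale, hold, exhale, and rest segments, can be sketched directly. A Python sketch in which the breakpoints and amplitude are illustrative parameters, not the paper's fitted values:

```python
import numpy as np

# Four-segment linear waveform (FSLW) sketch: one respiration period built
# from four linear segments (inhale, hold, exhale, rest).
def fslw(t, period, t1, t2, t3, amp):
    tb = np.mod(t, period)
    y = np.where(tb < t1, amp * tb / t1, amp)            # inhale ramp
    y = np.where((tb >= t2) & (tb < t3),
                 amp * (t3 - tb) / (t3 - t2), y)         # exhale ramp
    return np.where(tb >= t3, 0.0, y)                    # rest segment

t = np.linspace(0, 10, 1000)
chest = fslw(t, period=4.0, t1=1.2, t2=1.8, t3=3.2, amp=1.0)
```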
Brozoski, Thomas J; Bauer, Carol A
2016-08-01
Presented is a thematic review of animal tinnitus models from a functional perspective. Chronic tinnitus is a persistent subjective sound sensation, emergent typically after hearing loss. Although the sensation is experientially simple, it appears to have a central nervous system substrate of unexpected complexity that includes areas outside of those classically defined as auditory. Over the past 27 years animal models have significantly contributed to understanding tinnitus' complex neurophysiology. In that time, a diversity of models have been developed, each with its own strengths and limitations. None has clearly become a standard. Animal models trace their origin to the 1988 experiments of Jastreboff and colleagues. All subsequent models derive some of their features from those experiments. Common features include behavior-dependent psychophysical determination, acoustic conditions that contrast objective sound and silence, and inclusion of at least one normal-hearing control group. In the present review, animal models have been categorized as either interrogative or reflexive. Interrogative models use emitted behavior under voluntary control to indicate hearing. An example would be pressing a lever to obtain food in the presence of a particular sound. In this type of model animals are interrogated about their auditory sensations, analogous to asking a patient, "What do you hear?" These models require at least some training and motivation management, and reflect the perception of tinnitus. Reflexive models, in contrast, employ acoustic modulation of an auditory reflex, such as the acoustic startle response. An unexpected loud sound will elicit a reflexive motor response from many species, including humans. Although involuntary, acoustic startle can be modified by a lower-level preceding event, including a silent sound gap. Sound-gap modulation of acoustic startle appears to discriminate tinnitus in animals as well as humans, and requires no training or motivational manipulation, but its sensitivity, reliability, mechanism, and optimal implementation are incompletely understood. While to date animal models have significantly expanded the neuroscience of tinnitus, they have been limited to examining sensory features. In the human condition, emotional and cognitive factors are also important. It is not clear that the emotional features of tinnitus can be further understood using animal models, but models may be applied to examine cognitive factors. A recently developed model is described that reveals an interaction between tinnitus and auditory attention. This research suggests that effective tinnitus therapy could rely on modifying attention to the sensation rather than modifying the sensation itself. This article is part of a Special Issue entitled
Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias
2017-10-01
Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
Diversity and Community Can Coexist.
Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael
2016-03-01
We examine the (in)compatibility of diversity and sense of community by means of agent-based models based on the well-known Schelling model of residential segregation and Axelrod model of cultural dissemination. We find that diversity and highly clustered social networks, on the assumptions of social tie formation based on spatial proximity and homophily, are incompatible when agent features are immutable, and this holds even for multiple independent features. We include both mutable and immutable features into a model that integrates Schelling and Axelrod models, and we find that even for multiple independent features, diversity and highly clustered social networks can be incompatible on the assumptions of social tie formation based on spatial proximity and homophily. However, this incompatibility breaks down when cultural diversity can be sufficiently large, at which point diversity and clustering need not be negatively correlated. This implies that segregation based on immutable characteristics such as race can possibly be overcome by sufficient similarity on mutable characteristics based on culture, which are subject to a process of social influence, provided a sufficiently large "scope of cultural possibilities" exists. © Society for Community Research and Action 2016.
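A minimal Schelling-style core, the immutable-feature half of the integrated model, fits in a few lines. A Python sketch with illustrative parameters; the published model additionally builds social ties and Axelrod-style cultural influence on top of this:

```python
import numpy as np

# Schelling-style core on a torus: an agent is unhappy when fewer than `tol`
# of its occupied neighbors share its immutable feature, and relocates to a
# random empty cell. Parameters are illustrative.
rng = np.random.default_rng(3)
N, tol = 50, 0.5
grid = rng.choice([0, 1, -1], size=(N, N), p=[0.45, 0.45, 0.1])  # -1 = empty

def unhappy(i, j):
    me = grid[i, j]
    nbrs = [grid[(i + di) % N, (j + dj) % N]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    occ = sum(n != -1 for n in nbrs)
    return occ > 0 and sum(n == me for n in nbrs) / occ < tol

for _ in range(100_000):
    i, j = rng.integers(N, size=2)
    if grid[i, j] != -1 and unhappy(i, j):
        ei, ej = rng.integers(N, size=2)
        if grid[ei, ej] == -1:                      # move to the empty cell
            grid[ei, ej], grid[i, j] = grid[i, j], -1
```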
Novel method to predict body weight in children based on age and morphological facial features.
Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M
2015-04-01
A new and novel approach of predicting the body weight of children based on age and morphological facial features using a three-layer feed-forward artificial neural network (ANN) model is reported. The model takes in four parameters, including age-based CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects with age ranging from 6-18 years old and BW ranging from 18.6-96.4 kg were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94. The model shows significant improvement in prediction accuracy over several age-based body weight prediction methods. Combining with a facial recognition algorithm that can detect, extract and measure the facial features used in this study, mobile applications that incorporate this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
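The model topology described, a three-layer feed-forward network with four inputs, is straightforward to mirror. A Python sketch using scikit-learn on synthetic stand-in data; the hidden-layer size is a guess, not the published architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: CDC-inferred median weight for age plus three
# facial feature distances; 39 subjects as in the study, values made up.
rng = np.random.default_rng(4)
X = np.column_stack([
    rng.uniform(20, 70, 39),    # age-based CDC median body weight (kg)
    rng.uniform(5, 12, 39),     # facial distance 1
    rng.uniform(3, 8, 39),      # facial distance 2
    rng.uniform(8, 15, 39),     # facial distance 3
])
y = X[:, 0] + rng.normal(0, 4, 39)   # synthetic measured weight (kg)

# One hidden layer mirrors the three-layer feed-forward topology.
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, y)
```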
Black-white preterm birth disparity: a marker of inequality
Purpose. The racial disparity in preterm birth (PTB) is a persistent feature of perinatal epidemiology, inconsistently modeled in the literature. Rather than include race as an explanatory variable, or employ race-stratified models, we sought to directly model the PTB disparity ...
Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K
2018-04-01
Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
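The survival modeling step, Cox regression on CA19-9 plus texture features evaluated by concordance index, can be sketched with the `lifelines` package; the data frame below is an illustrative placeholder, not the study data:

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Placeholder data frame: preoperative CA19-9, one CT texture feature,
# survival time (months) and event indicator.
df = pd.DataFrame({
    "ca19_9":   [37, 120, 15, 890, 60, 210, 44, 300],
    "texture1": [0.2, 0.8, 0.1, 0.9, 0.4, 0.7, 0.3, 0.6],
    "months":   [30, 12, 48, 6, 26, 14, 40, 9],
    "event":    [1, 1, 0, 1, 0, 1, 0, 1],
})

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")

# c-index: concordance between predicted risk and observed survival
# (higher score must mean longer survival, hence the minus sign).
cindex = concordance_index(df["months"],
                           -cph.predict_partial_hazard(df), df["event"])
```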
Two Strain Dengue Model with Temporary Cross Immunity and Seasonality
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico
2010-09-01
Models on dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model including well known biological features, we found a rich dynamical structure including limit cycles, symmetry breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the results previously reported by other researchers. Based on these findings, we will investigate the model structures further, including seasonality.
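A stripped-down two-strain compartment model with temporary cross immunity already reproduces the flavor of such systems. A Python sketch with illustrative rates; the published model adds further biological features, such as antibody-dependent enhancement and seasonal forcing, which are omitted here:

```python
import numpy as np
from scipy.integrate import solve_ivp

# States: S, I1, I2 (primary infections), R1, R2 (temporarily cross-immune),
# S1, S2 (susceptible to the other strain after immunity wanes),
# I12, I21 (secondary infections), R. Rates per day, illustrative only.
beta, gamma, alpha, mu = 400 / 365, 1 / 7, 1 / 180, 1 / (65 * 365)

def rhs(t, y):
    S, I1, I2, R1, R2, S1, S2, I12, I21, R = y
    N = y.sum()
    f1, f2 = beta * (I1 + I21) / N, beta * (I2 + I12) / N
    return [mu * N - (f1 + f2) * S - mu * S,
            f1 * S - (gamma + mu) * I1,
            f2 * S - (gamma + mu) * I2,
            gamma * I1 - (alpha + mu) * R1,
            gamma * I2 - (alpha + mu) * R2,
            alpha * R1 - f2 * S1 - mu * S1,
            alpha * R2 - f1 * S2 - mu * S2,
            f2 * S1 - (gamma + mu) * I12,
            f1 * S2 - (gamma + mu) * I21,
            gamma * (I12 + I21) - mu * R]

y0 = [0.8, 0.01, 0.01, 0, 0, 0.09, 0.09, 0, 0, 0]
sol = solve_ivp(rhs, (0, 365 * 50), y0, rtol=1e-8)
```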
Enhanced HMAX model with feedforward feature learning for multiclass categorization.
Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu
2015-01-01
In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models have been proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, which can generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100-150 ms of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering and short-term to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle-level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and can also achieve better accuracy than other unsupervised feature learning methods in the multiclass categorization task.
Origin of the Hadži ABC structure: An ab initio study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hoozen, Brian L.; Petersen, Poul B.
2015-11-14
Medium and strong hydrogen bonds are well known to give rise to broad features in the vibrational spectrum often spanning several hundred wavenumbers. In some cases, these features can span over 1000 cm⁻¹ and even contain multiple broad peaks. One class of strongly hydrogen-bonded dimers that includes many different phosphinic, phosphoric, sulfinic, and selenic acid homodimers exhibits a three-peaked structure over 1500 cm⁻¹ broad. This unusual feature is often referred to as the Hadži ABC structure. The origin of this feature has been debated since its discovery in the 1950s. Only a couple of theoretical studies have attempted to interpret the origin of this feature; however, no previous study has been able to reproduce this feature from first principles. Here, we present the first ab initio calculation of the Hadži ABC structure. Using a reduced dimensionality calculation that includes four vibrational modes, we are able to reproduce the three-peak structure and much of the broadness of the feature. Our results indicate that Fermi resonances of the in-plane bend, out-of-plane bend, and combination of these bends play significant roles in explaining this feature. Much of the broadness of the feature and the ability of the OH stretch mode to couple with many overtone bending modes are captured by including an adiabatically separated dimer stretch mode in the model. This mode modulates the distance between the monomer units and accordingly the strength of the hydrogen bonds, causing the OH stretch frequency to shift from 2000 to 3000 cm⁻¹. Using this model, we were also able to reproduce the vibrational spectrum of the deuterated isotopologue, which consists of a single 500 cm⁻¹ broad feature. Whereas previous empirical studies have asserted that Fermi resonances contribute very little to this feature, our study indicates that while not appearing as a separate peak, a Fermi resonance of the in-plane bend contributes substantially to the feature.
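The essence of a Fermi resonance, a fundamental borrowing intensity from a nearly degenerate overtone through a coupling term, is captured by diagonalizing a 2x2 Hamiltonian. An illustrative Python sketch with made-up energies, far simpler than the paper's four-mode reduced-dimensionality calculation:

```python
import numpy as np

# Two-level Fermi-resonance sketch: an OH-stretch fundamental coupled to a
# nearly degenerate bend overtone. Energies (cm^-1) and coupling W are
# made-up values, not the paper's computed ones.
E_stretch, E_overtone, W = 2500.0, 2450.0, 120.0

H = np.array([[E_stretch, W],
              [W, E_overtone]])
levels, modes = np.linalg.eigh(H)
print(levels)   # the coupled pair repels, splitting the observed band
```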
Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang
2018-05-16
High Resolution Range Profile (HRRP) recognition has attracted considerable attention in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods fail to model high-dimensional sequential data efficiently and have poor noise robustness. To deal with these problems, a novel stochastic neural network model named Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. RTRBM is utilized to extract discriminative features and the attention mechanism is adopted to select major features. RTRBM is efficient for modeling high-dimensional HRRP sequences because it can extract the information of temporal and spatial correlation between adjacent HRRPs. The attention mechanism has been used in sequential-data recognition tasks including machine translation and relation classification, as it makes the model pay more attention to the features that matter most for recognition. Therefore, the combination of RTRBM and the attention mechanism makes our model effective for extracting more internally related features and choosing the important parts of the extracted features. Additionally, the model performs well with noise-corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that our proposed model outperforms other traditional methods, which indicates that ARTRBM extracts, selects, and utilizes the correlation information between adjacent HRRPs effectively and is suitable for high-dimensional or noise-corrupted data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-14
This package contains statistical routines for extracting features from multivariate time-series data which can then be used for subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression model fits to moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain-agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
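The package's central idea, local regression fits on moving windows summarized per time interval, can be sketched as follows; the function name and summary statistics are ours, not the package's API:

```python
import numpy as np

def window_slope_features(series, window, interval):
    """Local linear fits on moving windows, summarized per time interval.

    For each window position, fit y = a + b*t by least squares, then
    report the mean and standard deviation of the slopes b within each
    interval. A minimal sketch of the approach, not the package itself.
    """
    t = np.arange(window)
    slopes = np.array([np.polyfit(t, series[i:i + window], 1)[0]
                       for i in range(len(series) - window + 1)])
    k = len(slopes) // interval
    chunks = slopes[:k * interval].reshape(k, interval)
    return np.column_stack([chunks.mean(axis=1), chunks.std(axis=1)])

rng = np.random.default_rng(5)
feats = window_slope_features(rng.standard_normal(2000),
                              window=50, interval=100)
```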
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived from constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved by using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers; namely, Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
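Wavelet-based texture features of the kind described are typically subband energies and entropies. A generic Python sketch assuming the PyWavelets package; the paper's exact feature set and wavelet choice may differ:

```python
import numpy as np
import pywt   # assumes the PyWavelets package

def wavelet_texture_features(gray, wavelet="db4", levels=3):
    """Energy and entropy of detail subbands as texture features."""
    coeffs = pywt.wavedec2(gray, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:              # (cH, cV, cD) per level
        for band in detail:
            e = band ** 2
            p = e / (e.sum() + 1e-12)
            feats += [e.mean(), -(p * np.log2(p + 1e-12)).sum()]
    return np.asarray(feats)

rng = np.random.default_rng(6)
lesion = rng.random((128, 128))            # placeholder grayscale lesion
f = wavelet_texture_features(lesion)
```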
Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.
Liu, Da; Li, Jianxun
2016-12-16
Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
Häberle, Lothar; Hack, Carolin C; Heusinger, Katharina; Wagner, Florian; Jud, Sebastian M; Uder, Michael; Beckmann, Matthias W; Schulz-Wendtland, Rüdiger; Wittenberg, Thomas; Fasching, Peter A
2017-08-30
Tumors in radiologically dense breasts are overlooked on mammograms more often than tumors in low-density breasts. A fast, reproducible and automated method of assessing percentage mammographic density (PMD) would be desirable to support decisions on whether ultrasonography should be provided for women in addition to mammography in diagnostic mammography units. PMD assessment has still not been included in routine clinical work, as there are issues of interobserver variability and the procedure is quite time consuming. This study investigated whether fully automatically generated texture features of mammograms can replace time-consuming semi-automatic PMD assessment to predict a patient's risk of having an invasive breast tumor that is visible on ultrasound but masked on mammography (mammography failure). This observational study included 1334 women with invasive breast cancer treated at a hospital-based diagnostic mammography unit. Ultrasound was available for the entire cohort as part of routine diagnosis. Computer-based threshold PMD assessments ("observed PMD") were carried out and 363 texture features were obtained from each mammogram. Several variable selection and regression techniques (univariate selection, lasso, boosting, random forest) were applied to predict PMD from the texture features. The predicted PMD values were each used as a new predictor for masking in logistic regression models together with clinical predictors. These four logistic regression models with predicted PMD were compared among themselves and with a logistic regression model with observed PMD. The most accurate masking prediction was determined by cross-validation. About 120 of the 363 texture features were selected for predicting PMD. Density predictions with boosting were the best substitute for observed PMD to predict masking. Overall, the corresponding logistic regression model performed better (cross-validated AUC, 0.747) than one without mammographic density (0.734), but less well than the one with the observed PMD (0.753). However, in patients with an assigned mammography failure risk >10%, covering about half of all masked tumors, the boosting-based model performed at least as accurately as the original PMD model. Automatically generated texture features can therefore replace semi-automatically determined PMD in a prediction model for mammography failure, such that more than 50% of masked tumors could be discovered.
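The two-stage structure of the analysis, boosting to predict PMD from texture features and then logistic regression for masking, can be mirrored with scikit-learn. A sketch on synthetic placeholder data; in real use the stages would be fitted and evaluated on separate cross-validation folds:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

# Synthetic placeholders: ~120 selected texture features per mammogram,
# observed PMD, one clinical predictor (age), and the masking label.
rng = np.random.default_rng(10)
texture = rng.standard_normal((300, 120))
pmd = texture[:, :5].sum(axis=1) + rng.normal(0, 1, 300)
age = rng.uniform(40, 80, 300)
masked = (pmd + rng.normal(0, 2, 300) > 2).astype(int)

# Stage 1: boosting-based PMD prediction from texture features.
pmd_hat = GradientBoostingRegressor().fit(texture, pmd).predict(texture)

# Stage 2: masking model with predicted PMD plus clinical predictors.
X = np.column_stack([pmd_hat, age])
masking_model = LogisticRegression(max_iter=1000).fit(X, masked)
```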
The Integrated Curriculum Model (ICM)
ERIC Educational Resources Information Center
VanTassel-Baska, Joyce; Wood, Susannah
2010-01-01
This article explicates the Integrated Curriculum Model (ICM) which has been used worldwide to design differentiated curriculum, instruction, and assessment units of study for gifted learners. The article includes a literature review of appropriate curriculum features for the gifted, other extant curriculum models, the theoretical basis for the…
Development, Implementation and Evaluation of a Physics-Based Windblown Dust Emission Model
A physics-based windblown dust emission parametrization scheme is developed and implemented in the CMAQ modeling system. A distinct feature of the present model includes the incorporation of a newly developed, dynamic relation for the surface roughness length, which is important ...
Predicting age groups of Twitter users based on language and metadata features
Morgan-Lopez, Antonio A.; Chew, Robert F.; Ruddle, Paul
2017-01-01
Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and user handles’ metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was conducted for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen’s d effect sizes were calculated to examine the relative importance of significant features. Models containing both Tweet language features and metadata features performed the best (74% precision, 74% recall, 74% F1) while the model containing only Twitter metadata features were least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as “school” for youth and “college” for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may be helpful for informing public health surveillance and evaluation research. PMID:28850620
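The classification setup, one L1-regularized logistic regression per age group over n-gram and metadata features, is straightforward to mirror with scikit-learn. A sketch on placeholder tweets, using language features only; metadata columns would be appended to the feature matrix:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One-vs-rest L1-regularized model per age group; tweets and labels here
# are made-up placeholders.
tweets = ["off to school early", "college finals week", "my grandkids visited"]
is_youth = [1, 0, 0]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
clf.fit(tweets, is_youth)
```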
Animal models of the non-motor features of Parkinson’s disease
McDowell, Kimberly; Chesselet, Marie-Françoise
2012-01-01
The non-motor symptoms (NMS) of Parkinson’s disease (PD) occur in roughly 90% of patients, have a profound negative impact on their quality of life, and often go undiagnosed. NMS typically involve many functional systems, and include sleep disturbances, neuropsychiatric and cognitive deficits, and autonomic and sensory dysfunction. The development and use of animal models have provided valuable insight into the classical motor symptoms of PD over the past few decades. Toxin-induced models provide a suitable approach to study aspects of the disease that derive from the loss of nigrostriatal dopaminergic neurons, a cardinal feature of PD. This also includes some NMS, primarily cognitive dysfunction. However, several NMS poorly respond to dopaminergic treatments, suggesting that they may be due to other pathologies. Recently developed genetic models of PD are providing new ways to model these NMS and identify their mechanisms. This review summarizes the current available literature on the ability of both toxin-induced and genetically-based animal models to reproduce the NMS of PD. PMID:22236386
Minimal modeling of the extratropical general circulation
NASA Technical Reports Server (NTRS)
O'Brien, Enda; Branscome, Lee E.
1989-01-01
The ability of low-order, two-layer models to reproduce basic features of the mid-latitude general circulation is investigated. Changes in model behavior with increased spectral resolution are examined in detail. Qualitatively correct time-mean heat and momentum balances are achieved in a beta-plane channel model which includes the first and third meridional modes. This minimal resolution also reproduces qualitatively realistic surface and upper-level winds and mean meridional circulations. Higher meridional resolution does not result in substantial changes in the latitudinal structure of the circulation. A qualitatively correct kinetic energy spectrum is produced when the resolution is high enough to include several linearly stable modes. A model with three zonal waves and the first three meridional modes has a reasonable energy spectrum and energy conversion cycle, while also satisfying heat and momentum budget requirements. This truncation reproduces the basic mechanisms and zonal circulation features that are obtained at higher resolution. The model performance improves gradually with higher resolution and is smoothly dependent on changes in external parameters.
Normalized distance aggregation of discriminative features for person reidentification
NASA Astrophysics Data System (ADS)
Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan
2018-03-01
We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metric. Finally, the cross-view person matching is computed as the sum of the optimized individual cross-view distance metric through the min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).
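The final aggregation step is simple enough to state in code: min-max normalize each feature/metric distance list, then sum. A Python sketch with random placeholder distances:

```python
import numpy as np

def minmax(d):
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

# Distances from one probe to all gallery images under each feature/metric
# combination (e.g. LOMO+XQDA, FFN+XQDA, LOMO-FFN+LSSL); placeholders here.
rng = np.random.default_rng(7)
d_lists = [rng.random(100) for _ in range(3)]

# Min-max normalize each distance list, then sum for the final ranking.
fused = sum(minmax(d) for d in d_lists)
ranking = np.argsort(fused)          # best gallery match first
```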
Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy
2014-01-01
Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to exploit the strength of each algorithm and to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results, which included arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that the performance of the proposed model in terms of classification accuracy is desirable, promising, and competitive with the existing state-of-the-art classification models. PMID:25419659
Autonomous navigation of structured city roads
NASA Astrophysics Data System (ADS)
Aubert, Didier; Kluge, Karl C.; Thorpe, Chuck E.
1991-03-01
Autonomous road following is a domain which spans a range of complexity, from poorly defined, unmarked dirt roads to well-defined, well-marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, fitting a road model to detected feature locations to determine vehicle position and local road geometry, and automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... design features include an electronic flight control system that provides roll control of the airplane... Design Features The GVI is equipped with an electronic flight control system that provides roll control... condition at design maneuvering speed (V A ), in which the cockpit roll control is returned to neutral...
Variations in lithospheric thickness on Venus
NASA Technical Reports Server (NTRS)
Johnson, C. L.; Sandwell, David T.
1992-01-01
Recent analyses of Magellan data have indicated many regions exhibiting topographic flexure. On Venus, flexure is associated predominantly with coronae and the chasmata within Aphrodite Terra. Modeling of these flexural signatures allows the elastic and mechanical thickness of the lithosphere to be estimated. In areas where the lithosphere is flexed beyond its elastic limit, the saturation moment provides information on the strength of the lithosphere. Modeling of 12 flexural features on Venus has indicated lithospheric thicknesses comparable with terrestrial values. This has important implications for the venusian heat budget. Flexure of a thin elastic plate due simultaneously to a line load on a continuous plate and a bending moment applied to the end of a broken plate is considered. The mean radius and regional topographic gradient are also included in the model. Features with a large radius of curvature were selected so that a two-dimensional approximation could be used. Comparisons with an axisymmetric model were made for some features to check the validity of the two-dimensional assumption. The best-fit elastic thickness was found for each profile crossing a given flexural feature. In addition, the surface stress and bending moment at the first zero crossing of each profile were also calculated. Flexural amplitudes and elastic thicknesses obtained for the 12 features vary significantly. Three examples of the model fitting procedures are discussed.
Machine learning to analyze images of shocked materials for precise and accurate measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dresselhaus-Cooper, Leora; Howard, Marylesa; Hock, Margaret C.
A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.
Software For Least-Squares And Robust Estimation
NASA Technical Reports Server (NTRS)
Jeffreys, William H.; Fitzpatrick, Michael J.; Mcarthur, Barbara E.; Mccartney, James
1990-01-01
GAUSSFIT computer program includes full-featured programming language facilitating creation of mathematical models solving least-squares and robust-estimation problems. Programming language designed to make it easy to specify complex reduction models. Written in 100 percent C language.
A description of the new 3D electron gun and collector modeling tool: MICHELLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petillo, J.; Mondelli, A.; Krueger, W.
1999-07-01
A new 3D finite element gun and collector modeling code is under development at SAIC in collaboration with industrial partners and national laboratories. This development program has been designed specifically to address the shortcomings of current simulation and modeling tools. In particular, although 3D gun codes exist today, their ability to address fine-scale features is somewhat limited in 3D due to the disparate length scales of certain classes of devices. Additionally, features like advanced emission rules, including a thermionic Child's law and comprehensive secondary emission models, also need attention. The program specifically targets problem classes including gridded guns, sheet-beam guns, multi-beam devices, and anisotropic collectors. The presentation will provide an overview of the program objectives, the approach to be taken by the development team, and the status of the project.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and to have a shorter memory lag than the QT interval.
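A minimal sketch of the strategy described above, assuming a simple exponentially weighted memory model: the physiological constraint on the memory parameter is folded into the cost function through a sigmoid reparameterization, so a plain unconstrained descent method can be used. The model form, parameter names, and synthetic RR/QT series are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def qt_model(theta, rr):
    # Toy memory model: QT depends on an exponentially weighted average of
    # past RR intervals. The constraint 0 < lam < 1 is enforced inside the
    # cost via the sigmoid map, so unconstrained BFGS can be applied, in
    # the spirit of folding restrictions into the cost function.
    lam = sigmoid(theta[0])            # memory factor, mapped into (0, 1)
    a, b = theta[1], theta[2]          # linear QT/RR relation
    rr_bar = np.empty_like(rr)
    rr_bar[0] = rr[0]
    for i in range(1, len(rr)):
        rr_bar[i] = lam * rr_bar[i - 1] + (1 - lam) * rr[i]
    return a + b * rr_bar

def cost(theta, rr, qt):
    return np.mean((qt - qt_model(theta, rr)) ** 2)

# Hypothetical series; real use would take RR/QT from tilt-test ECGs.
rng = np.random.default_rng(1)
rr = 0.8 + 0.1 * np.sin(np.linspace(0, 6, 400)) + rng.normal(0, 0.01, 400)
qt = 0.3 + 0.1 * rr + rng.normal(0, 0.005, 400)
res = minimize(cost, x0=np.zeros(3), args=(rr, qt), method="BFGS")
print(sigmoid(res.x[0]), res.x[1:])   # memory factor and linear coefficients
```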
Features and heterogeneities in growing network models
NASA Astrophysics Data System (ADS)
Ferretti, Luca; Cortelezzi, Michele; Yang, Bin; Marmorini, Giacomo; Bianconi, Ginestra
2012-06-01
Many complex networks, from the World Wide Web to biological networks, grow while taking into account the heterogeneous features of the nodes. The feature of a node might be a discrete quantity, such as the classification of a URL document (personal page, thematic website, news, blog, search engine, social network, etc.) or the classification of a gene in a functional module. Moreover, the feature of a node can be a continuous variable, such as the position of a node in the embedding space. In order to account for these properties, in this paper we provide a generalization of growing network models with preferential attachment that includes the effect of heterogeneous features of the nodes. The main effect of heterogeneity is the emergence of an “effective fitness” for each class of nodes, determining the rate at which nodes acquire new links. The degree distribution exhibits a multiscaling behavior analogous to the fitness model. This property is robust with respect to variations in the model, as long as links are assigned through effective preferential attachment. Beyond the degree distribution, in this paper we give a full characterization of the other relevant properties of the model. We evaluate the clustering coefficient and show that it disappears for large network size, a property shared with the Barabási-Albert model. Negative degree correlations are also present in this class of models, along with nontrivial mixing patterns among features. We therefore conclude that both small clustering coefficients and disassortative mixing are outcomes of the preferential attachment mechanism in general growing networks.
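A minimal sketch of this class of models: nodes are assigned discrete feature classes, and the attachment probability is proportional to degree times a class-dependent effective fitness. The three classes and their fitness values are arbitrary illustrative choices.

```python
import random
from collections import defaultdict

def grow_network(n_nodes, features, fitness, m=2, seed=0):
    """Grow a network by preferential attachment weighted by a
    feature-dependent fitness (a sketch of the class of models above).
    `features` assigns each new node a class; `fitness` maps a class to
    its effective attractiveness."""
    rng = random.Random(seed)
    degree = defaultdict(int)
    node_class = {}
    edges = []
    # Start from a small clique so early attachment weights are defined.
    for v in range(m + 1):
        node_class[v] = features(v)
        for u in range(v):
            edges.append((u, v))
            degree[u] += 1
            degree[v] += 1
    for v in range(m + 1, n_nodes):
        node_class[v] = features(v)
        existing = list(node_class)[:-1]          # all nodes except v itself
        # Attachment weight = degree x fitness of the node's class.
        weights = [degree[u] * fitness(node_class[u]) for u in existing]
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(existing, weights=weights)[0])
        for u in targets:
            edges.append((u, v))
            degree[u] += 1
            degree[v] += 1
    return edges, node_class

edges, cls = grow_network(
    1000,
    features=lambda v: v % 3,               # three discrete node classes
    fitness=lambda c: (1.0, 1.5, 2.0)[c],   # class-dependent fitness
)
```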
Dienstmann, R; Mason, M J; Sinicrope, F A; Phipps, A I; Tejpar, S; Nesbakken, A; Danielsen, S A; Sveen, A; Buchanan, D D; Clendenning, M; Rosty, C; Bot, B; Alberts, S R; Milburn Jessup, J; Lothe, R A; Delorenzi, M; Newcomb, P A; Sargent, D; Guinney, J
2017-05-01
TNM staging alone does not accurately predict outcome in colon cancer (CC) patients who may be eligible for adjuvant chemotherapy. It is unknown to what extent the molecular markers microsatellite instability (MSI) and mutations in BRAF or KRAS improve prognostic estimation in multivariable models that include detailed clinicopathological annotation. After imputation of missing-at-random data, a subset of patients accrued in phase 3 trials with adjuvant chemotherapy (n = 3016; N0147 [NCT00079274] and PETACC3 [NCT00026273]) was aggregated to construct multivariable Cox models for 5-year overall survival that were subsequently validated internally in the remaining clinical trial samples (n = 1499), and also externally in different population cohorts of chemotherapy-treated (n = 949) or -untreated (n = 1080) CC patients, and an additional series without treatment annotation (n = 782). TNM staging, MSI and BRAFV600E mutation status remained independent prognostic factors in multivariable models across clinical trial cohorts and observational studies. Concordance indices increased from 0.61-0.68 in the TNM-alone model to 0.63-0.71 in models with added molecular markers, 0.65-0.73 with clinicopathological features and 0.66-0.74 with all covariates. In validation cohorts with complete annotation, the integrated time-dependent AUC rose from 0.64 for the TNM-alone model to 0.67 for models that included clinicopathological features, with or without molecular markers. In patient cohorts that received adjuvant chemotherapy, the relative proportion of variance explained (R2) by TNM, clinicopathological features and molecular markers was on average 65%, 25% and 10%, respectively. Incorporation of MSI, BRAFV600E and KRAS mutation status into overall survival models with TNM staging improves the ability to precisely prognosticate in stage II and III CC patients, but only modestly increases prediction accuracy in multivariable models that include clinicopathological features, particularly in chemotherapy-treated patients. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology.
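For readers who want to reproduce this style of analysis, a multivariable Cox model over TNM stage and molecular markers can be fitted with the lifelines package (an assumed tool, not the authors' software); every column and value below is a hypothetical stand-in.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical survival data standing in for the paper's covariates.
df = pd.DataFrame({
    "os_years":   [4.2, 1.1, 5.0, 0.8, 3.3, 5.0, 2.7, 4.9, 1.6, 5.0],
    "death":      [1, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    "tnm_stage":  [2, 3, 2, 3, 3, 2, 3, 2, 3, 2],
    "msi_high":   [0, 0, 1, 0, 1, 1, 0, 1, 0, 1],
    "braf_v600e": [0, 1, 0, 1, 0, 0, 1, 0, 0, 0],
    "kras_mut":   [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
})

# A small penalizer stabilizes the fit on this tiny illustrative dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="os_years", event_col="death")
cph.print_summary()            # hazard ratios per covariate
print(cph.concordance_index_)  # analogous to the reported concordance indices
```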
Development and Application of a Three-Dimensional Finite Element Vapor Intrusion Model
Pennell, Kelly G.; Bozkurt, Ozgur; Suuberg, Eric M.
2010-01-01
Details of a three-dimensional finite element model of soil vapor intrusion, including the overall modeling process and the stepwise approach, are provided. The model is a quantitative modeling tool that can help guide vapor intrusion characterization efforts. It solves the soil gas continuity equation coupled with the chemical transport equation, allowing for both advective and diffusive transport. Three-dimensional pressure, velocity, and chemical concentration fields are produced from the model. Results from simulations involving common site features, such as impervious surfaces, porous foundation sub-base material, and adjacent structures are summarized herein. The results suggest that site-specific features are important to consider when characterizing vapor intrusion risks. More importantly, the results suggest that soil gas or subslab gas samples taken without proper regard for particular site features may not be suitable for evaluating vapor intrusion risks; rather, careful attention needs to be given to the many factors that affect chemical transport into and around buildings. PMID:19418819
NASA Astrophysics Data System (ADS)
Liu, Chunhui; Zhang, Duona; Zhao, Xintao
2018-03-01
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for SAR images. We extract four features of the SAR image (intensity, orientation, uniqueness, and global contrast) as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.
Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L
2018-06-19
This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic, or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R2. Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R2 = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy but predicting the patients that will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • A prediction model was constructed that predicts volumetric response with 20% error, suggesting that quantitative imaging holds promise to better select patients for specific treatments.
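A minimal sketch of the study's modeling design, assuming scikit-learn: a linear regression is trained on 70% of synthetic "imaging features" and evaluated on the held-out 30% with the mean absolute prediction error. The feature matrix and coefficients are placeholders, not the study data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# X: quantitative imaging features from pretreatment CT (placeholders);
# y: percentage change in tumour volume from baseline.
rng = np.random.default_rng(0)
X = rng.normal(size=(157, 5))               # e.g. texture/enhancement features
y = X @ np.array([8, -5, 3, 0, 1.5]) + rng.normal(0, 4, 157)

# 70/30 split mirroring the paper's training/validation design.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
mape = np.mean(np.abs(model.predict(X_te) - y_te))  # mean absolute prediction error
print(mape, model.score(X_tr, y_tr))                # validation MAPE, training R^2
```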
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, there is no study that provides the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model.
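A compact sketch of a binary-PSO feature-selection wrapper in the spirit of the proposed framework (standard PSO only; RA-PSO and the classifier-parameter estimation are omitted). The inertia and acceleration constants and the k-NN fitness classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def pso_feature_selection(X, y, n_particles=20, n_iter=30, seed=0):
    """Minimal binary-PSO feature-selection wrapper (a sketch only)."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat))   # continuous positions in [0, 1]
    vel = np.zeros_like(pos)

    def fitness(p):
        mask = p > 0.5                        # threshold position -> feature mask
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest > 0.5                        # boolean mask of selected features
```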
Dayside auroral arcs and convection
NASA Technical Reports Server (NTRS)
Reiff, P. H.; Burch, J. L.; Heelis, R. A.
1978-01-01
Recent Defense Meteorological Satellite Program and International Satellite for Ionospheric Studies dayside auroral observations show two striking features: a lack of visible auroral arcs near noon and occasional fan-shaped arcs radiating away from noon on both the morning and afternoon sides of the auroral oval. A simple model which includes these two features is developed by reference to the dayside convection pattern of Heelis et al. (1976). The model may be testable in the near future with simultaneous convection, current, and auroral light data.
McLaren, Christine E.; Chen, Wen-Pin; Nie, Ke; Su, Min-Ying
2009-01-01
Rationale and Objectives Dynamic contrast enhanced MRI (DCE-MRI) is a clinical imaging modality for detection and diagnosis of breast lesions. Analytical methods were compared for diagnostic feature selection and performance of lesion classification to differentiate between malignant and benign lesions in patients. Materials and Methods The study included 43 malignant and 28 benign histologically-proven lesions. Eight morphological parameters, ten gray level co-occurrence matrices (GLCM) texture features, and fourteen Laws’ texture features were obtained using automated lesion segmentation and quantitative feature extraction. Artificial neural network (ANN) and logistic regression analysis were compared for selection of the best predictors of malignant lesions among the normalized features. Results Using ANN, the final four selected features were compactness, energy, homogeneity, and Law_LS, with area under the receiver operating characteristic curve (AUC) = 0.82, and accuracy = 0.76. The diagnostic performance of these 4-features computed on the basis of logistic regression yielded AUC = 0.80 (95% CI, 0.688 to 0.905), similar to that of ANN. The analysis also shows that the odds of a malignant lesion decreased by 48% (95% CI, 25% to 92%) for every increase of 1 SD in the Law_LS feature, adjusted for differences in compactness, energy, and homogeneity. Using logistic regression with z-score transformation, a model comprised of compactness, NRL entropy, and gray level sum average was selected, and it had the highest overall accuracy of 0.75 among all models, with AUC = 0.77 (95% CI, 0.660 to 0.880). When logistic modeling of transformations using the Box-Cox method was performed, the most parsimonious model with predictors, compactness and Law_LS, had an AUC of 0.79 (95% CI, 0.672 to 0.898). Conclusion The diagnostic performance of models selected by ANN and logistic regression was similar. The analytic methods were found to be roughly equivalent in terms of predictive ability when a small number of variables were chosen. The robust ANN methodology utilizes a sophisticated non-linear model, while logistic regression analysis provides insightful information to enhance interpretation of the model features. PMID:19409817
A mass balance eutrophication model, Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to describe nitrogen, phosphorus and primary production in the Louisiana shelf of the Gulf of Mexico. Features of this model include bi-directional boundary exchan...
Unification of gauge and Yukawa couplings
NASA Astrophysics Data System (ADS)
Abdalgabar, Ammar; Khojali, Mohammed Omer; Cornell, Alan S.; Cacciapaglia, Giacomo; Deandrea, Aldo
2018-01-01
The unification of gauge and top Yukawa couplings is an attractive feature of gauge-Higgs unification models in extra dimensions. This feature is usually considered difficult to obtain based on simple group theory analyses. We reconsider a minimal toy model including the renormalisation group running at one loop. Our results show that the gauge couplings unify asymptotically at high energies, and that this may result from the presence of a UV fixed point. The Yukawa coupling in our toy model is enhanced at low energies, showing that a genuine unification of gauge and Yukawa couplings may be achieved.
A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks
Wang, Changjian; Liu, Xiaohui; Jin, Shiyao
2018-01-01
Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use artificial image features to complete the task without large amounts of labeled data. Meanwhile, the methods based on deep neural networks can extract image features effectively without artificial design, but lots of training data are required. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm designed in this paper to highlight image features. Then, the preprocessed images are segmented by deep neural networks, and semantic corrections are applied to the segmentation results at last. The model shows good performance in our experiments. PMID:29955227
Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction
NASA Astrophysics Data System (ADS)
Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.
2015-12-01
A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of approximately ±0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional validation techniques are necessary for state-of-the-art flood inundation models. In addition, the semi-automated, unstructured mesh generation process presented herein increases the overall accuracy of simulated storm surge across the floodplain without reliance on hand digitization or sacrificing computational cost.
Parsa, Soroush; Ccanto, Raúl; Olivera, Edgar; Scurrah, María; Alcázar, Jesús; Rosenheim, Jay A.
2012-01-01
Background Pest impact on an agricultural field is jointly influenced by local and landscape features. Rarely, however, are these features studied together. The present study applies a “facilitated ecoinformatics” approach to jointly screen many local and landscape features of suspected importance to Andean potato weevils (Premnotrypes spp.), the most serious pests of potatoes in the high Andes. Methodology/Principal Findings We generated a comprehensive list of predictors of weevil damage, including both local and landscape features deemed important by farmers and researchers. To test their importance, we assembled an observational dataset measuring these features across 138 randomly-selected potato fields in Huancavelica, Peru. Data for local features were generated primarily by participating farmers who were trained to maintain records of their management operations. An information theoretic approach to modeling the data resulted in 131,071 models, the best of which explained 40.2–46.4% of the observed variance in infestations. The best model considering both local and landscape features strongly outperformed the best models considering them in isolation. Multi-model inferences confirmed many, but not all of the expected patterns, and suggested gaps in local knowledge for Andean potato weevils. The most important predictors were the field's perimeter-to-area ratio, the number of nearby potato storage units, the amount of potatoes planted in close proximity to the field, and the number of insecticide treatments made early in the season. Conclusions/Significance Results underscored the need to refine the timing of insecticide applications and to explore adjustments in potato hilling as potential control tactics for Andean weevils. We believe our study illustrates the potential of ecoinformatics research to help streamline IPM learning in agricultural learning collaboratives. PMID:22693551
Python scripting in the nengo simulator.
Stewart, Terrence C; Tripp, Bryan; Eliasmith, Chris
2009-01-01
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
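A minimal NEF-style model in the present-day Nengo Python package (`pip install nengo`); the Java-backed scripting interface described above differs in detail, but the programmatic workflow (build, parameterize, simulate, probe) is the same.

```python
import numpy as np
import nengo

# Build a two-population NEF model that computes the square of a sine input.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # scripted input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # population code
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=np.square)  # NEF solves for decoders/weights
    probe = nengo.Probe(b, synapse=0.01)        # record the decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[probe].shape)                    # (timesteps, 1) decoded values
```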
NASA Astrophysics Data System (ADS)
Benetti, Micol; Pandolfi, Stefania; Lattanzi, Massimiliano; Martinelli, Matteo; Melchiorri, Alessandro
2013-01-01
Using the most recent data from the WMAP, ACT and SPT experiments, we update the constraints on models with oscillatory features in the primordial power spectrum of scalar perturbations. This kind of feature can appear in models of inflation where slow-roll is interrupted, like multifield models. We also derive constraints for the case in which, in addition to cosmic microwave observations, we also consider the data on the spectrum of luminous red galaxies from the 7th SDSS catalog, and the SNIa Union Compilation 2 data. We have found that: (i) considering a model with features in the primordial power spectrum increases the agreement with data compared to the featureless “vanilla” ΛCDM model by Δχ2=6.7, representing an improvement with respect to the expected value Δχ2=3 for an equivalent model with three additional parameters; (ii) the uncertainty on the determination of the standard parameters is not degraded when features are included; (iii) the best fit for the features model locates the step in the primordial spectrum at a scale k≃0.005 Mpc-1, corresponding to the scale where the outliers in the WMAP7 data at ℓ=22 and ℓ=40 are located; (iv) a distinct, albeit less statistically significant, peak is present in the likelihood at smaller scales, whose presence might be related to the WMAP7 preference for a negative value of the running of the scalar spectral index parameter; (v) the inclusion of the LRG-7 data does not change significantly the best-fit model, but allows us to better constrain the amplitude of the oscillations.
NASA Astrophysics Data System (ADS)
Harou, J. J.; Hansen, K. M.
2008-12-01
Increased scarcity of world water resources is inevitable given the limited supply and increased human pressures. The idea that "some scarcity is optimal" must be accepted for rational resource use and infrastructure management decisions to be made. Hydro-economic systems models are unique at representing the overlap of economic drivers, socio-political forces and distributed water resource systems. They demonstrate the tangible benefits of cooperation and integrated flexible system management. Further improvement of models, quality control practices and software will be needed for these academic policy tools to become accepted into mainstream water resource practice. Promising features include: calibration methods, limited-foresight optimization formulations, linked simulation-optimization approaches (e.g. embedding pre-existing calibrated simulation models), spatial groundwater models, stream-aquifer interactions and stream routing, etc. Conventional user-friendly decision support systems helped spread simulation models on a massive scale. Hydro-economic models must also find a means to facilitate construction, distribution and use. Some of these issues and model features are illustrated with a hydro-economic optimization model of the Sacramento Valley. Carry-over storage value functions are used to limit hydrologic foresight of the multi-period optimization model. Pumping costs are included in the formulation by tracking the regional piezometric head of groundwater sub-basins. To help build and maintain this type of network model, an open-source water management modeling software platform is described and initial project work is discussed. The objective is to generically facilitate the connection of models, such as those developed in a modeling environment (GAMS, MatLab, Octave, …), to a geographic user interface (drag-and-drop node-link network) and a database (topology, parameters and time series). These features aim to incrementally move hydro-economic models in the direction of more practical implementation.
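A toy illustration of the hydro-economic idea, assuming SciPy: one reservoir, two users, and a groundwater source, solved as a linear program maximizing net economic benefit. All prices, capacities, and demand caps are invented; real models add node-link networks, multiple time steps, and carry-over storage value functions.

```python
from scipy.optimize import linprog

# Decision variables: [surface_to_ag, surface_to_urban, groundwater_to_ag].
# Net benefits ($/unit), negated because linprog minimizes; the groundwater
# coefficient already nets out a pumping cost.
c = [-120.0, -300.0, -80.0]
A_ub = [[1, 1, 0]]                     # surface deliveries limited by release
b_ub = [100.0]
bounds = [(0, 80), (0, 60), (0, 30)]   # demand caps and sustainable pumping yield

res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)                 # optimal allocations, total net benefit
```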
AERMOD: MODEL FORMULATION AND EVALUATION RESULTS
AERMOD is an advanced plume model that incorporates updated treatments of boundary-layer theory, turbulence, and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3.
AERM...
Assumptions to the annual energy outlook 1999 : with projections to 2020
DOT National Transportation Integrated Search
1998-12-16
This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 1999 (AEO99), including general features of the model structure, assumptions concerning energy ...
Assumptions to the annual energy outlook 2000 : with projections to 2020
DOT National Transportation Integrated Search
2000-01-01
This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2000 (AEO2000), including general features of the model structure, assumptions concerning energ...
Assumptions to the annual energy outlook 2001 : with projections to 2020
DOT National Transportation Integrated Search
2000-12-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2001 (AEO2001), including general features of the model structure, assumptions concerning ener...
Assumptions for the annual energy outlook 2003 : with projections to 2025
DOT National Transportation Integrated Search
2003-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2003 (AEO2003), including general features of the model structure, assumptions concerning ener...
Linear-time general decoding algorithm for the surface code
NASA Astrophysics Data System (ADS)
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
Lee, Hansang; Hong, Helen; Kim, Junmo; Jung, Dae Chul
2018-04-01
To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CE CT) images. A dataset including 80 abdominal CT images of 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating the small renal masses (SRM) into AMLwvf and ccRCC using the combination of hand-crafted and deep features, and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from the ImageNet-pretrained deep learning model with the SRM image patches. In DF extraction, we proposed the texture image patches (TIP) to emphasize the texture information inside the mass in DFs and reduce the mass size variability. Finally, the two features were concatenated and the random forest (RF) classifier was trained on these concatenated features to classify the types of SRMs. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive values (PPV), negative predictive values (NPV), and area under the receiver operating characteristics curve (AUC). In experiments, the combinations of four deep learning models, AlexNet, VGGNet, GoogleNet, and ResNet, and four input image patches, including original, masked, mass-size, and texture image patches, were compared and analyzed. In qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method. In quantitative evaluation, we evaluated and compared the classification results, and observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet showed generally the best performances among the CNN models, and (c) the proposed TIPs not only achieved competitive performances among the input patches, but also steady performance regardless of CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for the proposed HCF + DF with AlexNet and TIPs, which improved the accuracy by 6.6%p and 8.3%p compared to HCF-only and DF-only, respectively. The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of features for differentiating AMLwvf from ccRCC in abdominal CE CT images. © 2018 American Association of Physicists in Medicine.
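A sketch of the HCF + DF pipeline assuming PyTorch/torchvision and scikit-learn: 4096-dimensional deep features from a pretrained AlexNet are concatenated with hand-crafted features and fed to a random forest. The TIP patch preparation is omitted, and all arrays below are random placeholders for the CT data.

```python
import numpy as np
import torch
import torchvision
from sklearn.ensemble import RandomForestClassifier

# Pretrained AlexNet; the weights argument is per recent torchvision
# (older versions use pretrained=True instead).
alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
deep_net = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],   # stop before the 1000-way layer
)

def deep_features(patches):          # patches: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return deep_net(patches).numpy()         # (N, 4096) deep features

# Placeholder patches, 71-d hand-crafted features, and labels.
patches = torch.rand(20, 3, 224, 224)
hcf = np.random.rand(20, 71)
labels = np.random.randint(0, 2, 20)             # 0 = AMLwvf, 1 = ccRCC

X = np.hstack([hcf, deep_features(patches)])     # feature concatenation
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```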
The interpretation of polycrystalline coherent inelastic neutron scattering from aluminium
Roach, Daniel L.; Ross, D. Keith; Gale, Julian D.; Taylor, Jon W.
2013-01-01
A new approach to the interpretation and analysis of coherent inelastic neutron scattering from polycrystals (poly-CINS) is presented. This article describes a simulation of the one-phonon coherent inelastic scattering from a lattice model of an arbitrary crystal system. The one-phonon component is characterized by sharp features, determined, for example, by boundaries of the (Q, ω) regions where one-phonon scattering is allowed. These features may be identified with the same features apparent in the measured total coherent inelastic cross section, the other components of which (multiphonon or multiple scattering) show no sharp features. The parameters of the model can then be relaxed to improve the fit between model and experiment. This method is of particular interest where no single crystals are available. To test the approach, the poly-CINS has been measured for polycrystalline aluminium using the MARI spectrometer (ISIS), because both lattice dynamical models and measured dispersion curves are available for this material. The models used include a simple Lennard-Jones model fitted to the elastic constants of this material plus a number of embedded atom method force fields. The agreement obtained suggests that the method demonstrated should be effective in developing models for other materials where single-crystal dispersion curves are not available. PMID:24282332
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach for solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, the error vector including the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from dynamic process vectors of complex systems such as aeroengines, and makes it feasible to diagnose complex systems by utilizing dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
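A sketch of the final classification stage assuming the hmmlearn package: one Gaussian HMM is trained per fault pattern, and a test sequence is assigned to the pattern whose model yields the highest log-likelihood. The DTW alignment and SOFM quantization steps are replaced here by random placeholder sequences.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_pattern_model(sequences):
    # Concatenate training sequences; hmmlearn separates them via `lengths`.
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=3, n_iter=50)
    model.fit(np.vstack(sequences), lengths)
    return model

# Placeholder sequences standing in for DTW/SOFM-processed feature vectors.
models = {
    "normal":  train_pattern_model([rng.normal(0, 1, (40, 2)) for _ in range(5)]),
    "fault_A": train_pattern_model([rng.normal(2, 1, (40, 2)) for _ in range(5)]),
}
test_seq = rng.normal(2, 1, (40, 2))
diagnosis = max(models, key=lambda k: models[k].score(test_seq))
print(diagnosis)   # pattern with the highest log-likelihood
```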
CNN-SVM for Microvascular Morphological Type Recognition with Data Augmentation.
Xue, Di-Xiu; Zhang, Rong; Feng, Hui; Wang, Ya-Lei
2016-01-01
This paper focuses on the problem of feature extraction and the classification of microvascular morphological types to aid esophageal cancer detection. We present a patch-based system with a hybrid SVM model with data augmentation for intraepithelial papillary capillary loop recognition. A greedy patch-generating algorithm and a specialized CNN named NBI-Net are designed to extract hierarchical features from patches. We investigate a series of data augmentation techniques to progressively improve the prediction invariance of image scaling and rotation. For classifier boosting, SVM is used as an alternative to softmax to enhance generalization ability. The effectiveness of CNN feature representation ability is discussed for a set of widely used CNN models, including AlexNet, VGG-16, and GoogLeNet. Experiments are conducted on the NBI-ME dataset. The recognition rate is up to 92.74% on the patch level with data augmentation and classifier boosting. The results show that the combined CNN-SVM model beats models of traditional features with SVM as well as the original CNN with softmax. The synthesis results indicate that our system is able to assist clinical diagnosis to a certain extent.
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
Dynamical scattering in coherent hard x-ray nanobeam Bragg diffraction
NASA Astrophysics Data System (ADS)
Pateras, A.; Park, J.; Ahn, Y.; Tilka, J. A.; Holt, M. V.; Kim, H.; Mawst, L. J.; Evans, P. G.
2018-06-01
Unique intensity features arising from dynamical diffraction appear in coherent x-ray nanobeam diffraction patterns of crystals having thicknesses larger than the x-ray extinction depth or exhibiting combinations of nanoscale and mesoscale features. We demonstrate that dynamical scattering effects can be accurately predicted using an optical model combined with the Darwin theory of dynamical x-ray diffraction. The model includes the highly divergent coherent x-ray nanobeams produced by Fresnel zone plate focusing optics and accounts for primary extinction, multiple scattering, and absorption. The simulation accurately reproduces the dynamical scattering features of experimental diffraction patterns acquired from a GaAs/AlGaAs epitaxial heterostructure on a GaAs (001) substrate.
NASA Astrophysics Data System (ADS)
Kong, D.; Donnellan, A.; Pierce, M. E.
2012-12-01
QuakeSim is an online computational framework focused on using remotely sensed geodetic imaging data to model and understand earthquakes. With the rise in online social networking over the last decade, many tools and concepts have been developed that are useful to research groups. In particular, QuakeSim is interested in the ability for researchers to post, share, and annotate files generated by modeling tools in order to facilitate collaboration. To accomplish this, features were added to the preexisting QuakeSim site that include single sign-on, automated saving of output from modeling tools, and a personal user space to manage sharing permissions on these saved files. These features implement OpenID and Lightweight Data Access Protocol (LDAP) technologies to manage files across several different servers, including a web server running Drupal and other servers hosting the computational tools themselves.
Kantarevic, Jasmin; Kralj, Boris
2016-10-01
We develop a stylized principal-agent model with moral hazard and adverse selection to provide a unified framework for understanding some of the most salient features of the recent physician payment reform in Ontario and its impact on physician behavior. These features include the following: (i) physicians can choose a payment contract from a menu that includes an enhanced fee-for-service contract and a blended capitation contract; (ii) the capitation rate is higher, and the cost-reimbursement rate is lower, in the blended capitation contract; (iii) physicians sort selectively into the contracts based on their preferences; and (iv) physicians in the blended capitation model provide fewer services than physicians in the enhanced fee-for-service model. Copyright © 2015 John Wiley & Sons, Ltd.
Development of machine learning models for diagnosis of glaucoma.
Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong
2017-01-01
The study aimed to develop machine learning models that have strong prediction power and interpretability for the diagnosis of glaucoma, based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, and also developed synthesized features from the original features. We then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it using the validation dataset, finally selecting the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, and the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying between glaucomatous and healthy eyes, and can be used for predicting glaucoma in unknown examination records. Clinicians may reference the prediction results and be able to make better decisions. Multiple learning models may be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction and can be used to explain the reasons for specific predictions.
Applying the Sport Education Model to Tennis
ERIC Educational Resources Information Center
Ayvazo, Shiri
2009-01-01
The physical education field abounds with theoretically sound curricular approaches such as fitness education, skill theme approach, tactical approach, and sport education. In an era that emphasizes authentic sport experiences, the Sport Education Model includes unique features that sets it apart from other curricular models and can be a valuable…
Learn about the 2005 update to the NONROAD emissions inventory model and its features and outputs, including hands-on exercises. Keep in mind that the most current model, approved for use in SIPs, is MOVES2014a which absorbed the latest NONROAD model.
Assumptions to the Annual Energy Outlook
2017-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.
Learning Molecular Behaviour May Improve Student Explanatory Models of the Greenhouse Effect
ERIC Educational Resources Information Center
Harris, Sara E.; Gold, Anne U.
2018-01-01
We assessed undergraduates' representations of the greenhouse effect, based on student-generated concept sketches, before and after a 30-min constructivist lesson. Principal component analysis of features in student sketches revealed seven distinct and coherent explanatory models including a new "Molecular Details" model. After the…
A Cloud Microphysics Model for the Gas Giant Planets
NASA Astrophysics Data System (ADS)
Palotai, Csaba J.; Le Beau, Raymond P.; Shankar, Ramanakumar; Flom, Abigail; Lashley, Jacob; McCabe, Tyler
2016-10-01
Recent studies have significantly increased the quality and the number of observed meteorological features on the jovian planets, revealing banded cloud structures and discrete features. Our current understanding of the formation and decay of those clouds also shapes the conceptual models of the underlying atmospheric dynamics. The full interpretation of the new observational data set and the related theories requires modeling these features in a general circulation model (GCM). Here, we present details of our bulk cloud microphysics model that was designed to simulate clouds in the Explicit Planetary Hybrid-Isentropic Coordinate (EPIC) GCM for the jovian planets. The cloud module includes hydrological cycles for each condensable species that consist of interactive vapor, cloud and precipitation phases, and it also accounts for latent heating and cooling throughout the transfer processes (Palotai and Dowling, 2008. Icarus, 194, 303-326). Previously, the self-organizing clouds in our simulations successfully reproduced the vertical and horizontal ammonia cloud structure in the vicinity of Jupiter's Great Red Spot and Oval BA (Palotai et al. 2014, Icarus, 232, 141-156). In our recent work, we extended this model to include water clouds on Jupiter and Saturn, ammonia clouds on Saturn, and methane clouds on Uranus and Neptune. Details of our cloud parameterization scheme, our initial results and their comparison with observations will be shown. The latest version of the EPIC model is available as open-source software from NASA's PDS Atmospheres Node.
Chasin, Rachel; Rumshisky, Anna; Uzuner, Ozlem; Szolovits, Peter
2014-01-01
Objective To evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources. Materials and methods The graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation. Results The topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40–50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help. Discussion Although topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words. Conclusions Topic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited. PMID:24441986
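A minimal sketch of the per-word topic-modeling approach using gensim (an assumed toolkit; the paper does not name its implementation): an LDA model is built from the contexts of one ambiguous word, and the topic distribution of a new context serves as its sense signature.

```python
from gensim import corpora, models

# Placeholder contexts for one ambiguous word ("cold"); the paper derives
# such contexts from MIMIC II notes and sense-tagged Mayo Clinic data.
contexts = [
    ["patient", "cold", "symptoms", "congestion", "fever"],
    ["room", "cold", "temperature", "blanket", "comfort"],
    ["cold", "virus", "cough", "rhinorrhea"],
]
dictionary = corpora.Dictionary(contexts)
corpus = [dictionary.doc2bow(ctx) for ctx in contexts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)

# The topic distribution of a new context acts as its sense signature.
new_ctx = dictionary.doc2bow(["cold", "fever", "cough"])
print(lda.get_document_topics(new_ctx))
```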
NASA Astrophysics Data System (ADS)
Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent
2017-03-01
Quantitative imaging biomarkers are widely used in clinical trials for the tracking and evaluation of medical interventions. Previously, we presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated, and the efficiency of data analysis was also evaluated.
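As an illustration of the statistical module, a linear mixed-effects model (a simple stand-in for the system's GLMM, fitted here with statsmodels) regresses an outcome on an imaging biomarker with per-subject random intercepts; all columns and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Tiny illustrative dataset: repeated motor scores per subject regressed
# on a lesion-volume biomarker, with a random intercept per subject.
df = pd.DataFrame({
    "motor_score":   [42, 55, 38, 60, 47, 52, 35, 58],
    "lesion_volume": [12.1, 4.3, 15.8, 2.9, 9.7, 6.2, 18.4, 3.5],
    "subject":       ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"],
})
m = smf.mixedlm("motor_score ~ lesion_volume", df, groups=df["subject"]).fit()
print(m.summary())   # fixed-effect estimate for the biomarker
```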
ERIC Educational Resources Information Center
Ross, Scott R.; Benning, Stephen D.; Patrick, Christopher J.; Thompson, Angela; Thurston, Amanda
2009-01-01
Psychopathy is a personality disorder that includes interpersonal-affective and antisocial deviance features. The Psychopathic Personality Inventory (PPI) contains two underlying factors (fearless dominance and impulsive antisociality) that may differentially tap these two sets of features. In a mixed-gender sample of undergraduates and prisoners,…
Dong, Fei; Zeng, Qiang; Jiang, Biao; Yu, Xinfeng; Wang, Weiwei; Xu, Jingjing; Yu, Jinna; Li, Qian; Zhang, Minming
2018-05-01
To study whether some of the quantitative enhancement and necrosis features in preoperative conventional MRI (cMRI) have predictive value for epidermal growth factor receptor (EGFR) gene amplification status in glioblastoma multiforme (GBM). Fifty-five patients with pathologically determined GBMs who underwent cMRI were retrospectively reviewed. The following cMRI features were quantitatively measured and recorded: long and short diameters of the enhanced portion (LDE and SDE), maximum and minimum thickness of the enhanced portion (MaxTE and MinTE), and long and short diameters of the necrotic portion (LDN and SDN). Univariate analysis of each feature and a decision tree model fed with all the features were performed. The area under the receiver operating characteristic (ROC) curve (AUC) was used to assess the performance of individual features, and predictive accuracy was used to assess the performance of the model. For a single feature, MinTE showed the best performance in differentiating EGFR gene amplification negative (wild-type) (nEGFR) GBM from EGFR gene amplification positive (pEGFR) GBM, achieving an AUC of 0.68 with a cut-off value of 2.6 mm. The decision tree model included the two features MinTE and SDN and achieved an accuracy of 0.83 in the validation dataset. Our results suggest that quantitative measurement of the features MinTE and SDN in preoperative cMRI had high accuracy for predicting EGFR gene amplification status in GBM.
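A sketch of the reported two-feature decision tree, assuming scikit-learn; the MinTE/SDN values and labels below are illustrative, not the study data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# MinTE: minimum thickness of the enhanced portion (mm);
# SDN: short diameter of the necrotic portion (mm). Values are invented.
X = np.array([[2.1, 8.0], [3.4, 15.2], [1.8, 6.5], [4.0, 20.1],
              [2.4, 9.3], [5.1, 18.7], [1.5, 5.2], [3.9, 22.4]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 0 = nEGFR, 1 = pEGFR

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["MinTE", "SDN"]))  # learned thresholds
```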
Koppes, Abigail N; Kamath, Megha; Pfluger, Courtney A; Burkey, Daniel D; Dokmeci, Mehmet; Wang, Lin; Carrier, Rebecca L
2016-08-22
Native small intestine possesses distinct multi-scale structures (e.g., crypts, villi) not included in traditional 2D intestinal culture models for drug delivery and regenerative medicine. The known impact of structure on cell function motivates exploration of the influence of intestinal topography on the phenotype of cultured epithelial cells, but the irregular, macro- to submicron-scale features of native intestine are challenging to precisely replicate in cellular growth substrates. Herein, we utilized chemical vapor deposition of Parylene C on decellularized porcine small intestine to create polymeric intestinal replicas containing biomimetic irregular, multi-scale structures. These replicas were used as molds for polydimethylsiloxane (PDMS) growth substrates with macro to submicron intestinal topographical features. Resultant PDMS replicas exhibit multiscale resolution including macro- to micro-scale folds, crypt and villus structures, and submicron-scale features of the underlying basement membrane. After 10 d of human epithelial colorectal cell culture on PDMS substrates, the inclusion of biomimetic topographical features enhanced alkaline phosphatase expression 2.3-fold compared to flat controls, suggesting biomimetic topography is important in induced epithelial differentiation. This work presents a facile, inexpensive method for precisely replicating complex hierarchal features of native tissue, towards a new model for regenerative medicine and drug delivery for intestinal disorders and diseases.
Identifying the features of an exercise addiction: A Delphi study
Macfarlane, Lucy; Owens, Glynn; Cruz, Borja del Pozo
2016-01-01
Objectives There remains limited consensus regarding the definition and conceptual basis of exercise addiction. An understanding of the factors motivating maintenance of addictive exercise behavior is important for appropriately targeting intervention. The aims of this study were twofold: first, to establish consensus on features of an exercise addiction using Delphi methodology and second, to identify whether these features are congruous with a conceptual model of exercise addiction adapted from the Work Craving Model. Methods A three-round Delphi process explored the views of participants regarding the features of an exercise addiction. The participants were selected from sport and exercise relevant domains, including physicians, physiotherapists, coaches, trainers, and athletes. Suggestions meeting consensus were considered with regard to the proposed conceptual model. Results and discussion Sixty-three items reached consensus. There was concordance of opinion that exercising excessively is an addiction, and therefore it was appropriate to consider the suggestions in light of the addiction-based conceptual model. Statements reaching consensus were consistent with all three components of the model: learned (negative perfectionism), behavioral (obsessive–compulsive drive), and hedonic (self-worth compensation and reduction of negative affect and withdrawal). Conclusions Delphi methodology allowed consensus to be reached regarding the features of an exercise addiction, and these features were consistent with our hypothesized conceptual model of exercise addiction. This study is the first to have applied Delphi methodology to the exercise addiction field, and therefore introduces a novel approach to exercise addiction research that can be used as a template to stimulate future examination using this technique. PMID:27554504
AIRID: an application of the KAS/Prospector expert system builder to airplane identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, J.P.
1984-01-01
The Knowledge Acquisition System/Prospector expert system building tool developed by SRI International has been used to construct an expert system to identify aircraft on the basis of observables such as wing shape, engine number/location, fuselage shape, and tail assembly shape. Additional detailed features are allowed to influence the identification as other favorable features. Constraints on the observations imposed by bad weather and distant observations have been included as contexts to the models. Models for Soviet and US fighter aircraft have been included. Inclusion of other types of aircraft such as bombers, transports, and reconnaissance craft is straightforward. Two models permit exploration of the interaction of semantic and taxonomic networks with the models. A full set of text data for fluid communication with the user has been included. The use of demons as triggered output responses to enhance utility to the user has been explored. This paper presents discussion of the ease of building the expert system using this powerful tool and problems encountered in the construction process.
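As an illustration of the evidence-accumulation style of identification described above, the sketch below scores candidate aircraft models against reported observables, with a visibility factor standing in for the weather/distance contexts. The aircraft models, observables, and weights are invented for illustration and are not the actual KAS/Prospector knowledge base.

```python
# Hypothetical rule base: each aircraft model lists expected observables.
AIRCRAFT_MODELS = {
    "fighter_A": {"wing_shape": "delta", "engine_count": 1, "tail": "single_fin"},
    "fighter_B": {"wing_shape": "swept", "engine_count": 2, "tail": "twin_fin"},
    "bomber_C": {"wing_shape": "swept", "engine_count": 4, "tail": "single_fin"},
}

def score(observation: dict, model: dict, visibility: float = 1.0) -> float:
    """Accumulate evidence for one aircraft model. The visibility factor in
    (0, 1] plays the role of a 'context' that discounts evidence gathered
    under bad weather or at long range, as described in the abstract."""
    matches = sum(1 for k, v in model.items() if observation.get(k) == v)
    return visibility * matches / len(model)

obs = {"wing_shape": "delta", "engine_count": 1, "tail": "single_fin"}
ranked = sorted(AIRCRAFT_MODELS,
                key=lambda m: -score(obs, AIRCRAFT_MODELS[m], visibility=0.8))
print(ranked[0])  # -> fighter_A
```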
Simulating Thermal Cycling and Isothermal Deformation Response of Polycrystalline NiTi
NASA Technical Reports Server (NTRS)
Manchiraju, Sivom; Gaydosh, Darrell J.; Noebe, Ronald D.; Anderson, Peter M.
2011-01-01
A microstructure-based FEM model that couples crystal plasticity, crystallographic descriptions of the B2-B19' martensitic phase transformation, and anisotropic elasticity is used to simulate thermal cycling and isothermal deformation in polycrystalline NiTi (49.9 at.% Ni). The model inputs include anisotropic elastic properties, polycrystalline texture, DSC data, and a subset of isothermal deformation and load-biased thermal cycling data. A key experimental trend is captured, namely, the transformation strain during thermal cycling is predicted to reach a peak with increasing bias stress, due to the onset of plasticity at larger bias stress. Plasticity induces internal stress that affects both thermal cycling and isothermal deformation responses. Affected thermal cycling features include hysteretic width, two-way shape memory effect, and evolution of texture with increasing bias stress. Affected isothermal deformation features include increased hardening during loading and retained martensite after unloading. These trends are not captured by microstructural models that lack plasticity, nor are they all captured in a robust manner by phenomenological approaches. Despite this advance in microstructural modeling, quantitative differences exist, such as underprediction of the open-loop strain during thermal cycling.
A semantic model for multimodal data mining in healthcare information systems.
Iakovidis, Dimitris; Smailis, Christos
2012-01-01
Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images, and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e., measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g., those describing body parts, anatomies, and pathological findings. The proposed model has been developed in the web ontology language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.
Outer satellite atmospheres: Their nature and planetary interactions
NASA Technical Reports Server (NTRS)
Smyth, W. H.
1981-01-01
Modeling capabilities and initial model calculations are reported for the peculiar directional features of the Io sodium cloud discovered by Pilcher and the extended atomic oxygen atmosphere of Io discovered by Brown. Model results explaining the directional feature by a localized emission from the satellite are encouraging, but as yet, inconclusive; whereas for the oxygen cloud, an escape rate of 1 to 2 × 10^27 atoms/sec or higher from Io is suggested. Preliminary modeling efforts were also initiated for the extended hydrogen ring-atmosphere of Saturn detected by the Voyager spacecraft and for possible extended atmospheres of some of the smaller satellites located in the E-ring. Continuing research efforts reported for the Io sodium cloud include further refinement in the modeling of the east-west asymmetry data, the asymmetric line profile shape, and the intersection of the cloud with the Io plasma torus. In addition, the completed pre-Voyager modeling of Titan's hydrogen torus is included and the near completed model development for the extended atmosphere of comets is discussed.
Combining DRGs and per diem payments in the private sector: the Equitable Payment Model.
Hanning, Brian W T
2005-02-01
The many types of payment models used in the Australian private sector are reviewed. Their features are compared and contrasted with those desirable in an optimal private sector payment model. The EPM™ (Equitable Payment Model) is discussed and its consistency with the desirable features of an optimal private sector payment model outlined. These include being based on a robust classification system, nationally benchmarked length of stay (LOS) results, and nationally benchmarked relative cost, and encouraging continual improvement in efficiency to the benefit of both health funds and private hospitals. The advantages, in the context of the private sector, of EPM™ being a per diem model, albeit one very different from current per diem models, are discussed. The advantages of EPM™ for hospitals and health funds are outlined.
ERIC Educational Resources Information Center
State Univ. of New York, Buffalo. Coll. at Buffalo.
Project Flagship, the 1974 Distinguished Achievement Awards entry from State University College at Buffalo, New York, is a competency-based teacher education model using laboratory instruction. The special features of this model include a) stated objectives and criteria for evaluation, b) individualized instruction, c) individualized learning…
System and Method for Modeling the Flow Performance Features of an Object
NASA Technical Reports Server (NTRS)
Jorgensen, Charles (Inventor); Ross, James (Inventor)
1997-01-01
The method and apparatus includes a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, and other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated in 'real time' as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting inputs. The invention can also be applied in other similar static flow modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and other such disciplines, for example, the static testing of cars, sails, foils, propellers, keels, rudders, turbines, fins, and the like, in a wind tunnel, water trough, or other flowing medium.
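A minimal sketch of this surrogate-modeling idea, assuming synthetic data and scikit-learn's MLPRegressor as a stand-in for the patented network (the patent specifies a modified Levenberg-Marquardt optimizer, which scikit-learn does not provide; 'lbfgs' substitutes here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy training set: inputs are (LE flap, TE flap, angle of attack),
# outputs are (lift, drag); the response surfaces below are invented.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
lift = 5 * X[:, 2] + X[:, 0] - 0.5 * X[:, 2] ** 2
drag = 0.05 + 0.1 * X[:, 2] ** 2
y = np.column_stack([lift, drag])

model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                     max_iter=2000).fit(X, y)
# Once trained, the surrogate estimates performance for a new configuration.
print(model.predict([[0.1, -0.2, 0.5]]))  # estimated [lift, drag]
```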
Atmospheric electric field and current configurations in the vicinity of mountains
NASA Technical Reports Server (NTRS)
Tzur, I.; Roble, R. G.; Adams, J. C.
1985-01-01
A number of investigations have been conducted regarding the electrical distortion produced by the earth's orography. Hays and Roble (1979) utilized their global model of atmospheric electricity to study the effect of large-scale orographic features on the currents and fields of the global circuit. The present paper is concerned with an extension of the previous work, taking into account an application of model calculations to orographic features with different configurations and an examination of the electric mapping of these features to ionospheric heights. A two-dimensional quasi-static numerical model of atmospheric electricity is employed. The model contains a detailed electrical conductivity profile. The model region extends from the surface to 100 km and includes the equalization layer located above approximately 70 km. The obtained results show that the electric field and current configurations above mountains depend upon the curvature of the mountain slopes, on the width of the mountain, and on the columnar resistance above the mountain (or mountain height).
Kodis, Mali'o; Galante, Peter; Sterling, Eleanor J; Blair, Mary E
2018-04-26
Under the threat of ongoing and projected climate change, communities in the Pacific Islands face challenges of adapting culture and lifestyle to accommodate a changing landscape. Few models can effectively predict how biocultural livelihoods might be impacted. Here, we examine how environmental and anthropogenic factors influence an ecological niche model (ENM) for the realized niche of cultivated taro (Colocasia esculenta) in Hawaii. We created and tuned two sets of ENMs: one using only environmental variables, and one using both environmental and cultural characteristics of Hawaii. These models were projected under two different Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathways (RCPs) for 2070. Models were selected and evaluated using average omission rate and area under the receiver operating characteristic curve (AUC). We compared optimal model predictions by comparing the percentage of taro plots predicted present and measured ENM overlap using Schoener's D statistic. The model including only environmental variables consisted of 19 Worldclim bioclimatic variables, in addition to slope, altitude, distance to perennial streams, soil evaporation, and soil moisture. The optimal model with environmental variables plus anthropogenic features also included a road density variable (which we assumed as a proxy for urbanization) and a variable indicating agricultural lands of importance to the state of Hawaii. The model including anthropogenic features performed better than the environment-only model based on omission rate, AUC, and review of spatial projections. The two models also differed in spatial projections for taro under anticipated future climate change. Our results demonstrate how ENMs including anthropogenic features can predict which areas might be best suited to plant cultivated species in the future, and how these areas could change under various climate projections. These predictions might inform biocultural conservation priorities and initiatives. In addition, we discuss the incongruences that arise when traditional ENM theory is applied to species whose distribution has been significantly impacted by human intervention, particularly at a fine scale relevant to biocultural conservation initiatives. © 2018 by the Ecological Society of America.
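Schoener's D, used above to compare model projections, is a standard overlap statistic: with both suitability surfaces normalized to sum to one, D = 1 - 0.5 Σ|p1 - p2|, ranging from 0 (no overlap) to 1 (identical). A minimal sketch with random arrays standing in for the two projections:

```python
import numpy as np

def schoeners_d(p1: np.ndarray, p2: np.ndarray) -> float:
    """Schoener's D between two suitability surfaces over shared cells."""
    mask = ~(np.isnan(p1) | np.isnan(p2))        # ignore no-data cells
    a, b = p1[mask], p2[mask]
    a, b = a / a.sum(), b / b.sum()              # normalize to sum to 1
    return 1.0 - 0.5 * np.abs(a - b).sum()

env_only = np.random.rand(100, 100)              # stand-in ENM projection
env_plus_human = np.random.rand(100, 100)        # stand-in ENM projection
print(schoeners_d(env_only, env_plus_human))
```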
Yang, Fan; Xu, Ying-Ying; Shen, Hong-Bin
2014-01-01
Human protein subcellular location prediction can provide critical knowledge for understanding a protein's function. Given the significant progress in digital microscopy, automated image-based protein subcellular location classification is urgently needed. In this paper, we aim to investigate more representative image features that can be effectively used for dealing with multilabel subcellular image samples. We prepared a large multilabel immunohistochemistry (IHC) image benchmark from the Human Protein Atlas database and tested the performance of different local texture features, including the completed local binary pattern, local tetra pattern, and the standard local binary pattern feature. According to our experimental results from binary relevance multilabel machine learning models, the completed local binary pattern and local tetra pattern are more discriminative for describing IHC images when compared to the traditional local binary pattern descriptor. The combination of these two novel local pattern features and the conventional global texture features is also studied. The enhanced performance of the final binary relevance classification model trained on the combined feature space demonstrates that different features are complementary to each other and thus capable of improving the accuracy of classification.
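A minimal sketch of two ingredients of such a pipeline, LBP texture histograms and binary relevance classification, using scikit-image and scikit-learn on synthetic stand-ins for the IHC images. The completed LBP and local tetra pattern are not available in scikit-image, so the standard uniform LBP substitutes; OneVsRestClassifier implements binary relevance for multilabel targets.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP histogram as a global texture descriptor for one image."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = (rng.random((40, 64, 64)) * 255).astype(np.uint8)  # stand-in IHC
X = np.array([lbp_histogram(im) for im in images])
Y = rng.integers(0, 2, size=(40, 3))   # multilabel targets: 3 locations

# Binary relevance: one independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:2]))
```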
NASA Astrophysics Data System (ADS)
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: Investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti-PD-1 checkpoint blockade. Using the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images, including 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features achieved an accuracy and area under the ROC curve (AUROC) on the validation dataset of 87.5% and 0.82, respectively, compared to 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show a better ability to predict immunotherapy response compared to individual image features.
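A hedged sketch of feature-level fusion with an SVM and AUROC evaluation; the arrays below are random stand-ins for the extracted primary and fusion-image features, with dimensions borrowed from the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
primary = rng.normal(size=(64, 195))   # primary PET/CT features
fusion = rng.normal(size=(64, 100))    # a subset of fusion-image features
y = rng.integers(0, 2, size=64)        # responder vs. non-responder

X = np.hstack([primary, fusion])       # early (feature-level) fusion
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(probability=True)).fit(Xtr, ytr)
print(roc_auc_score(yte, svm.predict_proba(Xte)[:, 1]))
```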
Toward Best Practice: An Analysis of the Efficacy of Curriculum Models in Gifted Education
ERIC Educational Resources Information Center
VanTassel-Baska, Joyce; Brown, Elissa F.
2007-01-01
This article provides an overview of existing research on 11 curriculum models in the field of gifted education, including the schoolwide enrichment model and the talent search model, and several others that have been used to shape high-level learning experiences for gifted students. The models are critiqued according to the key features they…
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Utility of texture analysis for quantifying hepatic fibrosis on proton density MRI.
Yu, HeiShun; Buch, Karen; Li, Baojun; O'Brien, Michael; Soto, Jorge; Jara, Hernan; Anderson, Stephan W
2015-11-01
To evaluate the potential utility of texture analysis of proton density maps for quantifying hepatic fibrosis in a murine model of hepatic fibrosis. Following Institutional Animal Care and Use Committee (IACUC) approval, a dietary model of hepatic fibrosis was used and 15 ex vivo murine liver tissues were examined. All images were acquired using a 30 mm bore 11.7T magnetic resonance imaging (MRI) scanner with a multiecho spin-echo sequence. A texture analysis was employed extracting multiple texture features including histogram-based, gray-level co-occurrence matrix-based (GLCM), gray-level run-length-based features (GLRL), gray level gradient matrix (GLGM), and Laws' features. Texture features were correlated with histopathologic and digital image analysis of hepatic fibrosis. Histogram features demonstrated very weak to moderate correlations (r = -0.29 to 0.51) with hepatic fibrosis. GLCM features correlation and contrast demonstrated moderate-to-strong correlations (r = -0.71 and 0.59, respectively) with hepatic fibrosis. Moderate correlations were seen between hepatic fibrosis and the GLRL feature short run low gray-level emphasis (SRLGE) (r = -0.51). GLGM features demonstrate very weak to weak correlations with hepatic fibrosis (r = -0.27 to 0.09). Moderate correlations were seen between hepatic fibrosis and Laws' features L6 and L7 (r = 0.58). This study demonstrates the utility of texture analysis applied to proton density MRI in a murine liver fibrosis model and validates the potential utility of texture-based features for the noninvasive, quantitative assessment of hepatic fibrosis. © 2015 Wiley Periodicals, Inc.
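A minimal sketch of the GLCM step, assuming scikit-image >= 0.19 (the graycomatrix/graycoprops spellings) and random 8-bit arrays standing in for the proton density maps and histology-derived fibrosis scores:

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img8):
    """GLCM contrast and correlation for one 8-bit image."""
    glcm = graycomatrix(img8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return (graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "correlation")[0, 0])

rng = np.random.default_rng(0)
maps = (rng.random((15, 128, 128)) * 255).astype(np.uint8)  # stand-in PD maps
fibrosis = rng.random(15)                                   # stand-in scores
contrast = np.array([glcm_features(m)[0] for m in maps])
r, p = pearsonr(contrast, fibrosis)                         # feature vs. score
print(f"r = {r:.2f}, p = {p:.3f}")
```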
Jin, Mingwu; Deng, Weishu
2018-05-15
There is a spectrum of the progression from healthy control (HC) to mild cognitive impairment (MCI) without conversion to Alzheimer's disease (AD), to MCI with conversion to AD (cMCI), and to AD. This study aims to predict the different disease stages using brain structural information provided by magnetic resonance imaging (MRI) data. The neighborhood component analysis (NCA) is applied to select the most powerful features for prediction. An ensemble decision tree classifier is built to predict which group a subject belongs to. The best features and model parameters are determined by cross-validation of the training data. Our results show that 16 out of a total of 429 features were selected by NCA using 240 training subjects, including the MMSE score and structural measures in memory-related regions. The boosting tree model with NCA features can achieve a prediction accuracy of 56.25% on 160 test subjects. For comparison, principal component analysis (PCA) and sequential feature selection (SFS) are used for feature selection, while a support vector machine (SVM) is used for classification. The boosting tree model with NCA features outperforms all other combinations of feature selection and classification methods. The results suggest that NCA is a better feature selection strategy than PCA and SFS for the data used in this study. The ensemble tree classifier with boosting is more powerful than SVM for predicting the subject group. However, more advanced feature selection and classification methods, or additional measures besides structural MRI, may be needed to improve the prediction performance. Copyright © 2018 Elsevier B.V. All rights reserved.
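A rough scikit-learn sketch of the NCA-plus-boosted-trees pipeline. Note that scikit-learn's NeighborhoodComponentsAnalysis learns a linear transformation rather than the per-feature weights of the feature-selection variant used in studies like this one, so it serves only as an approximate stand-in, with random arrays in place of the MRI measures:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 429))   # stand-in structural MRI features
y = rng.integers(0, 4, size=240)  # HC / MCI / cMCI / AD labels

# Supervised NCA reduction followed by a boosted tree ensemble.
pipe = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=16, random_state=0),
    GradientBoostingClassifier(random_state=0),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```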
Nie, Zhi; Vairavan, Srinivasan; Narayan, Vaibhav A; Ye, Jieping; Li, Qingqin S
2018-01-01
Identification of risk factors of treatment resistance may be useful to guide treatment selection, avoid inefficient trial-and-error, and improve major depressive disorder (MDD) care. We extended the work in predictive modeling of treatment resistant depression (TRD) via partition of the data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) cohort into a training and a testing dataset. We also included data from a small yet completely independent cohort RIS-INT-93 as an external test dataset. We used features from enrollment and level 1 treatment (up to week 2 response only) of STAR*D to explore the feature space comprehensively and applied machine learning methods to model TRD outcome at level 2. For TRD defined using QIDS-C16 remission criteria, multiple machine learning models were internally cross-validated in the STAR*D training dataset and externally validated in both the STAR*D testing dataset and RIS-INT-93 independent dataset with an area under the receiver operating characteristic curve (AUC) of 0.70-0.78 and 0.72-0.77, respectively. The upper bound for the AUC achievable with the full set of features could be as high as 0.78 in the STAR*D testing dataset. Model developed using top 30 features identified using feature selection technique (k-means clustering followed by χ2 test) achieved an AUC of 0.77 in the STAR*D testing dataset. In addition, the model developed using overlapping features between STAR*D and RIS-INT-93, achieved an AUC of > 0.70 in both the STAR*D testing and RIS-INT-93 datasets. Among all the features explored in STAR*D and RIS-INT-93 datasets, the most important feature was early or initial treatment response or symptom severity at week 2. These results indicate that prediction of TRD prior to undergoing a second round of antidepressant treatment could be feasible even in the absence of biomarker data.
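A minimal sketch of top-k χ2 feature selection with held-out AUC validation. The study pairs the χ2 test with k-means clustering; here synthetic binary clinical items stand in (χ2 requires non-negative features), and the classifier is an illustrative choice:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 300)).astype(float)  # binary items
y = rng.integers(0, 2, size=1000)                       # TRD vs. non-TRD

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
sel = SelectKBest(chi2, k=30).fit(Xtr, ytr)             # keep top-30 features
clf = LogisticRegression(max_iter=1000).fit(sel.transform(Xtr), ytr)
print(roc_auc_score(yte, clf.predict_proba(sel.transform(Xte))[:, 1]))
```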
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are a commonly utilized feature type in Electroencephalogram (EEG) studies, as they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by increasing the timely and correct responsiveness of such systems to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including (1) a well-known set of commonly used orders suggested by the literature, (2) conventional order estimation approaches (e.g., AIC, BIC, and FPE), and (3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3, and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets.
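One simple way to mix orders, sketched below with a toy signal and statsmodels' AutoReg, is to weight candidate orders by Akaike weights rather than committing to the single AIC-best order. This illustrates the mixture idea only; it is not the paper's evolutionary fusion or ensemble mechanism:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40, 512)) + 0.3 * rng.normal(size=512)  # toy EEG

orders = range(2, 13)
fits = [AutoReg(x, lags=p).fit() for p in orders]
aic = np.array([f.aic for f in fits])

# Akaike weights: relative evidence for each order, used as mixture weights.
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()

# Blend the one-step-ahead forecasts of all candidate orders.
preds = np.array([f.predict(start=len(x), end=len(x))[0] for f in fits])
print("mixture one-step forecast:", float(w @ preds))
```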
Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches.
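A much-simplified sketch of the no-imputation idea: fit one model per block-availability pattern and route each subject to the model matching its observed sources. The actual bi-level model couples feature-level and source-level learning, which this stand-in omits; the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
mri = rng.normal(size=(n, 20))              # everyone has MRI
csf = rng.normal(size=(n, 5))               # only some have CSF
has_csf = rng.random(n) < 0.5               # block-wise missingness pattern
y = rng.integers(0, 2, size=n)

# One model per availability pattern; missing blocks are never imputed.
X_full = np.hstack([mri, csf])
m_full = LogisticRegression(max_iter=1000).fit(X_full[has_csf], y[has_csf])
m_mri = LogisticRegression(max_iter=1000).fit(mri[~has_csf], y[~has_csf])

def predict_proba(i):
    if has_csf[i]:
        return m_full.predict_proba(X_full[i:i + 1])[0, 1]
    return m_mri.predict_proba(mri[i:i + 1])[0, 1]

print(predict_proba(0), predict_proba(1))
```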
Simmering, Vanessa R; Miller, Hilary E; Bohache, Kevin
2015-05-01
Research on visual working memory has focused on characterizing the nature of capacity limits as "slots" or "resources" based almost exclusively on adults' performance with little consideration for developmental change. Here we argue that understanding how visual working memory develops can shed new light onto the nature of representations. We present an alternative model, the Dynamic Field Theory (DFT), which can capture effects that have been previously attributed either to "slot" or "resource" explanations. The DFT includes a specific developmental mechanism to account for improvements in both resolution and capacity of visual working memory throughout childhood. Here we show how development in the DFT can account for different capacity estimates across feature types (i.e., color and shape). The current paper tests this account by comparing children's (3, 5, and 7 years of age) performance across different feature types. Results showed that capacity for colors increased faster over development than capacity for shapes. A second experiment confirmed this difference across feature types within subjects, but also showed that the difference can be attenuated by testing memory for less familiar colors. Model simulations demonstrate how developmental changes in connectivity within the model, purportedly arising through experience, can capture differences across feature types.
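A toy one-dimensional dynamic field illustrates the self-sustaining peak dynamics that models of this family use to hold a feature value in working memory after the stimulus disappears; the kernel and parameters below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

n, tau, h = 181, 10.0, -5.0                  # field size, time constant, rest level
x = np.arange(n)
def gauss(c, s): return np.exp(-0.5 * ((x - c) / s) ** 2)

w = 8.0 * gauss(n // 2, 5) - 2.5             # local excitation, global inhibition
u = np.full(n, h)                            # field activation
stim = 7.0 * gauss(60, 5)                    # input at one feature value

for t in range(400):
    f = 1 / (1 + np.exp(-4 * np.clip(u, -60, 60)))   # sigmoidal field output
    lateral = np.convolve(f, w, mode="same")
    inp = stim if t < 100 else 0.0           # stimulus removed after 100 steps
    u += (-u + h + inp + lateral) / tau      # Amari-style field dynamics

print("self-sustained peak location:", int(np.argmax(u)))  # ~60
```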
Models of clinical reasoning with a focus on general practice: A critical review.
Yazdani, Shahram; Hosseinzadeh, Mohammad; Hosseini, Fakhrolsadat
2017-10-01
Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from the specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. Therefore, we aimed to critically review the clinical reasoning models with a focus on clinical reasoning in general practice or the clinical reasoning of general practitioners, to find out to what extent the existing models explain clinical reasoning specifically in primary care, and also to identify the gaps in these models for use in primary care settings. A systematic search to find models of clinical reasoning was performed. For more precision, we excluded studies that focused on neurobiological aspects of reasoning, reasoning in disciplines other than medicine, or decision making or decision analysis on treatment or management plans. All the articles and documents were first scanned to see whether they included important relevant content or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed, and appraisals or critiques of these models by other authors were included. The reviewed documents on the models were synthesized. Six models of clinical reasoning were identified, including the hypothetico-deductive model, pattern recognition, a dual-process diagnostic reasoning model, a pathway for clinical reasoning, an integrative model of clinical reasoning, and a model of diagnostic reasoning strategies in primary care. Only one model specifically focused on general practitioners' reasoning. A model of clinical reasoning that includes the specific features of general practice, to better help general practitioners with the difficulties of clinical reasoning in this setting, is needed.
Yeung, Wing-Fai; Chung, Ka-Fai; Zhang, Nevin Lian-Wen; Zhang, Shi Ping; Yung, Kam-Ping; Chen, Pei-Xian; Ho, Yan-Yee
2016-01-01
Chinese medicine (CM) syndrome (zheng) differentiation is based on the co-occurrence of CM manifestation profiles, such as signs and symptoms, and pulse and tongue features. Insomnia is a symptom that frequently occurs in major depressive disorder despite adequate antidepressant treatment. This study aims to identify co-occurrence patterns in participants with persistent insomnia and major depressive disorder from clinical feature data using latent tree analysis, and to compare the latent variables with relevant CM syndromes. One hundred and forty-two participants with persistent insomnia and a history of major depressive disorder completed a standardized checklist (the Chinese Medicine Insomnia Symptom Checklist) specially developed for CM syndrome classification of insomnia. The checklist covers symptoms and signs, including tongue and pulse features. The clinical features assessed by the checklist were analyzed using Lantern software. CM practitioners with relevant experience compared the clinical feature variables under each latent variable with reference to relevant CM syndromes, based on a previous review of CM syndromes. The symptom data were analyzed to build the latent tree model and the model with the highest Bayes information criterion score was regarded as the best model. This model contained 18 latent variables, each of which divided participants into two clusters. Six clusters represented more than 50 % of the sample. The clinical feature co-occurrence patterns of these six clusters were interpreted as the CM syndromes Liver qi stagnation transforming into fire, Liver fire flaming upward, Stomach disharmony, Hyperactivity of fire due to yin deficiency, Heart-kidney noninteraction, and Qi deficiency of the heart and gallbladder. The clinical feature variables that contributed significant cumulative information coverage (at least 95 %) were identified. Latent tree model analysis on a sample of depressed participants with insomnia revealed 13 clinical feature co-occurrence patterns, four mutual-exclusion patterns, and one pattern with a single clinical feature variable.
Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.
Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T
2013-12-06
The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.
Stratospheric temperatures and tracer transport in a nudged 4-year middle atmosphere GCM simulation
NASA Astrophysics Data System (ADS)
van Aalst, M. K.; Lelieveld, J.; Steil, B.; Brühl, C.; Jöckel, P.; Giorgetta, M. A.; Roelofs, G.-J.
2005-02-01
We have performed a 4-year simulation with the Middle Atmosphere General Circulation Model MAECHAM5/MESSy, while slightly nudging the model's meteorology in the free troposphere (below 113 hPa) towards ECMWF analyses. We show that the nudging technique, which leaves the middle atmosphere almost entirely free, enables comparisons with synoptic observations. The model successfully reproduces many specific features of the interannual variability, including details of the Antarctic vortex structure. In the Arctic, the model captures general features of the interannual variability, but falls short in reproducing the timing of sudden stratospheric warmings. A detailed comparison of the nudged model simulations with ECMWF data shows that the model simulates realistic stratospheric temperature distributions and variabilities, including the temperature minima in the Antarctic vortex. Some small (a few K) model biases were also identified, including a summer cold bias at both poles, and a general cold bias in the lower stratosphere, most pronounced in midlatitudes. A comparison of tracer distributions with HALOE observations shows that the model successfully reproduces specific aspects of the instantaneous circulation. The main tracer transport deficiencies occur in the polar lowermost stratosphere. These are related to the tropopause altitude as well as the tracer advection scheme and model resolution. The additional nudging of equatorial zonal winds, forcing the quasi-biennial oscillation, significantly improves stratospheric temperatures and tracer distributions.
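The nudging itself is Newtonian relaxation: a tendency term proportional to the difference between the model state and the analysis, applied only below the chosen level. A scalar sketch with invented values:

```python
# Newtonian relaxation ("nudging"): dx/dt gains a term (x_analysis - x)/tau.
def step(x, tendency, x_ana, dt, tau):
    return x + dt * (tendency + (x_ana - x) / tau)

x, dt, tau = 250.0, 900.0, 6 * 3600.0   # temperature (K), 15-min step, 6-h tau
for _ in range(96):                     # integrate one day
    x = step(x, tendency=-1e-5, x_ana=252.0, dt=dt, tau=tau)
print(x)                                # relaxed toward the analysis value
```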
You prime what you code: The fAIM model of priming of pop-out
Meeter, Martijn
2017-01-01
Our visual brain makes use of recent experience to interact with the visual world and efficiently select relevant information. This is exemplified by speeded search when target and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread and not restricted to just one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including for typical stimulus dimensions such as ‘color’ and for less obvious dimensions such as ‘spikiness’ of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can be found for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer's goals—without any representation of goals in the model. We conclude that priming is best considered as a consequence of a general adaptation of the brain to visual input, and not as a peculiarity of visual search.
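A toy rendering of the feature-gain idea: gains drift toward recently attended (target) feature values and away from distractor values, so simulated search speeds up when features repeat and slows again on a switch. All parameters are invented for illustration and are not fAIM's:

```python
gain = {"red": 1.0, "green": 1.0, "blue": 1.0}   # per-feature gains
alpha = 0.2                                      # hypothetical learning rate

def search_time(target, distractor):
    # higher target gain and lower distractor gain -> faster search
    return 600.0 / gain[target] * gain[distractor]

def update(target, distractor):
    gain[target] += alpha * (1.5 - gain[target])          # boost target
    gain[distractor] += alpha * (0.7 - gain[distractor])  # suppress distractor

trials = [("red", "green")] * 5 + [("green", "red")]      # switch on last trial
for tgt, dis in trials:
    print(tgt, f"{search_time(tgt, dis):6.1f} ms")
    update(tgt, dis)
# repeated 'red' targets get steadily faster; the switch trial is slow again
```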
Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model.
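A compact binary-PSO feature-selection sketch: the standard velocity update with a sigmoid transfer to binary positions, evaluated by cross-validated accuracy. The settings and the k-NN wrapper are illustrative, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
n_p, n_f, w, c1, c2 = 12, X.shape[1], 0.7, 1.5, 1.5

def fitness(mask):
    m = mask.astype(bool)
    if not m.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, m], y, cv=3).mean()

pos = (rng.random((n_p, n_f)) < 0.5).astype(float)   # binary feature masks
vel = rng.normal(0, 0.1, (n_p, n_f))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(20):
    g = pbest[pbest_fit.argmax()]                    # global best mask
    r1, r2 = rng.random((2, n_p, n_f))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = (rng.random((n_p, n_f)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]

best = pbest[pbest_fit.argmax()]
print("best CV accuracy:", pbest_fit.max(), "features kept:", int(best.sum()))
```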
Modeling carbon and nitrogen biogeochemistry in forest ecosystems
Changsheng Li; Carl Trettin; Ge Sun; Steve McNulty; Klaus Butterbach-Bahl
2005-01-01
A forest biogeochemical model, Forest-DNDC, was developed to quantify carbon sequestration in and trace gas emissions from forest ecosystems. Forest-DNDC was constructed by integrating two existing models, PnET and DNDC, with several new features including nitrification, a forest litter layer, and soil freezing and thawing. PnET is a forest physiological model predicting...
Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze
2010-12-01
This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.
Extending Primitive Spatial Data Models to Include Semantics
NASA Astrophysics Data System (ADS)
Reitsma, F.; Batcheller, J.
2009-04-01
Our traditional geospatial data model involves associating some measurable quality, such as temperature, or observable feature, such as a tree, with a point or region in space and time. When capturing data we implicitly subscribe to some kind of conceptualisation. If we can make this explicit in an ontology and associate it with the captured data, we can leverage formal semantics to reason with the concepts represented in our spatial data sets. To do so, we extend our fundamental representation of geospatial data by including in the basic data model a URI that links it to the ontology defining our conceptualisation. We thus extend Goodchild et al.'s geo-atom [1] with the addition of a URI: (x, Z, z(x), URI). This provides us with pixel- or feature-level knowledge and the ability to create layers of data from a set of pixels or features that might be drawn from a database based on their semantics. Using open source tools, we present a prototype that involves simple reasoning as a proof of concept. References [1] M.F. Goodchild, M. Yuan, and T.J. Cova. Towards a general theory of geographic representation in GIS. International Journal of Geographical Information Science, 21(3):239-260, 2007.
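The extended geo-atom maps directly onto a small data structure; a minimal sketch with an illustrative ontology URI:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoAtom:
    """The extended geo-atom (x, Z, z(x), URI): a location, a property,
    the property's value there, and a link to the ontology concept that
    defines the conceptualisation."""
    x: tuple    # point or region in space-time, e.g. (lon, lat, time)
    Z: str      # the property being described
    z: object   # the value of Z at x
    uri: str    # ontology concept (illustrative URI below)

atom = GeoAtom(x=(-3.2, 55.9, "2009-04-01"), Z="landCover", z="tree",
               uri="http://example.org/ontology#Tree")
print(atom)
```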
Why replication is important in landscape genetics: American black bear in the Rocky Mountains
Short Bull, R.A.; Cushman, S.A.; Mace, R.; Chilton, T.; Kendall, K.C.; Landguth, E.L.; Schwartz, Maurice L.; McKelvey, K.; Allendorf, F.W.; Luikart, G.
2011-01-01
We investigated how landscape features influence gene flow of black bears by testing the relative support for 36 alternative landscape resistance hypotheses, including isolation by distance (IBD), in each of 12 study areas in the north central U.S. Rocky Mountains. The study areas all contained the same basic elements, but differed in extent of forest fragmentation, altitude, variation in elevation and road coverage. In all but one of the study areas, isolation by landscape resistance was more supported than IBD, suggesting gene flow is likely influenced by elevation, forest cover, and roads. However, the landscape features influencing gene flow varied among study areas. Using subsets of loci usually gave models with very similar landscape features influencing gene flow to those obtained with all loci, suggesting the landscape features influencing gene flow were correctly identified. To test whether the variability of supported landscape features across study areas resulted from landscape differences among study areas, we conducted a limiting factor analysis. We found that features were supported in landscape models only when the features were highly variable. This is perhaps not surprising but suggests an important cautionary note – that if landscape features are not found to influence gene flow, researchers should not automatically conclude that the features are unimportant to the species’ movement and gene flow. Failure to investigate multiple study areas that have a range of variability in landscape features could cause misleading inferences about which landscape features generally limit gene flow. This could lead to potentially erroneous identification of corridors and barriers if models are transferred between areas with different landscape characteristics.
Supporting the Growing Needs of the GIS Industry
NASA Technical Reports Server (NTRS)
2003-01-01
Visual Learning Systems, Inc. (VLS), of Missoula, Montana, has developed a commercial software application called Feature Analyst. Feature Analyst was conceived under a Small Business Innovation Research (SBIR) contract with NASA's Stennis Space Center, and through the Montana State University TechLink Center, an organization funded by NASA and the U.S. Department of Defense to link regional companies with Federal laboratories for joint research and technology transfer. The software provides a paradigm shift to automated feature extraction, as it utilizes spectral, spatial, temporal, and ancillary information to model the feature extraction process; presents the ability to remove clutter; incorporates advanced machine learning techniques to supply unparalleled levels of accuracy; and includes an exceedingly simple interface for feature extraction.
Community Crowd-Funded Solar Finance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jagerson, Gordon "Ty"
The award supported the demonstration and development of the Village Power Platform, which enables community organizations to more readily develop, finance, and operate solar installations for local community organizations. The platform enables partial or complete local ownership of the solar installation. The award specifically supported key features including financial modeling tools, community communications tools, crowdfunding mechanisms, a mobile app, and other critical features.
ERIC Educational Resources Information Center
Tillman, Daniel; An, Song; Boren, Rachel; Slykhuis, David
2014-01-01
This study assessed the impact of nine lessons incorporating a NASA-themed transmedia book featuring digital fabrication activities on 5th-grade students (n = 29) recognized as advanced in mathematics based on their academic record. Data collected included a pretest and posttest of science content questions taken from released Virginia Standards…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
...-traditional, large, non-metallic panels. In order to provide a level of safety that is equivalent to that... Airplanes; Seats With Non-Traditional, Large, Non-Metallic Panels AGENCY: Federal Aviation Administration... a novel or unusual design feature(s) associated with seats that include non-traditional, large, non...
The origin of absorptive features in the two-dimensional electronic spectra of rhodopsin.
Farag, Marwa H; Jansen, Thomas L C; Knoester, Jasper
2018-05-09
In rhodopsin, the absorption of a photon causes the isomerization of the 11-cis isomer of the retinal chromophore to its all-trans isomer. This isomerization is known to occur through a conical intersection (CI), and the internal conversion through the CI is known to be vibrationally coherent. Recently measured two-dimensional electronic spectra (2DES) showed dramatic absorptive spectral features at early waiting times associated with the transition through the CI. The common two-state two-mode model Hamiltonian was unable to elucidate the origin of these features. To rationalize the source of these features, we employ a three-state three-mode model Hamiltonian in which the hydrogen out-of-plane (HOOP) mode and a higher-lying electronic state are included. The 2DES of the retinal chromophore in rhodopsin are calculated and compared with the experiment. Our analysis shows that the source of the observed features in the measured 2DES is the excited-state absorption to a higher-lying electronic state and not the HOOP mode.
Boykin, K.G.; Thompson, B.C.; Propeck-Gray, S.
2010-01-01
Despite widespread and long-standing efforts to model wildlife-habitat associations using remotely sensed and other spatially explicit data, there are relatively few evaluations of the performance of variables included in predictive models relative to actual features on the landscape. As part of the National Gap Analysis Program, we specifically examined physical site features at randomly selected sample locations in the Southwestern U.S. to assess degree of concordance with predicted features used in modeling vertebrate habitat distribution. Our analysis considered hypotheses about relative accuracy with respect to 30 vertebrate species selected to represent the spectrum of habitat generalist to specialist and categorization of site by relative degree of conservation emphasis accorded to the site. Overall comparison of 19 variables observed at 382 sample sites indicated ≥60% concordance for 12 variables. Directly measured or observed variables (slope, soil composition, rock outcrop) generally displayed high concordance, while variables that required judgments regarding descriptive categories (aspect, ecological system, landform) were less concordant. There were no differences detected in concordance among taxa groups, degree of specialization or generalization of selected taxa, or land conservation categorization of sample sites with respect to all sites. We found no support for the hypothesis that accuracy of habitat models is inversely related to degree of taxa specialization when model features for a habitat specialist could be more difficult to represent spatially. Likewise, we did not find support for the hypothesis that physical features will be predicted with higher accuracy on lands with greater dedication to biodiversity conservation than on other lands because of relative differences regarding available information. Accuracy generally was similar (>60%) to that observed for land cover mapping at the ecological system level. These patterns demonstrate resilience of gap analysis deductive model processes to the type of remotely sensed or interpreted data used in habitat feature predictions. © 2010 Elsevier B.V.
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
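A hedged sketch of the kernel-combination idea behind such frameworks: per-modality kernels are blended into a single kernel fed to a precomputed-kernel SVM. A full multiple-kernel-learning method learns the blend weight jointly with the classifier rather than fixing it, as done here with synthetic features:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cnn = rng.normal(size=(120, 256))   # stand-in CNN descriptors per image patch
pcl = rng.normal(size=(120, 30))    # stand-in 3D point-cloud features
y = rng.integers(0, 2, size=120)    # damaged vs. undamaged labels

beta = 0.7                          # modality weight (MKL would learn this)
K = beta * rbf_kernel(cnn) + (1 - beta) * rbf_kernel(pcl)

tr, te = np.arange(90), np.arange(90, 120)
svm = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
print((svm.predict(K[np.ix_(te, tr)]) == y[te]).mean())
```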
Degeneracy in the spectrum and bispectrum among featured inflaton potentials
NASA Astrophysics Data System (ADS)
Gallego Cadavid, Alexander; Enea Romano, Antonio; Sasaki, Misao
2018-05-01
We study the degeneracy of the primordial spectrum and bispectrum of the curvature perturbation in single field inflationary models with a class of features in the inflaton potential. The feature we consider is a discontinuous change in the shape of the potential and is controlled by a couple of parameters that describe the strength of the discontinuity and the change in the potential shape. This feature produces oscillations of the spectrum and bispectrum around the comoving scale k=k0 that exits the horizon when the inflaton passes the discontinuity. We find that the effects on the spectrum and on almost all configurations of the bispectrum, including the squeezed limit, depend on a single quantity which is a function of the two parameters defining the feature. This leads to a degeneracy, i.e., different features of the inflaton potential can produce the same observational effects. However, we find that the degeneracy in the bispectrum is removed at the equilateral limit around k=k0. This can be used to discriminate different models which give the same spectrum.
Intraspecific differences in bacterial responses to modelled reduced gravity
NASA Technical Reports Server (NTRS)
Baker, P. W.; Leff, L. G.
2005-01-01
AIMS: Bacteria are important residents of water systems, including those of space stations, which feature specific environmental conditions such as lowered effects of gravity. The purpose of this study was to compare the responses to modelled reduced gravity of space station water system bacterial isolates with those of other isolates of the same species. METHODS AND RESULTS: Bacterial isolates, Stenotrophomonas paucimobilis and Acinetobacter radioresistens, originally recovered from the water supply aboard the International Space Station (ISS), were grown in nutrient broth under modelled reduced gravity. Their growth was compared with that of the type strains S. paucimobilis ATCC 10829 and A. radioresistens ATCC 49000. Acinetobacter radioresistens ATCC 49000 and the two ISS isolates showed similar growth profiles under modelled reduced gravity compared with normal gravity, whereas S. paucimobilis ATCC 10829 was negatively affected by modelled reduced gravity. CONCLUSIONS: These results suggest that microgravity might have selected for bacteria that were able to thrive under this unusual condition. These responses, coupled with impacts of other features (such as radiation resistance and the ability to persist under very oligotrophic conditions), may contribute to the success of these water system bacteria. SIGNIFICANCE AND IMPACT OF THE STUDY: Water quality is a significant factor in many environments, including the ISS. Efforts to remove microbial contaminants are likely to be complicated by the features of these bacteria which allow them to persist under the extreme conditions of these systems.
Cellular automata with object-oriented features for parallel molecular network modeling.
Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan
2005-06-01
Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond the large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks.
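A toy object-carrying automaton in Python (not Cellang) illustrates the combination of per-cell object state, message passing between neighbors, and floating-point update rules described above:

```python
import numpy as np

class Cell:
    """One automaton site holding a molecule-count object and an inbox."""
    def __init__(self, rng):
        self.molecules = {"A": float(rng.poisson(5.0))}
        self.inbox = 0.0

    def send(self, neighbors):
        share = 0.1 * self.molecules["A"]        # message passing: diffuse
        for nb in neighbors:                     # 10% of A to neighbors
            nb.inbox += share / len(neighbors)
        self.molecules["A"] -= share

    def update(self):
        self.molecules["A"] += self.inbox        # receive messages
        self.molecules["A"] *= 0.99              # first-order decay "reaction"
        self.inbox = 0.0

rng = np.random.default_rng(0)
grid = [[Cell(rng) for _ in range(10)] for _ in range(10)]
for _ in range(50):                              # synchronous update sweeps
    for i in range(10):
        for j in range(10):
            nbs = [grid[(i + di) % 10][(j + dj) % 10]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            grid[i][j].send(nbs)
    for row in grid:
        for c in row:
            c.update()
print(grid[0][0].molecules)
```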
SVGenes: a library for rendering genomic features in scalable vector graphic format.
Etherington, Graham J; MacLean, Daniel
2013-08-01
Drawing genomic features in attractive and informative ways is a key task in the visualization of genomics data. Scalable Vector Graphics (SVG) is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity, and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and allow individual elements of a graphic to be edited at the object level, independently of the rest of the image. These features make SVG a potentially useful format for preparing publication-quality figures that include genomic objects such as genes or sequencing coverage, and for web applications that require rich user interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects, which in turn style, colour, and position the features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities such as genes, transcripts, and histograms are modelled as Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene-feature formats, meaning that graphics for any existing dataset can easily be created without the need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes, also under the MIT License. dan.maclean@tsl.ac.uk.
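The library itself is Ruby; to keep this document's examples in one language, the Page → Track → Glyph hierarchy it describes can be sketched in Python as below. This mirrors the concepts only and is not the gem's API:

```python
# Python sketch of the Page -> Track -> Glyph structure described above.
# Concepts only; NOT the Ruby API of the bio-svgenes gem.

class Glyph:
    def __init__(self, start, end, fill="steelblue"):
        self.start, self.end, self.fill = start, end, fill

    def to_svg(self, y, scale):
        # Render a genomic feature as an SVG rectangle primitive.
        return (f'<rect x="{self.start * scale}" y="{y}" '
                f'width="{(self.end - self.start) * scale}" height="10" '
                f'fill="{self.fill}"/>')

class Track:
    def __init__(self, label):
        self.label, self.glyphs = label, []

    def add(self, glyph):
        self.glyphs.append(glyph)

class Page:
    def __init__(self, width=800, track_height=20):
        self.width, self.track_height, self.tracks = width, track_height, []

    def add_track(self, track):
        self.tracks.append(track)

    def render(self, seq_len):
        scale = self.width / seq_len
        body = []
        for i, track in enumerate(self.tracks):   # the Page spaces the Tracks
            body += [g.to_svg(i * self.track_height, scale)
                     for g in track.glyphs]
        height = len(self.tracks) * self.track_height
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{self.width}" height="{height}">'
                + "".join(body) + "</svg>")

genes = Track("genes")
genes.add(Glyph(100, 400))
genes.add(Glyph(600, 900, fill="firebrick"))
page = Page()
page.add_track(genes)
print(page.render(seq_len=1000))
```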
Substructured multibody molecular dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary Stephen; Stevens, Mark Jackson; Plimpton, Steven James
2006-11-01
We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.
Tamayo, Pablo; Cho, Yoon-Jae; Tsherniak, Aviad; Greulich, Heidi; Ambrogio, Lauren; Schouten-van Meeteren, Netteke; Zhou, Tianni; Buxton, Allen; Kool, Marcel; Meyerson, Matthew; Pomeroy, Scott L.; Mesirov, Jill P.
2011-01-01
Purpose Despite significant progress in the molecular understanding of medulloblastoma, stratification of risk in patients remains a challenge. Focus has shifted from clinical parameters to molecular markers, such as expression of specific genes and selected genomic abnormalities, to improve accuracy of treatment outcome prediction. Here, we show how integration of high-level clinical and genomic features or risk factors, including disease subtype, can yield more comprehensive, accurate, and biologically interpretable prediction models for relapse versus no-relapse classification. We also introduce a novel Bayesian nomogram indicating the amount of evidence that each feature contributes on a patient-by-patient basis. Patients and Methods A Bayesian cumulative log-odds model of outcome was developed from a training cohort of 96 children treated for medulloblastoma, starting with the evidence provided by clinical features of metastasis and histology (model A) and incrementally adding the evidence from gene-expression–derived features representing disease subtype–independent (model B) and disease subtype–dependent (model C) pathways, and finally high-level copy-number genomic abnormalities (model D). The models were validated on an independent test cohort (n = 78). Results On an independent multi-institutional test data set, models A to D attain an area under receiver operating characteristic (au-ROC) curve of 0.73 (95% CI, 0.60 to 0.84), 0.75 (95% CI, 0.64 to 0.86), 0.80 (95% CI, 0.70 to 0.90), and 0.78 (95% CI, 0.68 to 0.88), respectively, for predicting relapse versus no relapse. Conclusion The proposed models C and D outperform the current clinical classification schema (au-ROC, 0.68), our previously published eight-gene outcome signature (au-ROC, 0.71), and several new schemas recently proposed in the literature for medulloblastoma risk stratification. PMID:21357789
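A cumulative log-odds model of this kind is easy to state concretely: evidence from each feature adds on the log-odds scale, which is also what makes a per-patient nomogram of contributions possible. A minimal sketch follows, with invented weights standing in for the ones the published models learn from the training cohort:

```python
# Sketch of a cumulative log-odds outcome model in the spirit of models A-D.
# Feature weights (log likelihood ratios) are invented for illustration.
import math

def predict_relapse(features, prior_logodds, weights):
    # Evidence accumulates additively on the log-odds scale, so each
    # feature's contribution can be reported patient-by-patient (a nomogram).
    contributions = {f: weights[f] * v for f, v in features.items()}
    logodds = prior_logodds + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logodds))
    return prob, contributions

weights = {"metastasis": 1.2, "anaplastic_histology": 0.7,
           "subtype_pathway_score": 0.9, "copy_number_abnormality": 0.5}
patient = {"metastasis": 1, "anaplastic_histology": 0,
           "subtype_pathway_score": -0.4, "copy_number_abnormality": 1}

prob, evidence = predict_relapse(patient, prior_logodds=-1.0, weights=weights)
print(f"P(relapse) = {prob:.2f}")
for feature, contribution in evidence.items():
    print(f"  {feature}: {contribution:+.2f}")
```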
Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language
Benitez-Quiroz, C. Fabian; Gökgöz, Kadir; Wilbur, Ronnie B.; Martinez, Aleix M.
2014-01-01
To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic–computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences – Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions – plus their polarities – positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches. PMID:24516528
NASA Astrophysics Data System (ADS)
Vasil'Ev, O. F.; Voropaeva, O. F.; Kurbatskii, A. F.
2011-06-01
Specific features of the turbulent transfer of momentum and heat in stably stratified geophysical flows, as well as possibilities for including them in RANS turbulence models, are analyzed. One such feature is the transfer of momentum (but not heat) by internal gravity waves under conditions of strong stability. Laboratory data and atmospheric measurements show a clear decreasing trend of the inverse turbulent Prandtl number with increasing gradient Richardson number, which turbulence models must reproduce. Ignoring this feature can cause a false diffusion of heat under conditions of strong stability and lead, in particular, to noticeable errors in calculated temperatures in the atmospheric boundary layer. Models of turbulent transfer must therefore include the action of buoyancy and internal gravity waves on turbulent momentum fluxes. Such a strategy for modeling stratified turbulence is illustrated in the review with a concrete RANS model and original results obtained in modeling stratified environmental flows. Semiempirical turbulence models used for calculations of complex turbulent flows in deep stratified bodies of water are also analyzed. This part of the review is based on investigations within the framework of the large international Comparative Analysis and Rationalization of Second-Moment Turbulence Models (CARTUM) project and other publications of leading specialists. The most economical and effective approach, based on modified two-parameter turbulence models, is a real alternative to classical variants of these models. A class of test problems and laboratory and full-scale experiments used by the participants of the CARTUM project for the validation of numerical models is also considered.
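For reference, the two dimensionless numbers this discussion turns on have standard definitions (given here for the reader, not quoted from the review): the gradient Richardson number compares buoyancy to shear, and the turbulent Prandtl number is the ratio of eddy viscosity to eddy heat diffusivity,

```latex
\mathrm{Ri}_g \;=\; \frac{(g/\theta_0)\,\partial\theta/\partial z}{\left(\partial U/\partial z\right)^{2}},
\qquad
\mathrm{Pr}_t \;=\; \frac{K_m}{K_h}.
```

The observed trend is then a decrease of Pr_t^{-1} = K_h/K_m as Ri_g grows, i.e., heat transport shuts down faster than momentum transport under strong stability.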
Ice-Ridge Pile Up and the Genesis of Martian "Shorelines"
NASA Technical Reports Server (NTRS)
Barnhart, C. J.; Tulaczyk, S.; Asphaug, E.; Kraal, E. R.; Moore, J.
2005-01-01
Unique geomorphologic features such as basin terraces exhibiting topographic continuity have been found within several Martian craters, as shown in Viking, MOC, and THEMIS images. These features, which resemble terrestrial shorelines, have been mapped and cataloged with significant effort [1]. Currently, open wave action on the surface of paleolakes is hypothesized to be the geomorphologic agent responsible for generating these features [2]. As a consequence, feature interpretations, including shorelines, wave-cut benches, and bars, are, befittingly, lacustrine. Because such interpretations and their formation mechanisms have profound implications for the climate and potential biological history of Mars, confidence is crucial. The insight acquired through linked quantitative modeling of geomorphologic agents and processes is key to accurately interpreting these features. In this vein, recent studies [3,4] of water-wave energy in theoretical open-water basins on Mars show minimal erosional effects due to water waves under Martian conditions. Consequently, a sub-glacial lake flattens the ice surface, produces a local velocity increase over the lake, and deflects the ice flow from the main flow direction [11]. These consequences of ice flow are observed at Lake Vostok, Antarctica, an excellent Martian analogue [11]. Martian observations include reticulate terrain exhibiting sharp interconnected ridges, speculated to reflect the deposition and reworking of ice blocks at the periphery of ice-covered lakes throughout Hellas [12]. Our model determines to what extent ice, a terrestrial geomorphologic agent, can alter the Martian landscape. Method: We study the evolution of crater ice plugs as the formation mechanism of surface features frequently identified as shorelines. In particular, we perform model integrations involving parameters such as ice slope and purity, atmospheric pressure and temperature, crater shape and composition, and an energy balance between solar flux, geothermal flux, latent heat, and ablation. Our ultimate goal is to understand how an intracrater ice plug could create the observed shoreline features and how these
Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.
Dai, Guoxian; Xie, Jin; Fang, Yi
2018-07-01
How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketches and 3D shapes. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, comprising a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes the inter-class distance to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across domains. Unlike existing deep metric learning methods with loss only at the output layer, DCHML is trained with loss at both the hidden layer and the output layer, further improving performance by encouraging hidden-layer features to have the desired properties as well. Our proposed method is evaluated on three benchmarks, including the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.
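The two-network, two-loss structure can be made concrete with a short sketch. The layer sizes, margin, and the use of a simple first-moment discrepancy for the correlation term are our own illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of joint training of two domain networks with a discriminative
# (margin) loss per domain plus a cross-domain correlation/alignment loss.
import torch
import torch.nn as nn

sketch_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
shape_net = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))

def discriminative_loss(z, labels, margin=1.0):
    # Pull same-class embeddings together; push different classes apart
    # to at least `margin`, within one domain.
    d = torch.cdist(z, z)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pull = d[same].pow(2).mean()
    push = torch.clamp(margin - d[~same], min=0).pow(2).mean()
    return pull + push

def correlation_loss(z_sketch, z_shape):
    # Align the two domains' feature distributions (here: first moments only).
    return (z_sketch.mean(0) - z_shape.mean(0)).pow(2).sum()

sketches, shapes = torch.randn(16, 128), torch.randn(16, 256)
labels = torch.randint(0, 4, (16,))
zs, zh = sketch_net(sketches), shape_net(shapes)
loss = (discriminative_loss(zs, labels) + discriminative_loss(zh, labels)
        + 0.5 * correlation_loss(zs, zh))
loss.backward()   # gradients flow into both networks jointly
```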
NASA's Great Observatories: Paper Model.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Washington, DC.
This educational brief discusses observatory stations built by the National Aeronautics and Space Administration (NASA) for looking at the universe. This activity for grades 5-12 has students build paper models of the observatories and study their history, features, and functions. Templates for the observatories are included. (MVL)
NASA Technical Reports Server (NTRS)
Tanveer, S.; Foster, M. R.
2002-01-01
We report progress in three areas of investigation related to dendritic crystal growth. Those items include: 1) selection of tip features in dendritic crystal growth; 2) investigation of nonlinear evolution for the two-sided model; and 3) rigorous mathematical justification.
Environmental Education Activities & Programs 1998-1999.
ERIC Educational Resources Information Center
Bureau of Reclamation (Dept. of Interior), Denver, CO.
This document features descriptions of interactive learning models and presentations in environmental education concerning groundwater, geology, the environment, weather, water activities, and interactive games. Activities include: (1) GW-Standard; (2) GW-w/no Leaky Underground Storage Tank (No UST); (3) GW-Karst; (4) GW-Landfill Models--Standard…
Caywood, Matthew S.; Roberts, Daniel M.; Colombe, Jeffrey B.; Greenwald, Hal S.; Weiland, Monica Z.
2017-01-01
There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as “black boxes” that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model’s predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy. PMID:28123359
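As a concrete illustration of the comparison described (nonlinear GPR against a linear-regression baseline, scored by standardized mean squared error), here is a minimal scikit-learn sketch; the EEG feature matrix and targets are synthetic placeholders:

```python
# Predicting N-back load from EEG-like features: GPR vs. a linear baseline,
# scored by MSE standardized by the target variance (an sMSE-like measure).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 20))                                  # trials x features
y = np.repeat([1, 2, 3], 30) + rng.normal(scale=0.3, size=90)  # workload level

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
mlr = LinearRegression()

for name, model in [("GPR", gpr), ("MLR", mlr)]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(name, "sMSE ~", mse / y.var())
```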
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset of CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression, and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
NASA Astrophysics Data System (ADS)
Leandro, J.; Schumann, A.; Pfister, A.
2016-04-01
Some of the major challenges in modelling rainfall-runoff in urbanised areas are the complex interaction between the sewer system and the overland surface, and the spatial heterogeneity of key urban features. The former requires the sewer network and the system of surface flow paths to be solved simultaneously. The latter is still an unresolved issue, because the heterogeneity of runoff formation requires highly detailed information and involves a large variety of feature-specific rainfall-runoff dynamics. This paper presents a methodology for considering the variability of building types and the spatial heterogeneity of land surfaces. The former is achieved by developing a specific conceptual rainfall-runoff model, and the latter by defining a fully distributed approach for infiltration processes in urban areas with limited storage capacity, based on OpenStreetMap (OSM) data. The model complexity is increased stepwise by adding components to an existing 2D overland flow model; the different steps are defined as modelling levels. The methodology is applied in a German case study. Results highlight that: (a) spatial heterogeneity of urban features has a medium to high impact on the estimated overland flood depths, (b) the addition of multiple urban features has a higher cumulative effect due to the dynamic effects simulated by the model, (c) connecting the runoff from buildings to the sewer contributes to the non-linear effects observed in the overland flood depths, and (d) OSM data are useful in identifying ponding areas (for which infiltration plays a decisive role) and permeable natural surface flow paths (which delay the flood propagation).
Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho
2013-10-01
Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task-specific top-down attention model that locates a target object based on its form and color representation, along with a bottom-up saliency based on the relativity of primitive visual features and some memory modules. In the proposed model, top-down bias signals corresponding to the target's form and color features are generated; these draw preferential attention to the desired object through the proposed selective attention model, in concert with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network that plays two important roles in object color- and form-biased attention: one is to incrementally learn and memorize the color and form features of various objects, and the other is to generate a top-down bias signal that localizes a target object by focusing on candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference, enabling the perception of new unknown objects on the basis of the object form and color features stored in memory during training. Experimental results show that the proposed model succeeds in focusing on specified target objects, in addition to incrementally representing and memorizing various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.
Atmospheric prediction model survey
NASA Technical Reports Server (NTRS)
Wellck, R. E.
1976-01-01
As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.
Organization Domain Modeling. Volume 1. Conceptual Foundations, Process and Workproduct Description
1993-07-31
J.A. Hess, W.E. Novak, and A.S. Peterson. Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report CMU/SEI-90-TR-21, Software Engineering Institute. [Only fragments of this record survive extraction: the report concerns domain analysis (DA) and modeling, including a structured set of workproducts, a tailorable process model, and a set of modeling techniques and guidelines; a table-of-contents fragment lists section 5.3.1, Usability Analysis (Rescoping).]
PAHFIT: Properties of PAH Emission
NASA Astrophysics Data System (ADS)
Smith, J. D.; Draine, Bruce
2012-10-01
PAHFIT is an IDL tool for decomposing Spitzer IRS spectra of PAH emission sources, with a special emphasis on the careful recovery of ambiguous silicate absorption, and weak, blended dust emission features. PAHFIT is primarily designed for use with full 5-35 micron Spitzer low-resolution IRS spectra. PAHFIT is a flexible tool for fitting spectra, and you can add or disable features, compute combined flux bands, change fitting limits, etc., without changing the code. PAHFIT uses a simple, physically-motivated model, consisting of starlight, thermal dust continuum in a small number of fixed temperature bins, resolved dust features and feature blends, prominent emission lines (which themselves can be blended with dust features), as well as simple fully-mixed or screen dust extinction, dominated by the silicate absorption bands at 9.7 and 18 microns. Most model components are held fixed or are tightly constrained. PAHFIT uses Drude profiles to recover the full strength of dust emission features and blends, including the significant power in the wings of the broad emission profiles. This means the resulting feature strengths are larger (by factors of 2-4) than are recovered by methods which estimate the underlying continuum using line segments or spline curves fit through fiducial wavelength anchors.
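The Drude profile at the heart of this approach is simple to write down and compute. A small sketch follows; the functional form is the standard Drude profile used for dust features, while the 11.3 micron parameter values are chosen purely for illustration:

```python
# Drude profile for a dust emission feature: central wavelength lam0
# (micron), fractional FWHM gamma, central intensity b. Its wings are
# broader than a Gaussian's, which is why it recovers more feature power.
import numpy as np

def drude(lam, lam0, gamma, b):
    x = lam / lam0 - lam0 / lam
    return b * gamma**2 / (x**2 + gamma**2)

lam = np.linspace(5.0, 35.0, 2000)          # Spitzer IRS low-res range
# Illustrative 11.3 micron PAH feature (parameter values hypothetical):
profile = drude(lam, lam0=11.33, gamma=0.032, b=1.0)
print("peak:", profile.max(), "at", lam[profile.argmax()], "micron")
```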
Using connectome-based predictive modeling to predict individual behavior from brain connectivity
Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd
2017-01-01
Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than most existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
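The four steps enumerated above map directly onto a few lines of code. A compact sketch with synthetic data follows; the significance threshold and the single-summary linear model are simplifying assumptions (the published protocol, for instance, separates positive and negative networks):

```python
# CPM-style pipeline: 1) select edges correlated with behavior, 2) sum the
# selected edges per subject, 3) fit a linear model, 4) assess predictions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_edges = 50, 200
edges = rng.normal(size=(n_subj, n_edges))          # connectivity per subject
behavior = edges[:, :10].sum(axis=1) + rng.normal(size=n_subj)

preds = np.empty(n_subj)
for i in range(n_subj):                             # leave-one-out CV
    train = np.delete(np.arange(n_subj), i)
    pvals = np.array([stats.pearsonr(edges[train, j], behavior[train])[1]
                      for j in range(n_edges)])
    selected = pvals < 0.01                         # 1) feature selection
    summary = edges[:, selected].sum(axis=1)        # 2) feature summarization
    slope, intercept = np.polyfit(summary[train], behavior[train], 1)  # 3)
    preds[i] = slope * summary[i] + intercept
# 4) significance: in practice, compare this r to a permutation-test null.
print("LOO prediction r =", stats.pearsonr(preds, behavior)[0])
```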
The Dubna-Mainz-Taipei Dynamical Model for πN Scattering and π Electromagnetic Production
NASA Astrophysics Data System (ADS)
Yang, Shin Nan
Some of the featured results of the Dubna-Mainz-Taipei (DMT) dynamical model for πN scattering and π0 electromagnetic production are summarized. These include results for threshold π0 production, deformation of the Δ(1232), and the extracted properties of higher resonances below 2 GeV. The excellent agreement of the DMT model's predictions with threshold π0 production data, including recent precision measurements from MAMI, establishes the DMT model's results as a benchmark for experimentalists and theorists dealing with threshold pion production.
Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li
2014-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphological and kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, together with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme, including feature subset selection and various classifiers, was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features for predicting lesion type. Various measurements, including cross-validation and receiver operating characteristics, were used to quantify the diagnostic performance of each feature as well as their combination. Seven features were all found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable (age), one morphological variable (slope), three texture features (entropy, inverse difference, and information correlation), one kinetic feature (SER), and one DWI feature (apparent diffusion coefficient, ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through cross-validation. The averaged measurements of sensitivity, specificity, AUC, and accuracy are 0.85, 0.89, 0.909, and 0.93, respectively. Multi-sided variables characterizing the morphological, kinetic, and pathological properties, together with the DWI measurement of ADC, can dramatically improve the discriminatory power for breast lesions.
Space Shuttle propulsion performance reconstruction from flight data
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle Solid Rocket Booster (SRB) performance (specific impulse) from flight data in a post-flight processing computer program is described. The flight data used include inertial platform acceleration, SRB head pressure, and ground-based radar tracking data. The key feature of this application is the model used for the SRBs, which represents a reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model. Aerodynamic, plume, wind, and main engine uncertainties are also included.
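A generic extended Kalman filter predict/update step of the kind applied here is sketched below; f and h stand in for the quasi-static internal-ballistics dynamics and the measurement model (platform acceleration, head pressure, radar tracking), and the toy dynamics are placeholders, not the SRB model:

```python
# Generic EKF step: nonlinear predict through f (Jacobian F), then update
# with measurement z through h (Jacobian H).
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    y = z - h(x_pred)                              # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new

# Toy 2-state example: [mass overboard, burn depth] (placeholder dynamics).
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] + 0.01])
F = lambda x: np.array([[1.0, 0.1], [0.0, 1.0]])
h = lambda x: np.array([0.5 * x[1]])               # scalar "measurement"
H = lambda x: np.array([[0.0, 0.5]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, z=np.array([0.2]), f=f, F=F, h=h, H=H,
                Q=1e-4 * np.eye(2), R=np.array([[1e-2]]))
print(x)
```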
Report of the LSPI/NASA Workshop on Lunar Base Methodology Development
NASA Technical Reports Server (NTRS)
Nozette, Stewart; Roberts, Barney
1985-01-01
Groundwork was laid for computer models which will assist in the design of a manned lunar base. The models, herein described, will provide the following functions for the successful conclusion of that task: strategic planning; sensitivity analyses; impact analyses; and documentation. Topics addressed include: upper level model description; interrelationship matrix; user community; model features; model descriptions; system implementation; model management; and plans for future action.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model, by using the model to generate measurable phenomena of the system or process from model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system. 1 fig.
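The workflow reads directly as code: run the mathematical model over a range of condition parameters, extract spectral features, train a network on the inverse mapping, then apply it to measured spectra. A toy sketch follows; the resonance model and all parameter values are invented for illustration:

```python
# Model-generated training set -> neural network -> condition estimation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def math_model(condition):
    # Toy forward model: the physical condition shifts a resonance peak.
    freq = np.linspace(0, 1, 64)
    return np.exp(-((freq - 0.3 - 0.4 * condition) ** 2) / 0.002)

conditions = np.random.default_rng(0).uniform(0, 1, 500)
spectra = np.array([math_model(c) for c in conditions])
features = spectra.argmax(axis=1).reshape(-1, 1) / 64.0  # peak position feature

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
net.fit(features, conditions)                 # adjust internal NN parameters

measured = math_model(0.7)                    # spectrum from "actual system"
peak = np.array([[measured.argmax() / 64.0]])
print("estimated condition:", net.predict(peak)[0])
```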
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated-measurement data, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distributions. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
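For the reader, quantile regression replaces the squared-error criterion of mean regression with the standard check-function criterion (a textbook definition, not specific to this paper): at quantile level τ, the fit minimizes

```latex
\sum_{i} \rho_\tau\!\bigl(y_i - \mu_i(\beta)\bigr),
\qquad
\rho_\tau(u) \;=\; u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
```

which is what lets the model trace the CD4 effects separately at low, median, and high quantiles of viral load, and makes the fit robust to outliers and heavy tails.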
NASA Astrophysics Data System (ADS)
Madison, Lindsey R.; Mosley, Jonathan; Mauney, Daniel; Duncan, Michael A.; McCoy, Anne B.
2016-06-01
Formaldehyde is the smallest organic molecule and is a prime candidate for a thorough investigation of the anharmonic approximations made in computationally modeling its infrared spectrum. Mass-selected ion spectroscopy was used to detect mass 30 cations, which include HCOH^+ and CH_2O^+. To elucidate the differences between the structures of these isomers, infrared spectroscopy was performed on the mass 30 cations using Ar predissociation. Interestingly, several additional spectral features are observed that cannot be explained by the fundamental OH and CH stretch vibrations alone. By including anharmonic coupling between OH and CH stretching and various overtones and combination bands involving lower-frequency vibrations, we are able to identify how specific modes couple and lead to the experimentally observed spectral features. We combine straightforward ab initio calculations of the anharmonic frequencies of the mass 30 cations with higher-order adiabatic approximations and Fermi resonance models. By including anharmonic effects we are able to confirm that the isomers of the CH_2O^+·Ar system have substantially different, and thus distinguishable, IR spectra and that many of the features can only be explained with anharmonic treatments.
Microstructure-Tensile Properties Correlation for the Ti-6Al-4V Titanium Alloy
NASA Astrophysics Data System (ADS)
Shi, Xiaohui; Zeng, Weidong; Sun, Yu; Han, Yuanfei; Zhao, Yongqing; Guo, Ping
2015-04-01
Finding quantitative microstructure-tensile property correlations is key to achieving performance optimization for various materials. However, it is extremely difficult because of their non-linear and highly interactive interrelations. In the present investigation, the correlations between lamellar microstructure features and tensile properties of the Ti-6Al-4V alloy are studied using an error back-propagation artificial neural network (ANN-BP) model. Forty-eight thermomechanical treatments were conducted to prepare the Ti-6Al-4V alloy with different lamellar microstructure features. In the proposed model, the input variables are microstructure features including the α platelet thickness, colony size, and β grain size, which were extracted using Image Pro Plus software. The output variables are the tensile properties, including ultimate tensile strength, yield strength, elongation, and reduction of area. A hidden layer of fourteen neurons, which gave the best model performance, was used. The training results show that all relative errors between predicted and experimental values are within 6%, which means that the trained ANN-BP model is capable of providing precise predictions of the tensile properties of the Ti-6Al-4V alloy. Based on the relations between the tensile properties predicted by the ANN-BP model and the lamellar microstructure features, it is found that the yield strength decreases continuously with increasing α platelet thickness. However, the α platelet thickness influences the elongation in a more complicated way. In addition, for a given α platelet thickness, the yield strength and the elongation both increase with decreasing β grain size and colony size. In general, the β grain size and colony size play a more important role than the α platelet thickness in determining the tensile properties of the Ti-6Al-4V alloy.
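The described setup (three microstructure inputs, one hidden layer of 14 neurons, four tensile-property outputs) is straightforward to reproduce in outline. A hedged sketch, with random placeholder data standing in for the 48 thermomechanical treatments:

```python
# Back-propagation network: 3 microstructure features -> 4 tensile properties,
# one hidden layer of 14 neurons as in the paper. Data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(48, 3))  # platelet thickness, colony, grain size
Y = rng.uniform(0.1, 1.0, size=(48, 4))  # UTS, yield, elongation, red. of area

Xs = StandardScaler().fit_transform(X)
net = MLPRegressor(hidden_layer_sizes=(14,), max_iter=5000, random_state=0)
net.fit(Xs, Y)                           # multi-output regression

rel_err = np.abs(net.predict(Xs) - Y) / np.abs(Y)
print("max relative training error:", rel_err.max())
```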
Modeling the development of drug addiction in male and female animals.
Lynch, Wendy J
2018-01-01
An increasing emphasis has been placed on the development and use of animal models of addiction that capture defining features of human drug addiction, including escalation/binge drug use, enhanced motivation for the drug, preference for the drug over other reward options, use despite negative consequences, and enhanced drug-seeking/relapse vulnerability. The need to examine behavior in both males and females has also become apparent given evidence demonstrating that the addiction process occurs differently in males and females. This review discusses the procedures that are used to model features of addiction in animals, as well as factors that influence their development. Individual differences are also discussed, with a particular focus on sex differences. While no one procedure consistently produces all characteristics, different models have been developed to focus on certain characteristics. A history of escalating/binge patterns of use appears to be critical for producing other features characteristic of addiction, including an enhanced motivation for the drug, enhanced drug seeking, and use despite negative consequences. These characteristics tend to emerge over abstinence, and appear to increase rather than decrease in magnitude over time. In females, these characteristics develop sooner during abstinence and/or following less drug exposure as compared to males, and for psychostimulant addiction, may require estradiol. Although preference for the drug over other reward options has been demonstrated in non-human primates, it has been more difficult to establish in rats. Future research is needed to define the parameters that optimally induce each of these features of addiction in the majority of animals. Such models are essential for advancing our understanding of human drug addiction and its treatment in men and women. Copyright © 2017 Elsevier Inc. All rights reserved.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales, the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also includes two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or function, to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
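The variogram-of-response-surface idea is compact enough to sketch: estimate gamma(h) = 0.5 E[(y(x+h) - y(x))^2] along one parameter direction and integrate it across scales to get an IVARS-like metric. The model, sampling scheme, and scale range below are simplified placeholders, not the toolbox's actual algorithms:

```python
# Directional variogram of a response surface, integrated across scales.
import numpy as np

def model(x):                       # toy response surface on [0, 1]^2
    return np.sin(6 * x[..., 0]) + 0.3 * x[..., 1] ** 2

rng = np.random.default_rng(0)
base = rng.uniform(size=(2000, 2))
hs = np.linspace(0.02, 0.5, 25)
gamma = []
for h in hs:                        # variogram along parameter 0
    shifted = base.copy()
    shifted[:, 0] = (base[:, 0] + h) % 1.0
    gamma.append(0.5 * np.mean((model(shifted) - model(base)) ** 2))

ivars = np.trapz(np.array(gamma), hs)   # integrate across a range of scales
print("IVARS-like sensitivity of parameter 0:", ivars)
```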
Fast and efficient indexing approach for object recognition
NASA Astrophysics Data System (ADS)
Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi
1999-08-01
This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the strongest candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
Estimation of Symptom Severity During Chemotherapy From Passively Sensed Data: Exploratory Study
Low, Carissa A; Dey, Anind K; Ferreira, Denzil; Kamarck, Thomas; Sun, Weijing; Bae, Sangwon; Doryab, Afsaneh
2017-01-01
Background Physical and psychological symptoms are common during chemotherapy in cancer patients, and real-time monitoring of these symptoms can improve patient outcomes. Sensors embedded in mobile phones and wearable activity trackers could be potentially useful in monitoring symptoms passively, with minimal patient burden. Objective The aim of this study was to explore whether passively sensed mobile phone and Fitbit data could be used to estimate daily symptom burden during chemotherapy. Methods A total of 14 patients undergoing chemotherapy for gastrointestinal cancer participated in the 4-week study. Participants carried an Android phone and wore a Fitbit device for the duration of the study and also completed daily severity ratings of 12 common symptoms. Symptom severity ratings were summed to create a total symptom burden score for each day, and ratings were centered on individual patient means and categorized into low, average, and high symptom burden days. Day-level features were extracted from raw mobile phone sensor and Fitbit data and included features reflecting mobility and activity, sleep, phone usage (eg, duration of interaction with phone and apps), and communication (eg, number of incoming and outgoing calls and messages). We used a rotation random forests classifier with cross-validation and resampling with replacement to evaluate population and individual model performance and correlation-based feature subset selection to select nonredundant features with the best predictive ability. Results Across 295 days of data with both symptom and sensor data, a number of mobile phone and Fitbit features were correlated with patient-reported symptom burden scores. We achieved an accuracy of 88.1% for our population model. The subset of features with the best accuracy included sedentary behavior as the most frequent activity, fewer minutes in light physical activity, less variable and average acceleration of the phone, and longer screen-on time and interactions with apps on the phone. Mobile phone features had better predictive ability than Fitbit features. Accuracy of individual models ranged from 78.1% to 100% (mean 88.4%), and subsets of relevant features varied across participants. Conclusions Passive sensor data, including mobile phone accelerometer and usage and Fitbit-assessed activity and sleep, were related to daily symptom burden during chemotherapy. These findings highlight opportunities for long-term monitoring of cancer patients during chemotherapy with minimal patient burden as well as real-time adaptive interventions aimed at early management of worsening or severe symptoms. PMID:29258977
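The pipeline described (day-level features, correlation-based feature subset selection, rotation random forest with cross-validation) can be outlined as below. scikit-learn has no rotation forest, so an ordinary random forest and a univariate filter stand in for the paper's exact methods; the data are placeholders:

```python
# Day-level symptom-burden classification: feature selection + forest
# classifier with cross-validation. Stand-in methods and synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(295, 40))        # day-level phone/Fitbit features
y = rng.integers(0, 3, size=295)      # low / average / high symptom burden

clf = make_pipeline(SelectKBest(f_classif, k=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```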
Hayat, Maqsood; Khan, Asifullah
2013-05-01
Membrane proteins are prime constituents of a cell and act as mediators between intra- and extracellular processes. The prediction of a transmembrane (TM) helix and its topology provides essential information regarding the function and structure of membrane proteins. However, prediction of TM helices and their topology is a challenging issue in bioinformatics and computational biology due to experimental complexities and the lack of established structures. Therefore, the location and orientation of TM helix segments are predicted from topogenic sequences. In this regard, we propose the WRF-TMH model for effectively predicting TM helix segments. In this model, information is extracted from membrane protein sequences using a compositional index and physicochemical properties. Redundant and irrelevant features are eliminated through singular value decomposition. The selected features provided by these feature extraction strategies are then fused to develop a hybrid model, with a weighted random forest adopted as the classification approach. We used two benchmark datasets comprising low- and high-resolution data. Tenfold cross-validation was employed to assess the performance of the WRF-TMH model at different levels, including per protein, per segment, and per residue. The success rates of the WRF-TMH model are quite promising and are the best reported so far on the same datasets. The WRF-TMH model might play a substantial role in, and provide essential information for, further structural and functional studies on membrane proteins. The accompanying web predictor is accessible at http://111.68.99.218/WRF-TMH/ .
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
Probabilistic wind/tornado/missile analyses for hazard and fragility evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.J.; Reich, M.
Detailed analysis procedures and examples are presented for the probabilistic evaluation of hazard and fragility against high wind, tornado, and tornado-generated missiles. In the tornado hazard analysis, existing risk models are modified to incorporate various uncertainties including modeling errors. A significant feature of this paper is the detailed description of the Monte-Carlo simulation analyses of tornado-generated missiles. A simulation procedure, which includes the wind field modeling, missile injection, solution of flight equations, and missile impact analysis, is described with application examples.
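The simulation loop described (wind field modeling, missile injection, solution of flight equations, impact analysis) has a simple skeleton. A toy 2-D sketch under our own assumptions; all distributions and the drag model are placeholders, not the paper's:

```python
# Toy Monte Carlo of tornado-generated missiles: sample wind and injection,
# integrate drag + gravity flight equations, tally long-range impacts.
import numpy as np

rng = np.random.default_rng(0)
g, dt, n_trials, hits = 9.81, 0.01, 2000, 0

for _ in range(n_trials):                       # missile injection sampling
    wind = rng.normal(40.0, 10.0)               # horizontal wind speed (m/s)
    v = np.array([rng.uniform(0, 20), rng.uniform(5, 30)])   # launch velocity
    pos = np.array([0.0, rng.uniform(0.5, 10)])              # launch height
    beta = rng.uniform(0.005, 0.02)             # drag parameter ~ Cd*A/m
    while pos[1] > 0:                           # flight equations (2-D)
        rel = np.array([v[0] - wind, v[1]])     # velocity relative to air
        a = -beta * np.linalg.norm(rel) * rel - np.array([0.0, g])
        v += a * dt
        pos += v * dt
    if pos[0] > 100.0:                          # impact beyond 100 m
        hits += 1

print("P(impact beyond 100 m) ~", hits / n_trials)
```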
Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters
NASA Astrophysics Data System (ADS)
Wetterer, C.; Sheppard, D.; Hunt, B.
The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is a measurement function that accurately predicts a space object's brightness as the parameters of interest vary. These parameters include changes in attitude and articulations of particular components (e.g., solar-panel east-west offsets relative to direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light-curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface-dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all parameters simultaneously, because changes in light-curve features can now be explained by variations in a number of different properties. The classic example is the coupling between the albedo and the area of a surface. If, however, the desire is to extract information about a single, specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example in which a complex model is used to create simulated light curves, and a simple model is then used in an estimation filter to extract a particular feature of interest. For this to succeed, however, the simple model must first be constructed using training data where the feature of interest is known, or at least known to be constant.
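A "simple shape/surface model" in this sense can be as small as a single diffuse facet. The sketch below computes the observed flux of a Lambertian plate as a function of a panel offset angle; this is our own minimal illustration with illustrative constants, not the paper's model:

```python
# Observed flux of a single Lambertian facet (e.g., a solar panel) as a
# function of its orientation: the kind of measurement function a filter
# could invert to estimate a sun-tracking offset. Constants illustrative.
import numpy as np

SOLAR_FLUX = 1361.0                 # W/m^2 at the object

def facet_brightness(normal, sun_dir, obs_dir, albedo, area, range_m):
    # Diffuse reflection: flux falls off with both incidence and observation
    # angles, and with inverse-square distance to the observer.
    mu_s = max(0.0, float(np.dot(normal, sun_dir)))
    mu_o = max(0.0, float(np.dot(normal, obs_dir)))
    return SOLAR_FLUX * albedo * area * mu_s * mu_o / (np.pi * range_m**2)

sun = np.array([0.0, 0.0, 1.0])
obs = np.array([np.sin(0.3), 0.0, np.cos(0.3)])
for offset_deg in (0.0, 5.0, 10.0):             # panel offset from sun-pointing
    t = np.radians(offset_deg)
    n = np.array([np.sin(t), 0.0, np.cos(t)])
    print(offset_deg, facet_brightness(n, sun, obs, albedo=0.3,
                                       area=10.0, range_m=1e6))
```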
Interfaces for End-User Information Seeking.
ERIC Educational Resources Information Center
Marchionini, Gary
1992-01-01
Discusses essential features of interfaces to support end-user information seeking. Highlights include cognitive engineering; task models and task analysis; the problem-solving nature of information seeking; examples of systems for end-users, including online public access catalogs (OPACs), hypertext, and help systems; and suggested research…
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
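The automatic-relevance idea behind GPRARD can be sketched with scikit-learn's Gaussian process regressor: giving the RBF kernel one length scale per descriptor lets the fit shrink irrelevant dimensions, so the learned length scales act as a feature-relevance ranking. The data below are synthetic stand-ins, not the skin permeability dataset:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Synthetic stand-in: 3 informative descriptors plus 2 pure-noise descriptors
X = rng.standard_normal((120, 5))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.standard_normal(120)

# One length scale per descriptor => automatic relevance determination (ARD)
kernel = RBF(length_scale=np.ones(5)) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

for i, ls in enumerate(gpr.kernel_.k1.length_scale):
    print(f"descriptor {i}: length scale {ls:.2f} (smaller => more relevant)")
```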
NASA Technical Reports Server (NTRS)
Pelletier, R. E.
1984-01-01
A need exists for digitized information pertaining to linear features such as roads, streams, water bodies and agricultural field boundaries as component parts of a data base. For many areas where this data may not yet exist or is in need of updating, these features may be extracted from remotely sensed digital data. This paper examines two approaches for identifying linear features, one utilizing raw data and the other classified data. Each approach uses a series of data enhancement procedures including derivation of standard deviation values, principal component analysis and filtering procedures using a high-pass window matrix. Just as certain bands better classify different land covers, so too do these bands exhibit high spectral contrast by which boundaries between land covers can be delineated. A few applications for this kind of data are briefly discussed, including its potential in a Universal Soil Loss Equation Model.
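A minimal sketch of the enhancement steps mentioned above (high-pass window filtering and local standard deviation), using synthetic imagery rather than real remotely sensed data:

```python
import numpy as np
from scipy.ndimage import convolve, generic_filter

rng = np.random.default_rng(2)
band = rng.normal(100, 5, (64, 64))
band[:, 32] += 40                      # a bright linear feature (e.g. a road)

# 3x3 high-pass window: emphasizes local contrast, i.e. boundaries between covers
highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]])
edges = convolve(band, highpass)

# Local standard deviation, another common enhancement before thresholding
local_sd = generic_filter(band, np.std, size=3)

mask = edges > edges.mean() + 3 * edges.std()
print("pixels flagged as linear-feature candidates:", int(mask.sum()))
```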
The NGC 4013 tale: a pseudo-bulged, late-type spiral shaped by a major merger
NASA Astrophysics Data System (ADS)
Wang, Jianling; Hammer, Francois; Puech, Mathieu; Yang, Yanbin; Flores, Hector
2015-10-01
Many spiral galaxy haloes show stellar streams with various morphologies when observed with deep images. The origin of these tidal features is debated: they may come from a satellite infall or from residuals of an ancient, gas-rich major merger. By modelling the formation of the peculiar features observed in the NGC 4013 halo, we investigate their origin. Using GADGET-2 with implemented gas cooling, star formation, and feedback, we have modelled the overall NGC 4013 galaxy and its associated halo features. A gas-rich major merger occurring 2.7-4.6 Gyr ago succeeds in reproducing the NGC 4013 galaxy properties, including all the faint stellar features, strong gas warp, boxy-shaped halo and vertical 3.6 μm luminosity distribution. High gas fractions in the progenitors are sufficient to reproduce the observed thin and thick discs, with a small bulge fraction, as observed. A major merger is able to reproduce the overall NGC 4013 system, including the warp strength, the red colour and the high stellar mass density of the loop, while a minor merger model cannot. Because the gas-rich model suffices to create a pseudo-bulge with a small fraction of the light, NGC 4013 is perhaps the archetype of a late-type galaxy formed by a relatively recent merger. Late-type, pseudo-bulged spirals are therefore not necessarily made through secular evolution, and the NGC 4013 properties also illustrate that strong warps in isolated galaxies may well occur at a late phase of a gas-rich major merger.
MODEST: A Tool for Geodesy and Astronomy
NASA Technical Reports Server (NTRS)
Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.
2004-01-01
Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. History of the code parallels the development of the astrometric and geodetic VLBI technique and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s, and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.
Sadeque, Farig; Xu, Dongfang; Bethard, Steven
2017-01-01
The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets. PMID:29075167
Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils
Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng
2017-01-01
This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural network (ANN), radial basis function- and linear function- based support vector machine (RBF- and LF-SVM) were employed for establishing diagnosis models. The model accuracies were evaluated and compared by using overall accuracies (OAs). The statistical significance of the difference between models was evaluated by using McNemar’s test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) had no statistical difference in diagnosis accuracies (Z < 1.96, p > 0.05). These results indicated that it was feasible to diagnose soil arsenic contamination using reflectance spectroscopy. The appropriate combination of multivariate methods was important to improve diagnosis accuracies. PMID:28471412
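The pre-processing chain described above is straightforward to reproduce; a hedged sketch with synthetic spectra standing in for the soil reflectance data (so the printed accuracies are meaningless — only the pipeline shape matters):

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
spectra = rng.normal(size=(100, 200))     # stand-in reflectance spectra
labels = rng.integers(0, 2, 100)          # contaminated vs. clean (synthetic)

# Savitzky-Golay smoothing and first derivative
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
first_deriv = savgol_filter(spectra, 11, 2, deriv=1, axis=1)
# Standard normal variate: per-spectrum centering and scaling
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

for name, X in [("SG smoothed", smoothed), ("SG 1st deriv", first_deriv), ("SNV", snv)]:
    oa = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, labels, cv=5).mean()
    print(f"{name}: overall accuracy {oa:.2f}")
```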
Xia, Junfeng; Yue, Zhenyu; Di, Yunqiang; Zhu, Xiaolei; Zheng, Chun-Hou
2016-01-01
The identification of hot spots, a small subset of protein interfaces that accounts for the majority of binding free energy, is becoming more important for the research of drug design and cancer development. Based on our previous methods (APIS and KFC2), here we proposed a novel hot spot prediction method. For each hot spot residue, we first constructed a wide variety of 108 sequence, structural, and neighborhood features to characterize potential hot spot residues, including conventional ones and a new one (pseudo-hydrophobicity) exploited in this study. We then selected 3 top-ranking features that contribute the most to the classification by a two-step feature selection process consisting of a minimal-redundancy-maximal-relevance algorithm and an exhaustive search method. We used support vector machines to build our final prediction model. When testing our model on an independent test set, our method showed the highest F1-score of 0.70 and MCC of 0.46 compared with the existing state-of-the-art hot spot prediction methods. Our results indicate that these features are more effective than the conventional features considered previously, and that the combination of our and traditional features may support the creation of a discriminative feature set for efficient prediction of hot spots in protein interfaces. PMID:26934646
For Students: A Model Courtroom
ERIC Educational Resources Information Center
Morisseau, James J.
1973-01-01
Describes a model courtroom in which law school students at the University of the Pacific in Sacramento, California, can conduct practice trials. The courtroom design is circular and new features include an extensive security system, videotaping equipment, a press room, a technicians room, and an isolation room. (Author/DN)
NASA Technical Reports Server (NTRS)
Justh, H. L.; Justus, C. G.
2008-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL) [1]. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). Mars-GRAM and MGCM use surface topography from Mars Global Surveyor Mars Orbiter Laser Altimeter (MOLA), with altitudes referenced to the MOLA areoid, or constant potential surface. Traditional Mars-GRAM options for representing the mean atmosphere along entry corridors include: (1) Thermal Emission Spectrometer (TES) mapping years 1 and 2, with Mars-GRAM data coming from NASA Ames Mars General Circulation Model (MGCM) results driven by observed TES dust optical depth or (2) TES mapping year 0, with user-controlled dust optical depth and Mars-GRAM data interpolated from MGCM model results driven by selected values of globally-uniform dust optical depth. Mars-GRAM 2005 has been validated [2] against Radio Science data, and both nadir and limb data from TES [3]. There are several new features included in Mars-GRAM 2005. The first is the option to use input data sets from MGCM model runs that were designed to closely simulate conditions observed during the first two years of TES observations at Mars. The TES Year 1 option includes values from April 1999 through January 2001. The TES Year 2 option includes values from February 2001 through December 2002. The second new feature is the option to read and use any auxiliary profile of temperature and density versus altitude. In exercising the auxiliary profile Mars-GRAM option, values from the auxiliary profile replace data from the original MGCM databases. Some examples of auxiliary profiles include data from TES nadir or limb observations and Mars mesoscale model output at a particular location and time. The final new feature is the addition of two Mars-GRAM parameters that allow standard deviations of Mars-GRAM perturbations to be adjusted. The parameter rpscale can be used to scale density perturbations up or down while rwscale can be used to scale wind perturbations.
SU-F-R-35: Repeatability of Texture Features in T1- and T2-Weighted MR Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahon, R; Weiss, E; Karki, K
Purpose: To evaluate repeatability of lung tumor texture features from inspiration/expiration MR image pairs for potential use in patient specific care models and applications. Repeatability is a desirable and necessary characteristic of features included in such models. Methods: T1-weighted Volumetric Interpolation Breath-Hold Examination (VIBE) and/or T2-weighted MRI scans were acquired for 15 patients with non-small cell lung cancer before and during radiotherapy for a total of 32 and 34 same session inspiration-expiration breath-hold image pairs respectively. Bias correction was applied to the VIBE (VIBE-BC) and T2-weighted (T2-BC) images. Fifty-nine texture features at five wavelet decomposition ratios were extracted from the delineated primary tumor including: histogram (HIST), gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), gray level size zone matrix (GLSZM), and neighborhood gray tone difference matrix (NGTDM) based features. Repeatability of the texture features for VIBE, VIBE-BC, T2-weighted, and T2-BC image pairs was evaluated by the concordance correlation coefficient (CCC) between corresponding image pairs, with a value greater than 0.90 indicating repeatability. Results: For the VIBE image pairs, the percentage of repeatable texture features by wavelet ratio was between 20% and 24% of the 59 extracted features; the T2-weighted image pairs exhibited repeatability in the range of 44–49%. The percentage dropped to 10–20% for the VIBE-BC images, and 12–14% for the T2-BC images. In addition, five texture features were found to be repeatable in all four image sets, including two GLRLM, two GLSZM, and one NGTDM feature. No single texture feature category was repeatable among all three image types; however, certain categories performed more consistently on a per image type basis. Conclusion: We identified repeatable texture features on T1- and T2-weighted MRI scans. These texture features should be further investigated for use in specific applications such as tissue classification and changes during radiation therapy utilizing a standard imaging protocol. Authors have the following disclosures: a research agreement with Philips Medical systems (Hugo, Weiss), a license agreement with Varian Medical Systems (Hugo, Weiss), research grants from the National Institute of Health (Hugo, Weiss), UpToDate royalties (Weiss), and none (Mahon, Ford, Karki). Authors have no potential conflicts of interest to disclose.
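Lin's concordance correlation coefficient used above has a closed form that is easy to compute directly; a small sketch with synthetic inspiration/expiration feature values (the 0.90 cutoff follows the abstract, everything else is illustrative):

```python
import numpy as np

def concordance_cc(x, y):
    # Lin's concordance correlation coefficient between paired measurements
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(4)
inspiration = rng.normal(10, 2, 32)                  # feature on inspiration scans
expiration = inspiration + rng.normal(0, 0.3, 32)    # same feature on expiration scans
ccc = concordance_cc(inspiration, expiration)
print(f"CCC = {ccc:.3f} -> {'repeatable' if ccc > 0.90 else 'not repeatable'}")
```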
User's Guide To CHEAP0 II-Economic Analysis of Stand Prognosis Model Outputs
Joseph E. Horn; E. Lee Medema; Ervin G. Schuster
1986-01-01
CHEAP0 II provides supplemental economic analysis capability for users of version 5.1 of the Stand Prognosis Model, including recent regeneration and insect outbreak extensions. Although patterned after the old CHEAP0 model, CHEAP0 II has more features and analytic capabilities, especially for analysis of existing and uneven-aged stands....
Stephens, David; Diesing, Markus
2014-01-01
Detailed seabed substrate maps are increasingly in demand for effective planning and management of marine ecosystems and resources. It has become common to use remotely sensed multibeam echosounder data in the form of bathymetry and acoustic backscatter in conjunction with ground-truth sampling data to inform the mapping of seabed substrates. Whilst, until recently, such data sets have typically been classified by expert interpretation, it is now obvious that more objective, faster and repeatable methods of seabed classification are required. This study compares the performances of a range of supervised classification techniques for predicting substrate type from multibeam echosounder data. The study area is located in the North Sea, off the north-east coast of England. A total of 258 ground-truth samples were classified into four substrate classes. Multibeam bathymetry and backscatter data, and a range of secondary features derived from these datasets were used in this study. Six supervised classification techniques were tested: Classification Trees, Support Vector Machines, k-Nearest Neighbour, Neural Networks, Random Forest and Naive Bayes. Each classifier was trained multiple times using different input features, including i) the two primary features of bathymetry and backscatter, ii) a subset of the features chosen by a feature selection process and iii) all of the input features. The predictive performances of the models were validated using a separate test set of ground-truth samples. The statistical significance of model performances relative to a simple baseline model (Nearest Neighbour predictions on bathymetry and backscatter) were tested to assess the benefits of using more sophisticated approaches. The best performing models were tree based methods and Naive Bayes which achieved accuracies of around 0.8 and kappa coefficients of up to 0.5 on the test set. The models that used all input features didn't generally perform well, highlighting the need for some means of feature selection.
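A compressed sketch of the comparison protocol described above, with synthetic data standing in for the multibeam features and a 1-nearest-neighbour baseline (classifier choices follow the abstract; sample sizes and feature counts are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for 258 ground-truth samples in four substrate classes
X, y = make_classification(n_samples=258, n_features=8, n_informative=4,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"kNN baseline": KNeighborsClassifier(1),
          "Random Forest": RandomForestClassifier(random_state=0),
          "Naive Bayes": GaussianNB(),
          "SVM": SVC()}
for name, clf in models.items():
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(f"{name}: accuracy {accuracy_score(yte, pred):.2f}, "
          f"kappa {cohen_kappa_score(yte, pred):.2f}")
```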
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
Return to Work After Lumbar Microdiscectomy - Personalizing Approach Through Predictive Modeling.
Papić, Monika; Brdar, Sanja; Papić, Vladimir; Lončar-Turukalo, Tatjana
2016-01-01
Lumbar disc herniation (LDH) is the most common disease among the working population requiring surgical intervention. This study aims to predict the return to work after operative treatment of LDH based on an observational study including 153 patients. The classification problem was approached using decision trees (DT), support vector machines (SVM) and multilayer perceptrons (MLP) combined with the RELIEF algorithm for feature selection. MLP provided the best recall of 0.86 for the class of patients not returning to work, which, combined with the selected features, enables early identification and personalized targeted interventions towards subjects at risk of prolonged disability. The predictive modeling indicated the most decisive risk factors in prolongation of work absence: psychosocial factors, mobility of the spine and structural changes of facet joints, and professional factors including standing, sitting and microclimate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, R; Aguilera, T; Shultz, D
2014-06-15
Purpose: This study aims to develop predictive models of patient outcome by extracting advanced imaging features (i.e., Radiomics) from FDG-PET images. Methods: We acquired pre-treatment PET scans for 51 stage I NSCLC patients treated with SABR. We calculated 139 quantitative features from each patient PET image, including 5 morphological features, 8 statistical features, 27 texture features, and 100 features from the intensity-volume histogram. Based on the imaging features, we aim to distinguish between 2 risk groups of patients: those with regional failure or distant metastasis versus those without. We investigated 3 pattern classification algorithms: linear discriminant analysis (LDA), naive Bayes (NB), and logistic regression (LR). To avoid the curse of dimensionality, we performed feature selection by first removing redundant features and then applying sequential forward selection using the wrapper approach. To evaluate the predictive performance, we performed 10-fold cross validation with 1000 random splits of the data and calculated the area under the ROC curve (AUC). Results: Feature selection identified 2 texture features (homogeneity and/or wavelet decompositions) for NB and LR, while for LDA SUVmax and one texture feature (correlation) were identified. All 3 classifiers achieved statistically significant improvements over conventional PET imaging metrics such as tumor volume (AUC = 0.668) and SUVmax (AUC = 0.737). Overall, NB achieved the best predictive performance (AUC = 0.806). This also compares favorably with MTV using the best threshold at an SUV of 11.6 (AUC = 0.746). At a sensitivity of 80%, NB achieved 69% specificity, while SUVmax and tumor volume only had 36% and 47% specificity. Conclusion: Through a systematic analysis of advanced PET imaging features, we are able to build models with improved predictive value over conventional imaging metrics. If validated in a large independent cohort, the proposed techniques could potentially aid in identifying patients who might benefit from adjuvant therapy.
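The wrapper-style sequential forward selection plus cross-validated AUC workflow described above maps closely onto scikit-learn; a sketch on synthetic data (feature counts mirror the abstract, everything else is assumed):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 51 patients, 139 candidate imaging features
X, y = make_classification(n_samples=51, n_features=139, n_informative=5,
                           random_state=0)
base = LogisticRegression(max_iter=1000)

# Wrapper approach: greedily add the feature that most improves CV AUC
sfs = SequentialFeatureSelector(base, n_features_to_select=2, direction="forward",
                                scoring="roc_auc", cv=5).fit(X, y)
Xsel = sfs.transform(X)
auc = cross_val_score(base, Xsel, y, scoring="roc_auc", cv=10).mean()
print(f"selected feature indices: {sfs.get_support(indices=True)}, CV AUC = {auc:.3f}")
```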
QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments
NASA Astrophysics Data System (ADS)
Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.
2009-04-01
The QUAGMIRE model has recently been made freely available for public use. QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. This presentation describes the model's main features. QUAGMIRE uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
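The leapfrog scheme with a Robert filter mentioned above is compact enough to show in full; a sketch on a single oscillation equation (the frequency, step size, and filter coefficient are arbitrary assumptions), where the filter damps the spurious computational mode that plain leapfrog admits:

```python
import numpy as np

# Leapfrog integration of dq/dt = -i*omega*q with a Robert(-Asselin) filter.
omega, dt, nu = 1.0, 0.01, 0.05          # frequency, time step, filter coefficient
steps = 2000

def tendency(q):
    return -1j * omega * q

q_old = 1.0 + 0j
q_now = q_old + dt * tendency(q_old)     # Euler start-up step
for _ in range(steps):
    q_new = q_old + 2 * dt * tendency(q_now)          # leapfrog step
    q_old = q_now + nu * (q_new - 2 * q_now + q_old)  # Robert filter on the middle level
    q_now = q_new

print(f"|q| after {steps} steps: {abs(q_now):.4f} (exact solution keeps |q| = 1)")
```

The filter trades a small amount of amplitude damping for suppression of the odd-even decoupling that would otherwise grow over long integrations.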
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon
2018-07-01
We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method with the appearance model described by double templates based on timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline template and online template initialization, tMHI-HSV-based candidate patches feature histograms calculation, double templates matching (DTM) for object location, and templates updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as offline template and online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate these candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to trace the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle the scale variation and pose change of the rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
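A hedged sketch of the HSV-histogram appearance model and the double-template matching step using OpenCV (region coordinates and bin counts are arbitrary assumptions; the motion-history segmentation is omitted):

```python
import numpy as np
import cv2

def hsv_hist(patch_hsv):
    # Hue-saturation histogram serving as the appearance model of a patch
    patch_hsv = np.ascontiguousarray(patch_hsv)
    h = cv2.calcHist([patch_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()

rng = np.random.default_rng(5)
frame = rng.integers(0, 255, (120, 160, 3), dtype=np.uint8)   # stand-in video frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

offline = hsv_hist(hsv[40:80, 60:100])   # fixed template from initialization
online = offline.copy()                  # template updated over time; identical here

candidate = hsv[42:82, 62:102]           # a candidate patch from motion segmentation
d_off = cv2.compareHist(offline, hsv_hist(candidate), cv2.HISTCMP_BHATTACHARYYA)
d_on = cv2.compareHist(online, hsv_hist(candidate), cv2.HISTCMP_BHATTACHARYYA)
# Double-template matching: the candidate is accepted as the target only if it
# is close to both the offline and the online template
print(f"offline distance {d_off:.3f}, online distance {d_on:.3f}")
```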
Axelrod Model of Social Influence with Cultural Hybridization
NASA Astrophysics Data System (ADS)
Radillo-Díaz, Alejandro; Pérez, Luis A.; Del Castillo-Mussot, Marcelo
2012-10-01
Since cultural interactions between a pair of social agents involve changes in both individuals, we present simulations of a new model based on Axelrod's homogenization mechanism that includes hybridization or mixture of the agents' features. In this new hybridization model, once a cultural feature of a pair of agents has been chosen for the interaction, the average of the values for this feature is reassigned as the new value for both agents after interaction. Moreover, a parameter representing social tolerance is implemented in order to quantify whether agents are similar enough to engage in interaction, as well as to determine whether they belong to the same cluster of similar agents after the system has reached the frozen state. The transitions from a homogeneous state to a fragmented one decrease in abruptness as tolerance is increased. Additionally, the entropy associated with the system presents a maximum within the transition, the width of which increases as tolerance does. Moreover, a plateau was found inside the transition for a low-tolerance system of agents with only two cultural features.
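The hybridization rule is simple to simulate; a minimal sketch (the lattice size, tolerance value, and the cluster-counting heuristic at the end are our assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(6)
L, F, tol = 10, 2, 0.3               # lattice size, cultural features, tolerance
culture = rng.random((L, L, F))      # continuous feature values in [0, 1]
moves = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]])

for _ in range(100000):
    i, j = rng.integers(L, size=2)
    ni, nj = (np.array([i, j]) + moves[rng.integers(4)]) % L   # periodic neighbor
    if np.abs(culture[i, j] - culture[ni, nj]).mean() < tol:   # similar enough to interact
        f = rng.integers(F)                                    # pick one feature
        avg = 0.5 * (culture[i, j, f] + culture[ni, nj, f])
        culture[i, j, f] = culture[ni, nj, f] = avg            # hybridization: both take the mean

# Crude cluster count: agents binned into tolerance-sized cells in feature space
bins = {tuple(v) for v in np.floor(culture.reshape(-1, F) / tol).astype(int)}
print("approximate number of cultural domains:", len(bins))
```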
Dead Reckoning in the Desert Ant: A Defence of Connectionist Models.
Mole, Christopher
2014-01-01
Dead reckoning is a feature of the navigation behaviour shown by several creatures, including the desert ant. Recent work by C. Randy Gallistel shows that some connectionist models of dead reckoning face important challenges. These challenges are thought to arise from essential features of the connectionist approach, and have therefore been taken to show that connectionist models are unable to explain even the most primitive of psychological phenomena. I show that Gallistel's challenges are successfully met by one recent connectionist model, proposed by Ulysses Bernardet, Sergi Bermúdez i Badia, and Paul F.M.J. Verschure. The success of this model suggests that there are ways to implement dead reckoning with neural circuits that fall within the bounds of what many people regard as neurobiologically plausible, and so that the wholesale dismissal of the connectionist modelling project remains premature.
Improving liver fibrosis diagnosis based on forward and backward second harmonic generation signals
NASA Astrophysics Data System (ADS)
Peng, Qiwen; Zhuo, Shuangmu; So, Peter T. C.; Yu, Hanry
2015-02-01
The correlation between the forward second harmonic generation (SHG) signal and the backward SHG signal in different liver fibrosis stages was investigated. We found that three features, including the collagen percentage for forward SHG, the collagen percentage for backward SHG, and the average intensity ratio of the two kinds of SHG signals, can quantitatively stage liver fibrosis in a thioacetamide-induced rat model. We demonstrated that the combination of all three features by using a support vector machine classification algorithm can provide a more accurate prediction than each feature alone in fibrosis diagnosis.
NASA Astrophysics Data System (ADS)
Turner, Andrew
2014-05-01
In this study we examine monsoon onset characteristics in 20th century historical and AMIP integrations of the CMIP5 multi-model database. We use a period of 1979-2005, common to both the AMIP and historical integrations. While all available observed boundary conditions, including sea-surface temperature (SST), are prescribed in the AMIP integrations, the historical integrations feature ocean-atmosphere models that generate SSTs via air-sea coupled processes. The onset of Indian monsoon rainfall is shown to be systematically earlier in the AMIP integrations when comparing groups of models that provide both experiments, and in the multi-model ensemble means for each experiment in turn. We also test some common circulation indices of the monsoon onset including the horizontal shear in the lower troposphere and wind kinetic energy. Since AMIP integrations are forced by observed SSTs and CMIP5 models are known to have large cold SST biases in the northern Arabian Sea during winter and spring that limits their monsoon rainfall, we relate the delayed onset in the coupled historical integrations to cold Arabian Sea SST biases. This study provides further motivation for solving cold SST biases in the Arabian Sea in coupled models.
Moving Beyond ERP Components: A Selective Review of Approaches to Integrate EEG and Behavior
Bridwell, David A.; Cavanagh, James F.; Collins, Anne G. E.; Nunez, Michael D.; Srinivasan, Ramesh; Stober, Sebastian; Calhoun, Vince D.
2018-01-01
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or “components” derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single-trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to access associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements, and will help inform our understandings of brain dynamics that contribute to moment-to-moment cognitive function. PMID:29632480
Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W
2016-08-01
Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. 14 models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. 4 models had ROC >0.6. Shrinkage was required for all predictive models' coefficients ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models which include baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 and 1 for all patients. Predictive models vary in performance and transferability illustrating the need for improvements in model development and reporting. Several models showed reasonable potential but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, Brian; Wood, Richard T.
1997-01-01
A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
NASA Astrophysics Data System (ADS)
Schmidt-Rohr, Klaus; Chen, Q.
2006-03-01
The perfluorinated ionomer, Nafion, which consists of a (-CF2-)n backbone and charged side branches, is useful as a proton exchange membrane in H2/O2 fuel cells. A modified model of the nanometer-scale structure of hydrated Nafion will be presented. It features hydrated ionic clusters familiar from some previous models, but is based most prominently on pronounced backbone rigidity between branch points and limited orientational correlation of local chain axes. These features have been revealed by solid-state NMR measurements, which take advantage of fast rotations of the backbones around their local axes. The resulting alternating curvature of the backbones towards the hydrated clusters also better satisfies the requirement of dense space filling in solids. Simulations based on this ``alternating curvature'' model reproduce orientational correlation data from NMR, as well as scattering features such as the ionomer peak and the I(q) ˜ 1/q power law at small q values, which can be attributed to modulated cylinders resulting from the chain stiffness. The shortcomings of previous models, including Gierke's cluster model and more recent lamellar or bundle models, in matching all requirements imposed by the experimental data will be discussed.
ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION
Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.
2012-01-01
Background Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkova, Svitlana; Shaffer, Kyle J.; Jang, Jin Yea
Pew research polls report 62 percent of U.S. adults get news on social media (Gottfried and Shearer, 2016). In a December poll, 64 percent of U.S. adults said that “made-up news” has caused a “great deal of confusion” about the facts of current events (Barthel et al., 2016). Fabricated stories spread in social media, ranging from deliberate propaganda to hoaxes and satire, contribute to this confusion in addition to having serious effects on global stability. In this work we build predictive models to classify 130 thousand news tweets as suspicious or verified, and predict four subtypes of suspicious news – satire, hoaxes, clickbait and propaganda. We demonstrate that neural network models trained on tweet content and social network interactions outperform lexical models. Unlike previous work on deception detection, we find that adding syntax and grammar features to our models decreases performance. Incorporating linguistic features, including bias and subjectivity, improves classification results; however, social interaction features are most informative for finer-grained separation between our four types of suspicious news posts.
The NASA MSFC Earth Global Reference Atmospheric Model-2007 Version
NASA Technical Reports Server (NTRS)
Leslie, F.W.; Justus, C.G.
2008-01-01
Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA/Marshall Space Flight Center (MSFC) Global Reference Atmospheric Model (GRAM) was developed in response to the need for a design reference atmosphere that provides complete global geographical variability, and complete altitude coverage (surface to orbital altitudes) as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. A unique feature of GRAM is that, in addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations in these atmospheric parameters (e.g. fluctuations due to turbulence and other atmospheric perturbation phenomena). A summary comparing GRAM features to characteristics and features of other reference or standard atmospheric models can be found in the Guide to Reference and Standard Atmosphere Models. The original GRAM has undergone a series of improvements over the years with recent additions and changes. The software program is called Earth-GRAM2007 to distinguish it from similar programs for other bodies (e.g. Mars, Venus, Neptune, and Titan). However, in order to make this Technical Memorandum (TM) more readable, the software will be referred to simply as GRAM07 or GRAM unless additional clarity is needed. Section 1 provides an overview of the basic features of GRAM07 including the newly added features. Section 2 provides a more detailed description of GRAM07 and how the model output is generated. Section 3 presents sample results. Appendices A and B describe the Global Upper Air Climatic Atlas (GUACA) data and the Global Gridded Air Statistics (GGUAS) database. Appendix C provides instructions for compiling and running GRAM07. Appendix D gives a description of the required NAMELIST format input. Appendix E gives sample output. Appendix F provides a list of available parameters to enable the user to generate special output. Appendix G gives an example and guidance on incorporating GRAM07 as a subroutine in other programs such as trajectory codes or orbital propagation routines.
Fast method for reactor and feature scale coupling in ALD and CVD
Yanguas-Gil, Angel; Elam, Jeffrey W.
2017-08-08
Transport and surface chemistry of certain deposition techniques are modeled. Methods provide a model of the transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). Methods provide for determination of statistical information about the trajectory of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, as well as for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
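The single-particle Markov chain view described above can be illustrated with standard absorbing-chain algebra; a sketch for a 1-D feature where each wall collision ends in a reaction with probability beta (the geometry and sticking probability are illustrative assumptions, not the patent's parameters):

```python
import numpy as np

# Transport down a 1-D feature (trench) as an absorbing Markov chain.
# States 0..n-1 are wall segments; at each collision a molecule either reacts
# (sticks) with probability beta, or survives and hops to a neighboring segment.
# The fundamental matrix gives expected collisions per segment before absorption.
n, beta = 20, 0.05
P = np.zeros((n, n))
for s in range(n):
    for t in (s - 1, s + 1):
        if 0 <= t < n:
            P[s, t] = (1 - beta) / 2   # survive the collision and hop
# Remaining row probability is absorption: reaction, or escape at the open ends.

N = np.linalg.inv(np.eye(n) - P)       # fundamental matrix of the chain
visits = N[0]                          # expected collisions starting at the mouth
print("expected wall collisions per segment:", np.round(visits, 2))
```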
Discontinuous Galerkin methods for modeling Hurricane storm surge
NASA Astrophysics Data System (ADS)
Dawson, Clint; Kubatko, Ethan J.; Westerink, Joannes J.; Trahan, Corey; Mirabito, Christopher; Michoski, Craig; Panda, Nishant
2011-09-01
Storm surge due to hurricanes and tropical storms can result in significant loss of life, property damage, and long-term damage to coastal ecosystems and landscapes. Computer modeling of storm surge can be used for two primary purposes: forecasting of surge as storms approach land for emergency planning and evacuation of coastal populations, and hindcasting of storms for determining risk, development of mitigation strategies, coastal restoration and sustainability. Storm surge is modeled using the shallow water equations, coupled with wind forcing and in some events, models of wave energy. In this paper, we will describe a depth-averaged (2D) model of circulation in spherical coordinates. Tides, riverine forcing, atmospheric pressure, bottom friction, the Coriolis effect and wind stress are all important for characterizing the inundation due to surge. The problem is inherently multi-scale, both in space and time. To model these problems accurately requires significant investments in acquiring high-fidelity input (bathymetry, bottom friction characteristics, land cover data, river flow rates, levees, raised roads and railways, etc.), accurate discretization of the computational domain using unstructured finite element meshes, and numerical methods capable of capturing highly advective flows, wetting and drying, and multi-scale features of the solution. The discontinuous Galerkin (DG) method appears to allow for many of the features necessary to accurately capture storm surge physics. The DG method was developed for modeling shocks and advection-dominated flows on unstructured finite element meshes. It easily allows for adaptivity in both mesh (h) and polynomial order (p) for capturing multi-scale spatial events. Mass conservative wetting and drying algorithms can be formulated within the DG method. In this paper, we will describe the application of the DG method to hurricane storm surge. We discuss the general formulation, and new features which have been added to the model to better capture surge in complex coastal environments. These features include modifications to the method to handle spherical coordinates and maintain still flows, improvements in the stability post-processing (i.e. slope-limiting), and the modeling of internal barriers for capturing overtopping of levees and other structures. We will focus on applications of the model to recent events in the Gulf of Mexico, including Hurricane Ike.
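As one concrete example of the stability post-processing mentioned above, here is a minmod slope limiter of the kind commonly used with DG methods (a 1-D sketch with made-up cell data, not the authors' 2D implementation):

```python
import numpy as np

def minmod(a, b, c):
    # Returns the smallest-magnitude argument when all three share a sign, else 0;
    # this suppresses spurious oscillations in the reconstructed slopes near shocks.
    s = (np.sign(a) + np.sign(b) + np.sign(c)) / 3.0
    mins = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(np.abs(s) == 1, s * mins, 0.0)

# Piecewise-linear cells: means u and (scaled) slopes du over a 1-D mesh
u = np.array([1.0, 1.0, 0.9, 0.1, 0.0, 0.0])
du = np.array([0.0, 0.05, -0.9, -0.9, 0.02, 0.0])   # unlimited DG slopes

fwd = np.diff(u, append=u[-1])     # forward cell differences u_{i+1} - u_i
bwd = np.diff(u, prepend=u[0])     # backward cell differences u_i - u_{i-1}
du_lim = minmod(du, fwd, bwd)      # limited slopes
print("limited slopes:", np.round(du_lim, 3))
```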
NASA Astrophysics Data System (ADS)
de Smet, J. H.; van den Berg, A. P.; Vlaar, N. J.
1998-10-01
The long-term growth and stability of compositionally layered continental upper mantle has been investigated by numerical modelling. We present the first numerical model of a convecting mantle including differentiation through partial melting resulting in a stable compositionally layered continental upper mantle structure. This structure includes a continental root extending to a depth of about 200 km. The model covers the upper mantle including the crust and incorporates physical features important for the study of the continental upper mantle during secular cooling of the Earth since the Archaean. Among these features are: a partial melt generation mechanism allowing consistent recurrent melting, time-dependent non-uniform radiogenic heat production, and a temperature- and pressure-dependent rheology. The numerical results reveal a long-term growth mechanism of the continental compositional root. This mechanism operates through episodical injection of small diapiric upwellings from the deep layer of undepleted mantle into the continental root which consists of compositionally distinct depleted mantle material. Our modelling results show the layered continental structure to remain stable during at least 1.5 Ga. After this period mantle differentiation through partial melting ceases due to the prolonged secular cooling and small-scale instabilities set in through continental delamination. This stable period of 1.5 Ga is related to a number of limitations in our model. By improving on these limitations in the future this stable period will be extended to more realistic values.
Models for At Risk Youth. Final Report.
ERIC Educational Resources Information Center
Woloszyk, Carl A.
Secondary data sources, including the ERIC and National Dropout Prevention Center databases, were reviewed to identify programs and strategies effective in keeping at-risk youth in school and helping them make successful school-to-work transitions. The dropout prevention model that was identified features a system of prevention, mediation,…
Demonstration of the Capabilities of the KINEROS2 – AGWA 3.0 Suite of Modeling Tools
This poster and computer demonstration illustrates a sampling of the wide range of applications that are possible using the KINEROS2 - AGWA suite of modeling tools. Applications include: 1) Incorporation of Low Impact Development (LID) features; 2) A real-time flash flood forecas...
ERIC Educational Resources Information Center
Ciechanowski, Kathryn M.
2014-01-01
This research explores third-grade science and language instruction for emergent bilinguals designed through a framework of planning, lessons, and assessment in an interconnected model including content, linguistic features, and functions. Participants were a team of language specialist, classroom teacher, and researcher who designed…
Museums in the Commercial Marketplace: The Need for Licensing Agreements.
ERIC Educational Resources Information Center
Hodes, Scott; Gross, Karen
1978-01-01
Discussed are the major features of a Model Licensing Agreement to be used by museums in any commercial reproduction venture. They include rights and obligations of the respective parties, maintenance of quality control, payment of royalties, arbitration, and termination provisions. The Model Licensing Agreement is appended. (JMD)
Word Recognition: Theoretical Issues and Instructional Hints.
ERIC Educational Resources Information Center
Smith, Edward E.; Kleiman, Glenn M.
Research on adult readers' word recognition skills is used in this paper to develop a general information processing model of reading. Stages of the model include feature extraction, interpretation, lexical access, working memory, and integration. Of those stages, particular attention is given to the units of interpretation, speech recoding and…
NASA Technical Reports Server (NTRS)
Ahrens, Thomas J.
2001-01-01
We examined the von Mises and Mohr-Coulomb strength models with and without damage effects and developed a model for dilatancy. The models and results are given in O'Keefe et al. We found that by incorporating damage into the models we could, in a single integrated impact calculation starting with the bolide in the atmosphere, produce final crater profiles having the major features found in the field measurements. These features included a central uplift, an inner ring, circular terracing and faulting. This was accomplished with undamaged surface strengths of approximately 0.1 GPa and at-depth strengths of approximately 1.0 GPa. We modeled the damage in geologic materials using a phenomenological approach, which coupled the Johnson-Cook damage model with the CTH code geologic strength model. The objective here was not to determine the distribution of fragment sizes, but rather to determine the effect of brecciated and comminuted material on the crater evolution, fault production, ejecta distribution, and final crater morphology.
TSAPA: identification of tissue-specific alternative polyadenylation sites in plants.
Ji, Guoli; Chen, Moliang; Ye, Wenbin; Zhu, Sheng; Ye, Congting; Su, Yaru; Peng, Haonan; Wu, Xiaohui
2018-06-15
Alternative polyadenylation (APA) is now emerging as a widespread mechanism modulated tissue-specifically, which highlights the need to define tissue-specific poly(A) sites for profiling APA dynamics across tissues. We have developed an R package called TSAPA based on the machine learning model for identifying tissue-specific poly(A) sites in plants. A feature space including more than 200 features was assembled to specifically characterize poly(A) sites in plants. The classification model in TSAPA can be customized by selecting desirable features or classifiers. TSAPA is also capable of predicting tissue-specific poly(A) sites in unannotated intergenic regions. TSAPA will be a valuable addition to the community for studying dynamics of APA in plants. https://github.com/BMILAB/TSAPA. Supplementary data are available at Bioinformatics online.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
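The OGM described above belongs to the family of Nesterov-accelerated first-order methods; here is a hedged sketch of an accelerated proximal gradient (FISTA-style) iteration for an l1-regularized logistic loss, which attains the O(1/k²) rate by combining the current gradient with momentum from historical iterates (the CRF structure is dropped for brevity; this is not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 200, 50
X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:5] = rng.standard_normal(5)   # sparse ground truth
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

lam = 0.01                                       # l1 regularization weight
L = np.linalg.norm(X, 2) ** 2 / (4 * n)          # Lipschitz constant of the gradient

def grad(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / n

w, z, t = np.zeros(d), np.zeros(d), 1.0
for k in range(500):
    w_next = z - grad(z) / L
    # Soft thresholding: the proximal operator of the l1 norm
    w_next = np.sign(w_next) * np.maximum(np.abs(w_next) - lam / L, 0.0)
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = w_next + (t - 1) / t_next * (w_next - w)  # momentum from historical iterates
    w, t = w_next, t_next

print("nonzero coefficients recovered:", int(np.sum(np.abs(w) > 1e-6)))
```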
3P: Personalized Pregnancy Prediction in IVF Treatment Process
NASA Astrophysics Data System (ADS)
Uyar, Asli; Ciray, H. Nadir; Bener, Ayse; Bahceci, Mustafa
We present an intelligent learning system for improving the pregnancy success rate of IVF treatment. Our proposed model uses an SVM-based classification system for training a model from past data and making predictions on the implantation outcome of new embryos. This study employs an embryo-centered approach. Each embryo is represented with a data feature vector including 17 features related to patient characteristics, clinical diagnosis, treatment method and embryo morphological parameters. Our experimental results demonstrate a prediction accuracy of 82.7%. We obtained the IVF dataset from the Bahceci Women Health Care Centre in Istanbul, Turkey.
Jena, Manas Kumar; Samantaray, Subhransu Ranjan
2016-01-01
This paper presents a data-mining-based intelligent differential relaying scheme for transmission lines that include flexible AC transmission system (FACTS) devices, such as the unified power flow controller (UPFC), and wind farms. Initially, the current and voltage signals are processed through an extended Kalman filter phasor measurement unit for phasor estimation, and 21 potential features are computed at both ends of the line. Once the features are extracted at both ends, the corresponding differential features are derived. These differential features are fed to a data-mining model known as a decision tree (DT) to provide the final relaying decision. The proposed technique has been extensively tested for a single-circuit transmission line including UPFC and wind farms with in-feed, and for a double-circuit line with UPFC on one line and a wind farm as one of the substations, with wide variations in operating parameters. The test results obtained from simulation as well as real-time digital simulator testing indicate that the DT-based intelligent differential relaying scheme is highly reliable and accurate, with a response time of 2.25 cycles from fault inception.
Breaking the polar-nonpolar division in solvation free energy prediction.
Wang, Bao; Wang, Chengzhang; Wu, Kedi; Wei, Guo-Wei
2018-02-05
Implicit solvent models divide solvation free energies into polar and nonpolar additive contributions, whereas polar and nonpolar interactions are in fact inseparable and nonadditive. We present a feature functional theory (FFT) framework to break this ad hoc division. The essential ideas of FFT are as follows: (i) representability assumption: there exists a microscopic feature vector that can uniquely characterize and distinguish one molecule from another; (ii) feature-function relationship assumption: the macroscopic features of a molecule, including its solvation free energy, are functionals of the microscopic feature vectors; and (iii) similarity assumption: molecules with similar microscopic features have similar macroscopic properties, such as solvation free energies. Based on these assumptions, solvation free energy prediction is carried out in the following protocol. First, we construct a molecular microscopic feature vector that efficiently characterizes the solvation process using quantum mechanics and Poisson-Boltzmann theory. Microscopic feature vectors are combined with macroscopic features, that is, physical observables, to form extended feature vectors. Additionally, we partition a solvation dataset into queries according to molecular composition. Moreover, for each target molecule, we adopt a machine learning algorithm for its nearest neighbor search, based on the selected microscopic feature vectors. Finally, from the extended feature vectors of the obtained nearest neighbors, we construct a functional of solvation free energy, which is employed to predict the solvation free energy of the target molecule. The proposed FFT model has been extensively validated on a large dataset of 668 molecules. The leave-one-out test gives an optimal root-mean-square error (RMSE) of 1.05 kcal/mol. FFT predictions of the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 challenge sets deliver RMSEs of 0.61, 1.86, 1.64, 0.86, and 1.14 kcal/mol, respectively. Using a test set of 94 molecules and its associated training set, the present approach was carefully compared with a classic solvation model based on weighted solvent accessible surface area.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking as a function of tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and the German Matrix sentence test speech reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and to obtain objective thresholds with fewer assumptions than traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception thresholds observed in modulated noise compared with stationary noise.
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information rather than to combination across independent features.
State-and-transition simulation models: a framework for forecasting landscape change
Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée
2016-01-01
A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.
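The core STSM computation (each cell stepped forward as a discrete-time stochastic process over discrete states) can be sketched in a few lines; the three-state transition matrix below is hypothetical, and ST-Sim's age counters, transition targets, and spatial multipliers are omitted.

```python
import numpy as np

# Hypothetical annual transition probabilities between three LULC states.
states = ["grassland", "shrubland", "urban"]
P = np.array([[0.95, 0.04, 0.01],    # grassland -> ...
              [0.02, 0.97, 0.01],    # shrubland -> ...
              [0.00, 0.00, 1.00]])   # urban is absorbing

rng = np.random.default_rng(42)
landscape = rng.integers(0, 2, size=10_000)   # cells start as grassland/shrubland

for year in range(50):                        # 50-year projection, as in the example
    # Sample each cell's next state independently from its row of P
    # by inverse-CDF sampling.
    u = rng.random(landscape.size)
    cdf = P[landscape].cumsum(axis=1)
    landscape = (u[:, None] > cdf).sum(axis=1)

print({s: int((landscape == i).sum()) for i, s in enumerate(states)})
```

Running the loop many times and summarizing the spread of the final state counts is how a stochastic framework like this characterizes projection uncertainty.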
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
SU-F-R-24: Identifying Prognostic Imaging Biomarkers in Early Stage Lung Cancer Using Radiomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, X; Wu, J; Cui, Y
2016-06-15
Purpose: Patients diagnosed with early stage lung cancer have favorable outcomes when treated with surgery or stereotactic radiotherapy. However, a significant proportion (∼20%) of patients will develop metastatic disease and eventually die of the disease. The purpose of this work is to identify quantitative imaging biomarkers from CT for predicting overall survival in early stage lung cancer. Methods: In this institutional review board-approved, HIPAA-compliant retrospective study, we analyzed the diagnostic CT scans of 110 patients with early stage lung cancer. Data from 70 patients were used for training/discovery purposes, while those of the remaining 40 patients were used for independent validation. We extracted 191 radiomic features, including statistical, histogram, morphological, and texture features. A Cox proportional hazards regression model, coupled with the least absolute shrinkage and selection operator (LASSO), was used to predict overall survival based on the radiomic features. Results: The optimal prognostic model included three image features from the Laws and wavelet texture families. In the discovery cohort, this model achieved a concordance index CI=0.67, and it separated the low-risk from high-risk groups in predicting overall survival (hazard ratio=2.72, log-rank p=0.007). In the independent validation cohort, this radiomic signature achieved a CI=0.62, and significantly stratified the low-risk and high-risk groups in terms of overall survival (hazard ratio=2.20, log-rank p=0.042). Conclusion: We identified CT imaging characteristics associated with overall survival in early stage lung cancer. If prospectively validated, this could potentially help identify high-risk patients who might benefit from adjuvant systemic therapy.
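A hedged sketch of the LASSO-penalized Cox regression used here, via the lifelines package on synthetic stand-in data; the penalty strength, feature names, and column layout are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the radiomic feature table (the study used 191
# features; 5 shown here), plus survival time and event indicator columns.
rng = np.random.default_rng(7)
df = pd.DataFrame(rng.normal(size=(110, 5)),
                  columns=[f"radiomic_{i}" for i in range(5)])
df["time"] = rng.exponential(scale=24, size=110)    # follow-up in months
df["event"] = rng.integers(0, 2, size=110)          # 1 = death observed

# An l1 (LASSO) penalty drives most coefficients to zero, performing the
# feature selection the abstract describes on top of the Cox model.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")
print(cph.concordance_index_)   # analogous to the reported CI values
```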
Torres-Mejía, Gabriela; De Stavola, Bianca; Allen, Diane S; Pérez-Gavilán, Juan J; Ferreira, Jorge M; Fentiman, Ian S; Dos Santos Silva, Isabel
2005-05-01
Mammographic features are known to be associated with breast cancer, but the magnitude of the effect differs markedly from study to study. Methods to assess mammographic features range from subjective qualitative classifications to computer-automated quantitative measures. We used data from the UK Guernsey prospective studies to examine the relative value of these methods in predicting breast cancer risk. In all, 3,211 women aged ≥35 years who had a mammogram taken in 1986 to 1989 were followed up to the end of October 2003, with 111 developing breast cancer during this period. Mammograms were classified using the subjective qualitative Wolfe classification and several quantitative mammographic features measured using computer-based techniques. Breast cancer risk was positively associated with high-grade Wolfe classification, percent breast density, and area of dense tissue, and negatively associated with area of lucent tissue, fractal dimension, and lacunarity. Inclusion of the quantitative measures in the same model identified area of dense tissue and lacunarity as the best predictors of breast cancer, with risk increasing by 59% [95% confidence interval (95% CI), 29-94%] per SD increase in total area of dense tissue but declining by 39% (95% CI, 53-22%) per SD increase in lacunarity, after adjusting for each other and for other confounders. Comparison of models that included both the qualitative Wolfe classification and these two quantitative measures with models that included either the qualitative or the two quantitative variables showed that they all made significant contributions to the prediction of breast cancer risk. These findings indicate that breast cancer risk is affected not only by the amount of mammographic density but also by the degree of heterogeneity of the parenchymal pattern and, presumably, by other features captured by the Wolfe classification.
Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T.
Citak-Er, Fusun; Firat, Zeynep; Kovanlikaya, Ilhami; Ture, Ugur; Ozturk-Isik, Esin
2018-06-15
The objective of this study was to assess the contribution of multi-parametric (mp) magnetic resonance imaging (MRI) quantitative features in the machine learning-based grading of gliomas with a multi-region-of-interest approach. Forty-three patients who were newly diagnosed as having a glioma were included in this study. The patients were scanned prior to any therapy using a standard brain tumor magnetic resonance (MR) imaging protocol that included T1- and T2-weighted, diffusion-weighted, diffusion tensor, MR perfusion, and MR spectroscopic imaging. Three different regions-of-interest were drawn for each subject to encompass the tumor, the immediate tumor periphery, and distant peritumoral edema/normal tissue. The normalized mp-MRI features were used to build machine-learning models for differentiating low-grade gliomas (WHO grades I and II) from high grades (WHO grades III and IV). In order to assess the contribution of regional mp-MRI quantitative features to the classification models, a support vector machine-based recursive feature elimination method was applied prior to classification. A machine-learning model based on the support vector machine algorithm with a linear kernel achieved an accuracy of 93.0%, a specificity of 86.7%, and a sensitivity of 96.4% for the grading of gliomas using ten-fold cross validation based on the proposed subset of the mp-MRI features. In this study, machine learning based on multiregional and multi-parametric MRI data has proven to be an important tool in grading glial tumors accurately, even in this limited patient population. Future studies are needed to investigate the use of machine learning algorithms for brain tumor classification in a larger patient cohort.
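The described pipeline (SVM-based recursive feature elimination followed by a linear-kernel SVM with ten-fold cross validation) maps onto standard scikit-learn components; a sketch with synthetic stand-in features, where the retained feature count is an assumption:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for normalized mp-MRI features (real cohort: 43 patients).
rng = np.random.default_rng(3)
X = rng.normal(size=(43, 60))        # multi-region, multi-parametric features
y = rng.integers(0, 2, size=43)      # 0 = low grade, 1 = high grade

# SVM-based recursive feature elimination, then a linear-kernel SVM,
# evaluated with ten-fold cross validation as in the study. The number of
# features to keep (10) is an illustrative choice, not the paper's.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10)
model = make_pipeline(StandardScaler(), selector, SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=10).mean())
```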
Geist; Dauble
1998-09-01
Knowledge of the three-dimensional connectivity between rivers and groundwater within the hyporheic zone can be used to improve the definition of fall chinook salmon (Oncorhynchus tshawytscha) spawning habitat. Information exists on the microhabitat characteristics that define suitable salmon spawning habitat. However, traditional spawning habitat models that use these characteristics to predict available spawning habitat are restricted because they cannot account for the heterogeneous nature of rivers. We present a conceptual spawning habitat model for fall chinook salmon that describes how geomorphic features of river channels create hydraulic processes, including hyporheic flows, that influence where salmon spawn in unconstrained reaches of large mainstem alluvial rivers. Two case studies based on empirical data from fall chinook salmon spawning areas in the Hanford Reach of the Columbia River are presented to illustrate important aspects of our conceptual model. We suggest that traditional habitat models and our conceptual model be combined to predict the limits of suitable fall chinook salmon spawning habitat. This approach can incorporate quantitative measures of river channel morphology, including general descriptors of geomorphic features at different spatial scales, in order to understand the processes influencing redd site selection and spawning habitat use. This information is needed in order to protect existing salmon spawning habitat in large rivers, as well as to recover habitat already lost.
Key words: hyporheic zone; geomorphology; spawning habitat; large rivers; fall chinook salmon; habitat management
Quantitative Analysis of the Cervical Texture by Ultrasound and Correlation with Gestational Age.
Baños, Núria; Perez-Moreno, Alvaro; Migliorelli, Federico; Triginer, Laura; Cobo, Teresa; Bonet-Carne, Elisenda; Gratacos, Eduard; Palacio, Montse
2017-01-01
Quantitative texture analysis has been proposed to extract robust features from the ultrasound image to detect subtle changes in the textures of the images. The aim of this study was to evaluate the feasibility of quantitative cervical texture analysis to assess cervical tissue changes throughout pregnancy. This was a cross-sectional study including singleton pregnancies between 20.0 and 41.6 weeks of gestation from women who delivered at term. Cervical length was measured, and a selected region of interest in the cervix was delineated. A model to predict gestational age based on features extracted from cervical images was developed following three steps: data splitting, feature transformation, and regression model computation. Seven hundred images, 30 per gestational week, were included for analysis. There was a strong correlation between the gestational age at which the images were obtained and the estimated gestational age by quantitative analysis of the cervical texture (R = 0.88). This study provides evidence that quantitative analysis of cervical texture can extract features from cervical ultrasound images which correlate with gestational age. Further research is needed to evaluate its applicability as a biomarker of the risk of spontaneous preterm birth, as well as its role in cervical assessment in other clinical situations in which cervical evaluation might be relevant.
A computer program for predicting nonlinear uniaxial material responses using viscoplastic models
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Thompson, R. L.
1984-01-01
A computer program was developed for predicting nonlinear uniaxial material responses using viscoplastic constitutive models. Four specific models, i.e., those due to Miller, Walker, Krieg-Swearengen-Rhode, and Robinson, are included. Any other unified model is easily implemented into the program in the form of subroutines. Analysis features include stress-strain cycling, creep response, stress relaxation, thermomechanical fatigue loop, or any combination of these responses. An outline is given on the theoretical background of uniaxial constitutive models, analysis procedure, and numerical integration methods for solving the nonlinear constitutive equations. In addition, a discussion on the computer program implementation is also given. Finally, seven numerical examples are included to demonstrate the versatility of the computer program developed.
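None of the four named models is reproduced below, but the kind of explicit time integration such a program performs can be sketched with a generic power-law overstress (Perzyna-type) unified model under strain control; all material constants are hypothetical.

```python
import numpy as np

# Generic Perzyna-type unified viscoplastic model (illustrative, not one of
# the four models named in the abstract): the inelastic strain rate grows as
# a power of the overstress beyond a drag stress D.
E, D, n, A = 200e3, 150.0, 5.0, 1e-6   # MPa, MPa, -, 1/s (hypothetical values)

def uniaxial_response(strain_rate, t_end, dt=1e-3):
    """Explicit-Euler integration of stress for a constant applied strain rate,
    the simplest of the numerical integration methods the abstract mentions."""
    stress, eps_in = 0.0, 0.0
    history = []
    for k in range(int(t_end / dt)):
        overstress = max(abs(stress) - D, 0.0)
        eps_in_rate = A * np.sign(stress) * (overstress / D) ** n
        eps_in += eps_in_rate * dt                 # accumulate inelastic strain
        total_strain = strain_rate * (k + 1) * dt
        stress = E * (total_strain - eps_in)       # elastic stress-strain relation
        history.append((total_strain, stress))
    return history

# Stress-strain curve for a monotonic pull at 1e-4 /s, as in a tensile test.
curve = uniaxial_response(strain_rate=1e-4, t_end=50.0)
print(curve[-1])
```

Creep or relaxation responses of the kind the program analyzes follow from the same loop by holding stress or strain fixed instead of ramping the strain.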
BehavePlus fire modeling system, version 5.0: Design and Features
Faith Ann Heinsch; Patricia L. Andrews
2010-01-01
The BehavePlus fire modeling system is a computer program that is based on mathematical models that describe wildland fire behavior and effects and the fire environment. It is a flexible system that produces tables, graphs, and simple diagrams. It can be used for a host of fire management applications, including projecting the behavior of an ongoing fire, planning...
Introducing improved structural properties and salt dependence into a coarse-grained model of DNA
NASA Astrophysics Data System (ADS)
Snodin, Benedict E. K.; Randisi, Ferdinando; Mosayebi, Majid; Šulc, Petr; Schreck, John S.; Romano, Flavio; Ouldridge, Thomas E.; Tsukanov, Roman; Nir, Eyal; Louis, Ard A.; Doye, Jonathan P. K.
2015-06-01
We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.
The contraction/expansion history of Charon with implications for its planetary-scale tectonic belt
NASA Astrophysics Data System (ADS)
Malamud, Uri; Perets, Hagai B.; Schubert, Gerald
2017-06-01
The New Horizons mission to the Kuiper belt has recently revealed intriguing features on the surface of Charon, including a network of chasmata, cutting across or around a series of high topography features, conjoining to form a belt. It is proposed that this tectonic belt is a consequence of contraction/expansion episodes in the moon's evolution associated particularly with compaction, differentiation and geochemical reactions of the interior. The proposed scenario involves no need for solidification of a vast subsurface ocean and/or a warm initial state. This scenario is based on a new, detailed thermo-physical evolution model of Charon that includes multiple processes. According to the model, Charon experiences two contraction/expansion episodes in its history that may provide the proper environment for the formation of the tectonic belt. This outcome remains qualitatively the same, for several different initial conditions and parameter variations. The precise orientation of Charon's tectonic belt, and the cryovolcanic features observed south of the tectonic belt may have involved a planetary-scale impact, that occurred only after the belt had already formed.
Simmering, Vanessa R.; Miller, Hilary E.; Bohache, Kevin
2015-01-01
Research on visual working memory has focused on characterizing the nature of capacity limits as “slots” or “resources” based almost exclusively on adults’ performance with little consideration for developmental change. Here we argue that understanding how visual working memory develops can shed new light onto the nature of representations. We present an alternative model, the Dynamic Field Theory (DFT), which can capture effects that have been previously attributed either to “slot” or “resource” explanations. The DFT includes a specific developmental mechanism to account for improvements in both resolution and capacity of visual working memory throughout childhood. Here we show how development in the DFT can account for different capacity estimates across feature types (i.e., color and shape). The current paper tests this account by comparing children’s (3, 5, and 7 years of age) performance across different feature types. Results showed that capacity for colors increased faster over development than capacity for shapes. A second experiment confirmed this difference across feature types within subjects, but also showed that the difference can be attenuated by testing memory for less-familiar colors. Model simulations demonstrate how developmental changes in connectivity within the model—purportedly arising through experience—can capture differences across feature types. PMID:25737253
Leontidis, Georgios
2017-11-01
The human retina is a diverse and important tissue, widely studied in the context of retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis, and classification models. Linear mixed models are utilised for making the statistical inferences, alongside elastic-net logistic regression, the Boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also to have good discriminative potential. The classification systems yield promising results, with area under the curve values ranging from 0.821 to 0.968 across the four investigated combinations.
Variability In Long-Wave Runup as a Function of Nearshore Bathymetric Features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunkin, Lauren McNeill
Beaches and barrier islands are vulnerable to extreme storm events, such as hurricanes, that can cause severe erosion and overwash to the system. Having dunes and a wide beach in front of coastal infrastructure can provide protection during a storm, but the influence that nearshore bathymetric features have in protecting the beach and barrier island system is not completely understood. The spatial variation in nearshore features, such as sand bars and beach cusps, can alter nearshore hydrodynamics, including wave setup and runup. The influence of bathymetric features on long-wave runup can be used in evaluating the vulnerability of coastal regions to erosion and dune overtopping, evaluating the changing morphology, and implementing plans to protect infrastructure. In this thesis, long-wave runup variation due to changing bathymetric features is quantified with the numerical model XBeach (eXtreme Beach behavior model). Wave heights are analyzed to determine the energy through the surf zone. XBeach assumes that coastal erosion at the land-sea interface is dominated by bound long-wave processes. Several hydrodynamic conditions are used to force the numerical model. The XBeach simulation results suggest that bathymetric irregularity induces significant changes in the extreme long-wave runup at the beach and the energy indicator through the surf zone.
Hayer, Cari-Ann; Chipps, Steven R.; Stone, James J.
2011-01-01
Elevated mercury concentration has been documented in a variety of fish and is a growing concern for human consumption. Here, we explore the influence of physiochemical and watershed attributes on mercury concentration in walleye (Sander vitreus, M.) from natural, glacial lakes in South Dakota. Regression analysis showed that water quality attributes were poor predictors of walleye mercury concentration (R2 = 0.57, p = 0.13). In contrast, models based on watershed features (e.g., lake level changes, watershed slope, agricultural land, wetlands) and local habitat features (i.e., substrate composition, maximum lake depth) explained 81% (p = 0.001) and 80% (p = 0.002) of the variation in walleye mercury concentration. Using an information theoretic approach we evaluated hypotheses related to water quality, physical habitat and watershed features. The best model explaining variation in walleye mercury concentration included local habitat features (Wi = 0.991). These results show that physical habitat and watershed features were better predictors of walleye mercury concentration than water chemistry in glacial lakes of the Northern Great Plains.
Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model
Hopkins, John B.; Ferguson, Jake M.
2012-01-01
Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving, and numerous models have been produced to estimate the diets of animals, each with its benefits and limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMMs and describe a comprehensive SIMM, IsotopeR, that includes all features commonly used in SIMM analysis as well as two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies, as well as for estimating pollution inputs. PMID:22235246
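The arithmetic at the core of any SIMM is isotopic mass balance; a deterministic two-source, one-isotope version (IsotopeR wraps this idea in a Bayesian framework with error terms and additional features) with hypothetical δ13C values:

```python
# Deterministic core of a two-source, one-isotope mixing model. The delta-13C
# signatures below (per mil) are hypothetical, purely for illustration.
d13c_consumer = -22.0
d13c_source_plants = -27.0
d13c_source_meat = -18.0

# Mass balance: d_consumer = p * d_plants + (1 - p) * d_meat; solve for p.
p_plants = (d13c_consumer - d13c_source_meat) / (d13c_source_plants - d13c_source_meat)
print(f"Estimated plant fraction of diet: {p_plants:.2f}")   # -> 0.44
```

With more sources than isotopes the system becomes underdetermined, which is exactly why Bayesian SIMMs such as IsotopeR estimate a posterior distribution over diet proportions rather than a single solution.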
Sequence determinants of improved CRISPR sgRNA design.
Xu, Han; Xiao, Tengfei; Chen, Chen-Hao; Li, Wei; Meyer, Clifford A; Wu, Qiu; Wu, Di; Cong, Le; Zhang, Feng; Liu, Jun S; Brown, Myles; Liu, X Shirley
2015-08-01
The CRISPR/Cas9 system has revolutionized mammalian somatic cell genetics. Genome-wide functional screens using CRISPR/Cas9-mediated knockout or dCas9 fusion-mediated inhibition/activation (CRISPRi/a) are powerful techniques for discovering phenotype-associated gene function. We systematically assessed the DNA sequence features that contribute to single guide RNA (sgRNA) efficiency in CRISPR-based screens. Leveraging the information from multiple designs, we derived a new sequence model for predicting sgRNA efficiency in CRISPR/Cas9 knockout experiments. Our model confirmed known features and suggested new features including a preference for cytosine at the cleavage site. The model was experimentally validated for sgRNA-mediated mutation rate and protein knockout efficiency. Tested on independent data sets, the model achieved significant results in both positive and negative selection conditions and outperformed existing models. We also found that the sequence preference for CRISPRi/a is substantially different from that for CRISPR/Cas9 knockout and propose a new model for predicting sgRNA efficiency in CRISPRi/a experiments. These results facilitate the genome-wide design of improved sgRNA for both knockout and CRISPRi/a studies.
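The published model's exact features are not reproduced here, but the general approach (position-wise one-hot encoding of the spacer sequence fed to a linear classifier) can be sketched as follows; the planted cytosine preference is purely illustrative of how such a preference surfaces as a positive weight.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

BASES = "ACGT"

def one_hot(seq):
    # Position-wise one-hot encoding of a 20-nt spacer: 20 x 4 = 80 features.
    v = np.zeros(len(seq) * 4)
    for i, b in enumerate(seq):
        v[i * 4 + BASES.index(b)] = 1.0
    return v

# Toy training data: random spacers labeled "efficient" when position 17
# (near the cleavage site) is C, a planted version of the preference the
# abstract reports, purely for illustration.
rng = np.random.default_rng(5)
seqs = ["".join(rng.choice(list(BASES), 20)) for _ in range(2000)]
X = np.array([one_hot(s) for s in seqs])
y = np.array([1 if s[16] == "C" else 0 for s in seqs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
# The largest positive coefficient should sit at (position 17, base C).
print(np.unravel_index(np.argmax(clf.coef_), (20, 4)))   # -> (16, 1)
```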
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. Model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.
Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M; Galván-Tejada, Jorge I; Treviño, Victor; Tamez-Peña, Jose
2014-10-01
Early diagnosis of Alzheimer's disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, among which features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image can predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD neuroimaging initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind test accuracy of 0.79. This model included six features, five of them obtained from the MRI images and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index, and the groups were statistically different. These results demonstrate that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and support the ongoing notion that multimodal biomarkers outperform single-modality ones.
Tectonic models for Yucca Mountain, Nevada
O'Leary, Dennis W.
2006-01-01
Performance of a high-level nuclear waste repository at Yucca Mountain hinges partly on long-term structural stability of the mountain, its susceptibility to tectonic disruption that includes fault displacement, seismic ground motion, and igneous intrusion. Because of the uncertainty involved with long-term (10,000 yr minimum) prediction of tectonic events (e.g., earthquakes) and the incomplete understanding of the history of strain and its mechanisms in the Yucca Mountain region, a tectonic model is needed. A tectonic model should represent the structural assemblage of the mountain in its tectonic setting and account for that assemblage through a history of deformation in which all of the observed deformation features are linked in time and space. Four major types of tectonic models have been proposed for Yucca Mountain: a caldera model; simple shear (detachment fault) models; pure shear (planar fault) models; and lateral shear models. Most of the models seek to explain local features in the context of well-accepted regional deformation mechanisms. Evaluation of the models in light of site characterization shows that none of them completely accounts for all the known tectonic features of Yucca Mountain or is fully compatible with the deformation history. The Yucca Mountain project does not endorse a preferred tectonic model. However, most experts involved in the probabilistic volcanic hazards analysis and the probabilistic seismic hazards analysis preferred a planar fault type model.
Additional Improvements to the NASA Lewis Ice Accretion Code LEWICE
NASA Technical Reports Server (NTRS)
Wright, William B.; Bidwell, Colin S.
1995-01-01
Due to the feedback of the user community, three major features have been added to the NASA Lewis ice accretion code LEWICE. These features include: first, further improvements to the numerics of the code so that more time steps can be run and so that the code is more stable; second, inclusion and refinement of the roughness prediction model described in an earlier paper; third, inclusion of multi-element trajectory and ice accretion capabilities to LEWICE. This paper will describe each of these advancements in full and make comparisons with the experimental data available. Further refinement of these features and inclusion of additional features will be performed as more feedback is received.
Measuring and modeling salience with the theory of visual attention.
Krüger, Alexander; Tünnermann, Jan; Scharlau, Ingrid
2017-08-01
For almost three decades, the theory of visual attention (TVA) has been successful in mathematically describing and explaining a wide variety of phenomena in visual selection and recognition with high quantitative precision. Interestingly, the influence of feature contrast on attention has been included in TVA only recently, although it has been extensively studied outside the TVA framework. The present approach further develops this extension of TVA's scope by measuring and modeling salience. An empirical measure of salience is achieved by linking different (orientation and luminance) contrasts to a TVA parameter. In the modeling part, the function relating feature contrasts to salience is described mathematically and tested against alternatives by Bayesian model comparison. This model comparison reveals that the power function is an appropriate model of salience growth in the dimensions of orientation and luminance contrast. Furthermore, if contrasts from the two dimensions are combined, salience adds up additively.
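The reported power-function relation between feature contrast and salience can be illustrated with a simple least-squares fit; the data below are synthetic stand-ins for the empirically measured TVA salience parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(contrast, a, b):
    # Candidate salience model from the study: salience = a * contrast**b.
    return a * contrast ** b

# Synthetic orientation-contrast data (degrees) with mock salience values.
contrast = np.array([5, 10, 20, 40, 80], dtype=float)
rng = np.random.default_rng(2)
salience = 0.8 * contrast ** 0.4 + rng.normal(scale=0.05, size=5)

(a, b), _ = curve_fit(power_law, contrast, salience, p0=(1.0, 0.5))
print(f"salience ~ {a:.2f} * contrast^{b:.2f}")
```

The paper's actual comparison is Bayesian, pitting the power function against alternative growth functions rather than simply fitting one curve; this sketch shows only the functional form being compared.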
Comparisons of the Standard Galaxy Model with observations in two fields
NASA Technical Reports Server (NTRS)
Bahcall, J. N.; Ratnatunga, K. U.
1985-01-01
The Bahcall-Soneira (1984) model for the distribution of stars in the Galaxy is compared with the observations reported by Gilmore, Reid, and Hewett (1984) in two directions in the sky: the pole and the Morton-Tritton (1982) region. It is shown that the Galaxy model is in good agreement with the observations everywhere it has been tested with modern data, including the magnitude range V = 17-18, provided that the globular cluster feature is included in the luminosity function of the field Population II stars.
Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.
2015-01-01
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
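A sketch of the voxel-wise modeling (VM) procedure described: fit one regularized linear encoding model per voxel, then score it by variance explained on withheld data. Dimensions, the ridge penalty, and the noise level are illustrative assumptions, not those of the experiment.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in for the encoding-model setup: stimulus features
# (e.g., Fourier power per image) predicting per-voxel BOLD responses.
rng = np.random.default_rng(11)
n_train, n_test, n_feat, n_vox = 1000, 386, 50, 200
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
Y_train = X_train @ W_true + rng.normal(scale=5, size=(n_train, n_vox))
Y_test = X_test @ W_true + rng.normal(scale=5, size=(n_test, n_vox))

model = Ridge(alpha=10.0).fit(X_train, Y_train)   # one weight vector per voxel
pred = model.predict(X_test)

# Per-voxel R^2 on the withheld set: the currency of model comparison in VM.
ss_res = ((Y_test - pred) ** 2).sum(axis=0)
ss_tot = ((Y_test - Y_test.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(r2.mean())
```

Comparing competing hypotheses then amounts to swapping in different feature matrices (Fourier power, object distance, object categories) and asking which predicts more withheld variance, and how much of that variance is shared.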
ERIC Educational Resources Information Center
Fox, William
2012-01-01
The purpose of our modeling effort is to predict future outcomes. We assume the data collected are both accurate and relatively precise. For our oscillating data, we examined several mathematical modeling forms for predictions. We also examined both ignoring the oscillations as an important feature and including the oscillations as an important…
Study of Varying Boundary Layer Height on Turret Flow Structures
2011-06-01
fluid dynamics. The difficulties of the problem arise in modeling several complex flow features including separation, reattachment, three-dimensional...impossible. In this case, the approach is to create a model to calculate the properties of interest. The main issue with resolving turbulent flows...operation and their effect is modeled through subgrid-scale models. As a result, the most important turbulent scales are resolved and the
Library of Advanced Materials for Engineering (LAME) 4.44.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherzinger, William M.; Lester, Brian T.
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAME advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility, however, comes at the cost of complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAME library in application, this effort seeks to document and verify the various models in the LAME library. Specifically, the broader strategy, organization, and interface of the library itself is first presented. The physical theory, numerical implementation, and user guide for a large set of models is then discussed. Importantly, a number of verification tests are performed with each model to not only build confidence in the model itself but also highlight some important response characteristics and features that may be of interest to end-users. Finally, looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
Library of Advanced Materials for Engineering (LAME) 4.48.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherzinger, William M.; Lester, Brian T.
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAME advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility, however, comes at the cost of complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAME library in application, this effort seeks to document and verify the various models in the LAME library. Specifically, the broader strategy, organization, and interface of the library itself is first presented. The physical theory, numerical implementation, and user guide for a large set of models is then discussed. Importantly, a number of verification tests are performed with each model to not only build confidence in the model itself but also highlight some important response characteristics and features that may be of interest to end-users. Finally, looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
Models of clinical reasoning with a focus on general practice: A critical review
YAZDANI, SHAHRAM; HOSSEINZADEH, MOHAMMAD; HOSSEINI, FAKHROLSADAT
2017-01-01
Introduction: Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from the specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. Therefore, we aimed to critically review the clinical reasoning models with a focus on clinical reasoning in general practice or the clinical reasoning of general practitioners, to find out to what extent the existing models explain clinical reasoning especially in primary care, and to identify the gaps in these models for use in primary care settings. Methods: A systematic search for models of clinical reasoning was performed. For more precision, we excluded studies that focused on neurobiological aspects of reasoning, reasoning in disciplines other than medicine, or decision making or decision analysis on treatment or management plans. All the articles and documents were first scanned to see whether they included important relevant content or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed, and appraisals or critiques of these models by other authors were included. The reviewed documents on the models were synthesized. Results: Six models of clinical reasoning were identified, including the hypothetico-deductive model, pattern recognition, a dual-process diagnostic reasoning model, a pathway for clinical reasoning, an integrative model of clinical reasoning, and a model of diagnostic reasoning strategies in primary care. Only one model specifically focused on general practitioners' reasoning. Conclusion: A model of clinical reasoning that incorporates the specific features of general practice is needed to better support general practitioners with the difficulties of clinical reasoning in this setting. PMID:28979912
Scribner, Elizabeth; Fathallah-Shaykh, Hassan M
2017-01-01
Glioblastoma (GBM) is a malignant brain tumor that continues to be associated with neurological morbidity and poor survival times. Brain invasion is a fundamental property of malignant glioma cells. The Go-or-Grow (GoG) phenotype proposes that cancer cell motility and proliferation are mutually exclusive. Here, we construct and apply a mathematical model of single glioma cells that includes motility and angiogenesis but lacks the GoG phenotype. Simulations replicate key features of GBM, including its multilayer structure (i.e., edema, enhancement, and necrosis) and its progression patterns associated with bevacizumab treatment, and they reproduce the survival times of patients with GBM treated or untreated with bevacizumab. These results suggest that the GoG phenotype is not a necessary property for the formation of the multilayer structure, the recurrence patterns, or the poor survival times of patients diagnosed with GBM.
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat.
Breen, Mara
2018-05-01
Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes.
Mahmoudabadi, Ebrahim; Karimi, Alireza; Haghnia, Gholam Hosain; Sepehr, Adel
2017-09-11
Digital soil mapping has been introduced as a viable alternative to traditional mapping methods because it is fast and cost-effective. The objective of the present study was to investigate the capability of vegetation features and spectral indices as auxiliary variables in digital soil mapping models to predict soil properties. A region with an area of 1225 ha located in the Bajgiran rangelands, Khorasan Razavi province, northeastern Iran, was chosen. A total of 137 sampling sites, each containing 3-5 plots with 10-m interval distance along a transect established based on a randomized-systematic method, were investigated. In each plot, plant species names and numbers as well as vegetation cover percentage (VCP) were recorded, and finally one composite soil sample was taken from each transect at each site (137 soil samples in total). Terrain attributes were derived from a digital elevation model, different bands and spectral indices were obtained from the Landsat7 ETM+ images, and vegetation features were calculated in the plots, all of which were used as auxiliary variables to predict soil properties using artificial neural network, gene expression programming, and multivariate linear regression models. According to the R², RMSE, and MBE values, the artificial neural network was the most accurate prediction function for soil properties in the scorpan model. Vegetation features and indices were more effective than remotely sensed data and terrain attributes in predicting soil properties including calcium carbonate equivalent, clay, bulk density, total nitrogen, carbon, sand, silt, and saturated moisture capacity. It was also shown that vegetation indices including NDVI, SAVI, MSAVI, SARVI, RDVI, and DVI were more effective in estimating the majority of soil properties compared to separate bands and even some soil spectral indices.
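A hedged sketch of the best-performing configuration (an artificial neural network predicting one soil property from vegetation indices and terrain attributes), using scikit-learn on synthetic stand-in data; the network architecture and feature count are assumptions, as the abstract does not specify them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: vegetation indices and terrain attributes as predictors
# of one soil property (e.g., calcium carbonate equivalent); 137 sites as in
# the study, feature count illustrative.
rng = np.random.default_rng(9)
X = rng.normal(size=(137, 8))           # NDVI, SAVI, elevation, slope, ...
y = X[:, 0] * 2 + X[:, 2] + rng.normal(scale=0.5, size=137)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=0))
print(cross_val_score(ann, X, y, cv=5, scoring="r2").mean())   # R² estimate
```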
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. Results from other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection method. Copyright © 2011 Elsevier B.V. All rights reserved.
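As a hedged illustration of one of the methods compared (interval PLS), the following sketch ranks wavelength intervals by cross-validated error; the file names, interval count, and component count are assumptions, not the paper's settings:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    X = np.load("nir_spectra.npy")   # (samples x wavelengths), hypothetical
    y = np.load("viscosity.npy")     # property to calibrate

    rmsecv = []
    for idx in np.array_split(np.arange(X.shape[1]), 16):   # 16 intervals
        pred = cross_val_predict(PLSRegression(n_components=5),
                                 X[:, idx], y, cv=10).ravel()
        rmsecv.append(np.sqrt(np.mean((pred - y) ** 2)))    # RMSECV per interval

    best = int(np.argmin(rmsecv))
    print("best interval:", best, "RMSECV:", rmsecv[best])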
Interictal epileptiform discharge characteristics underlying expert interrater agreement.
Bagheri, Elham; Dauwels, Justin; Dean, Brian C; Waters, Chad G; Westover, M Brandon; Halford, Jonathan J
2017-10-01
The presence of interictal epileptiform discharges (IED) in the electroencephalogram (EEG) is a key finding in the medical workup of a patient with suspected epilepsy. However, inter-rater agreement (IRA) regarding the presence of IED is imperfect, leading to incorrect and delayed diagnoses. An improved understanding of which IED attributes mediate expert IRA might help in developing automatic methods for IED detection able to emulate the abilities of experts. Therefore, using a set of IED scored by a large number of experts, we set out to determine which attributes of IED predict expert agreement regarding the presence of IED. IED were annotated on a 5-point scale by 18 clinical neurophysiologists within 200 30-s EEG segments from recordings of 200 patients. 5538 signal analysis features were extracted from the waveforms, including wavelet coefficients, morphological features, signal energy, nonlinear energy operator response, electrode location, and spectrogram features. Feature selection was performed by applying elastic net regression, and support vector regression (SVR) was applied to predict expert opinion, with and without the feature selection procedure and with and without several types of signal normalization. Multiple types of features were useful for predicting expert annotations, but particular types of wavelet features performed best. Local EEG normalization also enhanced best model performance. As the size of the group of EEGers used to train the models was increased, the performance of the models leveled off at a group size of around 11. The features that best predict inter-rater agreement among experts regarding the presence of IED are wavelet features, using locally standardized EEG. Our models for predicting expert opinion based on EEGers' scores perform best with a large group of EEGers (more than 10). By examining a large set of EEG signal analysis features, we found that wavelet features with certain basis functions performed best at identifying IEDs. Local normalization also improves predictability, suggesting the importance of IED morphology over amplitude-based features. Although most IED detection studies in the past have used opinion from three or fewer experts, our study suggests a "wisdom of the crowd" effect, such that pooling over a larger number of expert opinions produces a better correlation between expert opinion and objectively quantifiable features of the EEG. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
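The two-stage pipeline described above (elastic net selection followed by SVR) could be sketched roughly as follows; the arrays and file names are hypothetical placeholders, not the study's data:

    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    X = np.load("ied_features.npy")        # (waveforms x 5538), hypothetical
    y = np.load("mean_expert_score.npy")   # pooled 1-5 expert ratings

    enet = ElasticNetCV(cv=5).fit(X, y)
    kept = np.flatnonzero(enet.coef_)      # features with nonzero weights
    print("kept", kept.size, "of", X.shape[1], "features")

    print("cross-validated R2:",
          cross_val_score(SVR(), X[:, kept], y, cv=5, scoring="r2").mean())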
Multiresolution texture models for brain tumor segmentation in MRI.
Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir
2011-01-01
In this study we discuss different types of texture features such as fractal dimension (FD) and multifractional Brownian motion (mBm) for estimating random structures and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques including Kullback-Leibler divergence (KLD) for ranking different texture and intensity features. We then exploit graph cut, self-organizing maps (SOM) and expectation maximization (EM) techniques to fuse selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate the quality and robustness of these selected features for tumor segmentation in MRI of real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme by using improved AdaBoost classification based on these image features.
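A rough sketch of Kullback-Leibler feature ranking of the kind described, assuming hypothetical arrays of per-voxel feature values from tumor and normal tissue; the histogram binning and smoothing constant are illustrative choices:

    import numpy as np
    from scipy.stats import entropy

    tumor = np.load("tumor_features.npy")    # (voxels x features), hypothetical
    normal = np.load("normal_features.npy")

    def kld_score(a, b, bins=64):
        lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
        p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
        return entropy(p + 1e-12, q + 1e-12)   # KL(p || q), smoothed

    scores = [kld_score(tumor[:, j], normal[:, j]) for j in range(tumor.shape[1])]
    print("features ranked by divergence:", np.argsort(scores)[::-1])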
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small- and large-scale thermal features to salmon populations has been challenged both by the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions and by the difficulty of integrating those regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
Piepers, Daniel W.; Robbins, Rachel A.
2012-01-01
It is widely agreed that the human face is processed differently from other objects. However there is a lack of consensus on what is meant by a wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face. PMID:23413184
Early Admissions at Selective Colleges. NBER Working Paper No. 14844
ERIC Educational Resources Information Center
Avery, Christopher; Levin, Jonathan D.
2009-01-01
Early admissions is widely used by selective colleges and universities. We identify some basic facts about early admissions policies, including the admissions advantage enjoyed by early applicants and patterns in application behavior, and propose a game-theoretic model that matches these facts. The key feature of the model is that colleges want to…
A modeling framework for life history-based conservation planning
Eileen S. Burns; Sandor F. Toth; Robert G. Haight
2013-01-01
Reserve site selection models can be enhanced by including habitat conditions that populations need for food, shelter, and reproduction. We present a new population protection function that determines whether minimum areas of land with desired habitat features are present within the desired spatial conditions in the protected sites. Embedding the protection function as...
Applications of the Functional Writing Model in Technical and Professional Writing.
ERIC Educational Resources Information Center
Brostoff, Anita
The functional writing model is a method by which students learn to devise and organize a written argument. Salient features of functional writing include the organizing idea (a component that logically unifies a paragraph or sequence of paragraphs), the reader's frame of reference, forecasting (prediction of the sequence by which the organizing…
A Model for Effective Implementation of Flexible Programme Delivery
ERIC Educational Resources Information Center
Normand, Carey; Littlejohn, Allison; Falconer, Isobel
2008-01-01
The model developed here is the outcome of a project funded by the Quality Assurance Agency Scotland to support implementation of flexible programme delivery (FPD) in post-compulsory education. We highlight key features of FPD, including explicit and implicit assumptions about why flexibility is needed and the perceived barriers and solutions to…
NASA Astrophysics Data System (ADS)
Ma, L.; Zhou, M.; Li, C.
2017-09-01
In this study, a Random Forest (RF) based land cover classification method is presented to predict the land cover types in the Miyun area. The returned full waveforms, acquired by a LiteMapper 5600 airborne LiDAR system, were processed through waveform filtering, waveform decomposition and feature extraction. The commonly used features of distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis were extracted. These waveform features were used as attributes of training data for generating the RF prediction model. The RF prediction model was applied to predict the land cover types in the Miyun area as trees, buildings, farmland and ground. The classification results for these four land cover types were evaluated against ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance. The RF classification accuracy reached 89.73% and the classification kappa was 0.8631.
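The RF classification step might look roughly like this in Python; the input files, estimator count, and train/test split are illustrative assumptions rather than the study's configuration:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    X = np.load("waveform_features.npy")   # (echoes x 5 features), hypothetical
    y = np.load("land_cover_labels.npy")   # trees/buildings/farmland/ground

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)

    pred = rf.predict(X_te)
    print("accuracy:", accuracy_score(y_te, pred))
    print("kappa:", cohen_kappa_score(y_te, pred))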
The display of molecular models with the Ames Interactive Modeling System (AIMS)
NASA Technical Reports Server (NTRS)
Egan, J. T.; Hart, J.; Burt, S. K.; Macelroy, R. D.
1982-01-01
A visualization of molecular models can lead to a clearer understanding of the models. Sophisticated graphics devices supported by minicomputers make it possible for the chemist to interact with the display of a very large model, altering its structure. In addition to user interaction, the need also arises for other ways of displaying information. These include the production of viewgraphs and film presentations, as well as publication-quality prints of various models. To satisfy these needs, the display capability of the Ames Interactive Modeling System (AIMS) has been enhanced to provide a wide range of graphics and plotting capabilities. Attention is given to an overview of the AIMS system, the graphics hardware used by the AIMS display subsystem, a comparison of graphics hardware, the representation of molecular models, the graphics software used by the AIMS display subsystem, the display of a model obtained from data stored in a molecule database, a graphics feature for obtaining single-frame permanent-copy displays, and a feature for producing multiple-frame displays.
Optical-Correlator Neural Network Based On Neocognitron
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Stoner, William W.
1994-01-01
A multichannel optical correlator implements a shift-invariant, high-discrimination pattern-recognizing neural network based on the paradigm of the neocognitron. The neocognitron was selected as the basic building block of this neural network because invariance under shifts is an inherent advantage of the Fourier optics included in optical correlators in general. The neocognitron is a conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. The neural network is trained by use of characteristic features extracted from target images. The multichannel implementation enables parallel processing of a large number of selected features.
Dependency-based long short term memory network for drug-drug interaction extraction.
Wang, Wei; Yang, Xi; Yang, Canqun; Guo, Xiaowei; Zhang, Xiang; Wu, Chengkun
2017-12-28
Drug-drug interaction (DDI) extraction needs assistance from automated methods to cope with the explosive growth of biomedical texts. In recent years, deep neural network based models have been developed to address such needs and they have made significant progress in relation identification. We propose a dependency-based deep neural network model for DDI extraction. By introducing the dependency-based technique to a bi-directional long short term memory network (Bi-LSTM), we build three channels, namely, a Linear channel, a DFS channel and a BFS channel. Each channel is constructed from three network layers: an embedding layer, an LSTM layer and a max pooling layer, from bottom up. In the embedding layer, we extract two types of features, one distance-based and the other dependency-based. In the LSTM layer, a Bi-LSTM is employed in each channel to better capture relation information. Max pooling is then used to extract the most informative features from the entire encoded sequence. At last, we concatenate the outputs of all channels and link them to the softmax layer for relation identification. To the best of our knowledge, our model achieves new state-of-the-art performance with an F-score of 72.0% on the DDIExtraction 2013 corpus. Moreover, our approach obtains a much higher Recall value compared to the existing methods. The dependency-based Bi-LSTM model can learn effective relation information with less feature engineering in the task of DDI extraction. Besides, the experimental results show that our model excels at balancing the Precision and Recall values.
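A minimal PyTorch sketch of one such channel (embedding, Bi-LSTM, max pooling over the sequence), with invented sizes; the actual model concatenates three channels and adds dependency-based inputs:

    import torch
    import torch.nn as nn

    class Channel(nn.Module):
        def __init__(self, vocab=10000, emb=100, hidden=128, classes=5):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, classes)

        def forward(self, tokens):                 # tokens: (batch, seq_len)
            h, _ = self.lstm(self.emb(tokens))     # (batch, seq_len, 2*hidden)
            pooled, _ = h.max(dim=1)               # max pooling over the sequence
            return self.out(pooled)                # logits for DDI relation types

    logits = Channel()(torch.randint(0, 10000, (8, 40)))
    print(logits.shape)   # torch.Size([8, 5])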
Hand ultrasound: a high-fidelity simulation of lung sliding.
Shokoohi, Hamid; Boniface, Keith
2012-09-01
Simulation training has been effectively used to integrate didactic knowledge and technical skills in emergency and critical care medicine. In this article, we introduce a novel model of simulating lung ultrasound and the features of lung sliding and pneumothorax by performing a hand ultrasound. The simulation model involves scanning the palmar aspect of the hand to create normal lung sliding in varying modes of scanning and to mimic ultrasound features of pneumothorax, including "stratosphere/barcode sign" and "lung point." The simple, reproducible, and readily available simulation model we describe demonstrates a high-fidelity simulation surrogate that can be used to rapidly illustrate the signs of normal and abnormal lung sliding at the bedside. © 2012 by the Society for Academic Emergency Medicine.
Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano
2018-04-11
Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.
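As a simple illustration of threshold-based PET delineation — a baseline in the spirit of, but simpler than, the AT method studied — consider a fixed fraction of SUVmax inside a search region; the arrays, voxel size, and 40% fraction are assumptions for illustration:

    import numpy as np

    suv = np.load("pet_suv_volume.npy")                   # 3D SUV array
    roi = np.load("search_region_mask.npy").astype(bool)  # region around tumour

    threshold = 0.4 * suv[roi].max()      # 40% of SUVmax within the region
    tumour = roi & (suv >= threshold)
    print("metabolic tumour volume (ml):", tumour.sum() * 0.001)  # 1 mm^3 voxels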
Detection of reflecting surfaces by a statistical model
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Chee-Hung H.
2009-02-01
Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. How to automatically extract useful features and segment interesting objects from digital images, including remote sensing imagery, is therefore a critical task for image understanding. Unfortunately, current research on automated feature extraction largely ignores contextual information. As a result, attributes for features and objects of interest cannot be populated with satisfactory fidelity. In this paper, we explore meaningful object extraction that integrates the detection of reflecting surfaces. Detection of specular reflecting surfaces can be useful in target identification and can then be applied to environmental monitoring, disaster prediction and analysis, military operations, and counter-terrorism. Our method is based on a statistical model that captures the statistical properties of specular reflecting surfaces. The reflecting surfaces are then detected through cluster analysis.
Murata, Chiharu; Ramírez, Ana Belén; Ramírez, Guadalupe; Cruz, Alonso; Morales, José Luis; Lugo-Reyes, Saul Oswaldo
2015-01-01
The features in a clinical history from a patient with suspected primary immunodeficiency (PID) direct the differential diagnosis through pattern recognition. PIDs are a heterogeneous group of more than 250 congenital diseases with increased susceptibility to infection, inflammation, autoimmunity, allergy and malignancy. Linear discriminant analysis (LDA) is a multivariate supervised classification method that sorts objects of study into groups by finding linear combinations of a number of variables. Our objective was to identify the features that best explain the membership of pediatric PID patients in a given defect group or disease. An analytic cross-sectional study was done with a pre-existing database of clinical and laboratory records from 168 patients with PID followed at the National Institute of Pediatrics during 1991-2012; the database was used to build linear discriminant models to explain the membership of each patient in the different defect groups and in the most prevalent PIDs in our registry. After a preliminary run only 30 features were included (4 demographic, 10 clinical, 10 laboratory, 6 germs), with which the training models were developed through a stepwise regression algorithm. We compared the automatic feature selection with a selection made by a human expert, and then assessed the diagnostic usefulness of the resulting models (sensitivity, specificity, prediction accuracy and kappa coefficient), with 95% confidence intervals. The models incorporated 6 to 14 features to explain membership of PID patients in the five most abundant defect groups (combined, antibody, well-defined, dysregulation and phagocytosis), and in the four most prevalent PID diseases (X-linked agammaglobulinemia, chronic granulomatous disease, common variable immunodeficiency and ataxia-telangiectasia). In practically all cases of feature selection the machine outperformed the human expert. Diagnosis prediction using the equations created had a global accuracy of 83 to 94%, with sensitivity of 60 to 100%, specificity of 83 to 95% and kappa coefficient of 0.37 to 0.76. In general, the selected features have clinical plausibility, and the practical advantage of relying only on clinical attributes, infecting germs and routine lab results (blood cell counts and serum immunoglobulins). The performance of the model as a diagnostic tool was acceptable. The study's main limitations are a limited sample size and a lack of cross-validation. This is only the first step in the construction of a machine learning system, with a wider approach that includes a larger database and different methodologies, to assist the clinical diagnosis of primary immunodeficiencies.
Helium in double-detonation models of type Ia supernovae
NASA Astrophysics Data System (ADS)
Boyle, Aoife; Sim, Stuart A.; Hachinger, Stephan; Kerzendorf, Wolfgang
2017-03-01
The double-detonation explosion model has been considered a candidate for explaining astrophysical transients with a wide range of luminosities. In this model, a carbon-oxygen white dwarf star explodes following detonation of a surface layer of helium. One potential signature of this explosion mechanism is the presence of unburned helium in the outer ejecta, left over from the surface helium layer. In this paper we present simple approximations to estimate the optical depths of important He I lines in the ejecta of double-detonation models. We use these approximations to compute synthetic spectra, including the He I lines, for double-detonation models obtained from hydrodynamical explosion simulations. Specifically, we focus on photospheric-phase predictions for the near-infrared 10 830 Å and 2 μm lines of He I. We first consider a double detonation model with a luminosity corresponding roughly to normal SNe Ia. This model has a post-explosion unburned He mass of 0.03 M⊙ and our calculations suggest that the 2 μm feature is expected to be very weak but that the 10 830 Å feature may have modest opacity in the outer ejecta. Consequently, we suggest that a moderate-to-weak He I 10 830 Å feature may be expected to form in double-detonation explosions at epochs around maximum light. However, the high velocities of unburned helium predicted by the model (~19 000 km s⁻¹) mean that the He I 10 830 Å feature may be confused or blended with the C I 10 690 Å line forming at lower velocities. We also present calculations for the He I 10 830 Å and 2 μm lines for a lower mass (low luminosity) double detonation model, which has a post-explosion He mass of 0.077 M⊙. In this case, both the He I features we consider are strong and can provide a clear observational signature of the double-detonation mechanism.
NASA Astrophysics Data System (ADS)
Folkert, Michael R.; Setton, Jeremy; Apte, Aditya P.; Grkovski, Milan; Young, Robert J.; Schöder, Heiko; Thorstad, Wade L.; Lee, Nancy Y.; Deasy, Joseph O.; Oh, Jung Hun
2017-07-01
In this study, we investigate the use of imaging feature-based outcomes research (‘radiomics’) combined with machine learning techniques to develop robust predictive models for the risk of all-cause mortality (ACM), local failure (LF), and distant metastasis (DM) following definitive chemoradiation therapy (CRT). One hundred seventy four patients with stage III-IV oropharyngeal cancer (OC) treated at our institution with CRT with retrievable pre- and post-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans were identified. From pre-treatment PET scans, 24 representative imaging features of FDG-avid disease regions were extracted. Using machine learning-based feature selection methods, multiparameter logistic regression models were built incorporating clinical factors and imaging features. All model building methods were tested by cross-validation to avoid overfitting, and final outcome models were validated on an independent dataset from a collaborating institution. Multiparameter models were statistically significant on 5-fold cross-validation with the area under the receiver operating characteristic curve (AUC) = 0.65 (p = 0.004), 0.73 (p = 0.026), and 0.66 (p = 0.015) for ACM, LF, and DM, respectively. The model for LF retained significance on the independent validation cohort with AUC = 0.68 (p = 0.029) whereas the models for ACM and DM did not reach statistical significance, but resulted in comparable predictive power to the 5-fold cross-validation with AUC = 0.60 (p = 0.092) and 0.65 (p = 0.062), respectively. In the largest study of its kind to date, predictive features including increasing metabolic tumor volume, increasing image heterogeneity, and increasing tumor surface irregularity significantly correlated to mortality, LF, and DM on 5-fold cross-validation in a relatively uniform single-institution cohort. The LF model also retained significance in an independent population.
Access to a Schoolwide Thinking Curriculum: Leadership Challenges and Solutions.
ERIC Educational Resources Information Center
Morocco, Catherine Cobb; Walker, Andrea; Lewis, Leslie R.
2003-01-01
This article discusses how an urban middle school designed to reflect a Schools for Thought model has demonstrated that urban schools can achieve excellent results on statewide testing for all students, including those with disabilities. Key school features are highlighted, including the use of "cross-talk" to stimulate discussion and student…
Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.
2011-01-01
Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improving on a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
Chaddad, Ahmad; Daniel, Paul; Niazi, Tamim
2018-01-01
Colorectal cancer (CRC) is markedly heterogeneous and develops progressively toward malignancy through several stages which include stroma (ST), benign hyperplasia (BH), intraepithelial neoplasia (IN) or precursor cancerous lesion, and carcinoma (CA). Identification of the malignancy stage of CRC pathology tissues (PT) allows the most appropriate therapeutic intervention. This study investigates multiscale texture features extracted from CRC pathology sections using a 3D wavelet transform (3D-WT) filter. Multiscale features were extracted from digital whole slide images of 39 patients that were segmented in a pre-processing step using an active contour model. The capacity of multiscale texture to compare and classify between PTs was investigated using the ANOVA significance test and random forest classifier models, respectively. 12 significant features derived from the multiscale texture (i.e., variance, entropy, and energy) were found to discriminate between CRC grades at a significance value of p < 0.01 after correction. Combining multiscale texture features led to a better predictive capacity compared to prediction models based on individual scale features, with an average (±SD) classification accuracy of 93.33 (±3.52)%, sensitivity of 88.33 (±4.12)%, and specificity of 96.89 (±3.88)%. Entropy was found to be the best classifier feature across all the PT grades, with average area under the curve (AUC) values of 91.17%, 94.21%, 97.70%, and 100% for ST, BH, IN, and CA, respectively. Our results suggest that multiscale texture features based on 3D-WT are sensitive enough to discriminate between CRC grades, with entropy the best predictor of pathology grade.
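A sketch of extracting variance, energy, and entropy from one level of a 3D wavelet decomposition using PyWavelets; the patch file and Haar basis are assumptions for illustration, not the study's exact settings:

    import numpy as np
    import pywt

    patch = np.load("tissue_patch.npy")   # 3D grayscale block, hypothetical
    coeffs = pywt.dwtn(patch, "haar")     # one 3D-WT level: 8 subbands

    features = {}
    for name, band in coeffs.items():
        p = np.abs(band).ravel()
        p = p / (p.sum() + 1e-12)         # normalized coefficient distribution
        features[name + "_variance"] = float(band.var())
        features[name + "_energy"] = float(np.sum(band ** 2))
        features[name + "_entropy"] = float(-np.sum(p * np.log2(p + 1e-12)))
    print(features)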
Constant size descriptors for accurate machine learning models of molecular properties
NASA Astrophysics Data System (ADS)
Collins, Christopher R.; Gordon, Geoffrey J.; von Lilienfeld, O. Anatole; Yaron, David J.
2018-06-01
Two different classes of molecular representations for use in machine learning of thermodynamic and electronic properties are studied. The representations are evaluated by monitoring the performance of linear and kernel ridge regression models on well-studied data sets of small organic molecules. One class of representations studied here counts the occurrence of bonding patterns in the molecule. These require only the connectivity of atoms in the molecule as may be obtained from a line diagram or a SMILES string. The second class utilizes the three-dimensional structure of the molecule. These include the Coulomb matrix and Bag of Bonds, which list the inter-atomic distances present in the molecule, and Encoded Bonds, which encode such lists into a feature vector whose length is independent of molecular size. Encoded Bonds' features introduced here have the advantage of leading to models that may be trained on smaller molecules and then used successfully on larger molecules. A wide range of feature sets are constructed by selecting, at each rank, either a graph or geometry-based feature. Here, rank refers to the number of atoms involved in the feature, e.g., atom counts are rank 1, while Encoded Bonds are rank 2. For atomization energies in the QM7 data set, the best graph-based feature set gives a mean absolute error of 3.4 kcal/mol. Inclusion of 3D geometry substantially enhances the performance, with Encoded Bonds giving 2.4 kcal/mol, when used alone, and 1.19 kcal/mol, when combined with graph features.
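The Coulomb matrix representation can be sketched as follows; the padding size, toy geometry, and kernel settings are illustrative, not the paper's configuration:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def coulomb_matrix(Z, R, size=5):
        """Z: nuclear charges; R: (n, 3) positions in Angstrom; zero-padded."""
        n = len(Z)
        M = np.zeros((size, size))
        for i in range(n):
            for j in range(n):
                if i == j:
                    M[i, j] = 0.5 * Z[i] ** 2.4   # usual diagonal convention
                else:
                    M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
        return M[np.triu_indices(size)]           # flattened upper triangle

    # Toy molecule (water); a real model trains on thousands of such vectors.
    Z = np.array([8.0, 1.0, 1.0])
    R = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
    model = KernelRidge(kernel="laplacian", alpha=1e-8)
    # model.fit(X, y) on many such descriptors could predict, e.g.,
    # atomization energies; here we only print the representation.
    print(coulomb_matrix(Z, R).round(2))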
Krishna, B. Suresh; Treue, Stefan
2016-01-01
Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing
Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin
2016-01-01
A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
Scalzo, Fabien; Alger, Jeffry R; Hu, Xiao; Saver, Jeffrey L; Dani, Krishna A; Muir, Keith W; Demchuk, Andrew M; Coutts, Shelagh B; Luby, Marie; Warach, Steven; Liebeskind, David S
2013-07-01
Permeability images derived from magnetic resonance (MR) perfusion images are sensitive to blood-brain barrier derangement of the brain tissue and have been shown to correlate with subsequent development of hemorrhagic transformation (HT) in acute ischemic stroke. This paper presents a multi-center retrospective study that evaluates the power of six permeability MRI measures to predict HT, including contrast slope (CS), final contrast (FC), maximum peak bolus concentration (MPB), peak bolus area (PB), relative recirculation (rR), and percentage recovery (%R). Dynamic T2*-weighted perfusion MR images were collected from 263 acute ischemic stroke patients from four medical centers. An essential aspect of this study is to exploit a classifier-based framework to automatically identify predictive patterns in the overall intensity distribution of the permeability maps. The model is based on normalized intensity histograms that are used as input features to the predictive model. Linear and nonlinear predictive models are evaluated using cross-validation to measure generalization power on new patients, and a comparative analysis is provided for the different types of parameters. Results demonstrate that perfusion imaging in acute ischemic stroke can predict HT with an average accuracy of more than 85% using a predictive model based on a nonlinear regression model. Results also indicate that the permeability feature based on the percentage of recovery performs significantly better than the other features. This novel model may be used to refine treatment decisions in acute stroke. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, D; Aryal, M; Samuels, S
Purpose: A previous study showed that large sub-volumes of tumor with low blood volume (BV) (poorly perfused) in head-and-neck (HN) cancers are significantly associated with local-regional failure (LRF) after chemoradiation therapy, and could be targeted with intensified radiation doses. This study aimed to develop an automated and scalable model to extract voxel-wise contrast-enhanced temporal features of dynamic contrast-enhanced (DCE) MRI in HN cancers for predicting LRF. Methods: Our model development consists of training and testing stages. The training stage includes preprocessing of individual-voxel DCE curves from tumors for intensity normalization and temporal alignment, temporal feature extraction from the curves, feature selection, and training classifiers. For feature extraction, multiresolution Haar discrete wavelet transformation is applied to each DCE curve to capture temporal contrast-enhanced features. The wavelet coefficients as feature vectors are selected. Support vector machine classifiers are trained to classify tumor voxels having either low or high BV, for which a BV threshold of 7.6% is previously established and used as ground truth. The model is tested on a new dataset. The voxel-wise DCE curves for training and testing were from 14 and 8 patients, respectively. A posterior probability map of the low BV class was created to examine the tumor sub-volume classification. Voxel-wise classification accuracy was computed to evaluate performance of the model. Results: Average classification accuracies were 87.2% for training (10-fold cross-validation) and 82.5% for testing. The lowest and highest accuracies (patient-wise) were 68.7% and 96.4%, respectively. Posterior probability maps of the low BV class showed the sub-volumes extracted by our model to be similar to ones defined by the BV maps, with most misclassifications occurring near the sub-volume boundaries. Conclusion: This model could be valuable to support adaptive clinical trials with further validation. The framework could be extendable and scalable to extract temporal contrast-enhanced features of DCE-MRI in other tumors. We would like to acknowledge NIH for funding support: UO1 CA183848.
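A voxel-wise sketch of the pipeline described (Haar wavelet coefficients as features, an SVM classifying low versus high BV); the file names and decomposition level are assumptions:

    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    curves = np.load("dce_curves.npy")   # (voxels x timepoints), hypothetical
    labels = np.load("bv_labels.npy")    # 1 = low BV (< 7.6%), 0 = high BV

    def haar_features(curve, level=3):
        coeffs = pywt.wavedec(curve, "haar", level=level)
        return np.concatenate(coeffs)    # multiresolution coefficient vector

    X = np.array([haar_features(c) for c in curves])
    print(cross_val_score(SVC(), X, labels, cv=10).mean())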
Kim, Hae Young; Park, Ji Hoon; Lee, Yoon Jin; Lee, Sung Soo; Jeon, Jong-June; Lee, Kyoung Ho
2018-04-01
Purpose To perform a systematic review and meta-analysis to identify computed tomographic (CT) features for differentiating complicated appendicitis in patients suspected of having appendicitis and to summarize their diagnostic accuracy. Materials and Methods Studies on diagnostic accuracy of CT features for differentiating complicated appendicitis (perforated or gangrenous appendicitis) in patients suspected of having appendicitis were searched in Ovid-MEDLINE, EMBASE, and the Cochrane Library. Overlapping descriptors used in different studies to denote the same image finding were subsumed under a single CT feature. Pooled diagnostic accuracy of the CT features was calculated by using a bivariate random effects model. CT features with pooled diagnostic odds ratios with 95% confidence intervals not including 1 were considered informative. Results Twenty-three studies were included, and 184 overlapping descriptors for various CT findings were subsumed under 14 features. Of these, 10 features were informative for complicated appendicitis. There was a general tendency for these features to show relatively high specificity but low sensitivity. Extraluminal appendicolith, abscess, appendiceal wall enhancement defect, extraluminal air, ileus, periappendiceal fluid collection, ascites, intraluminal air, and intraluminal appendicolith showed pooled specificity greater than 70% (range, 74%-100%), but sensitivity was limited (range, 14%-59%). Periappendiceal fat stranding was the only feature that showed high sensitivity (94%; 95% confidence interval: 86%, 98%) but low specificity (40%; 95% confidence interval: 23%, 60%). Conclusion Ten informative CT features for differentiating complicated appendicitis were identified in this study, nine of which showed high specificity, but low sensitivity. © RSNA, 2017 Online supplemental material is available for this article.
An investigation of the astronomical theory of the ice ages using a simple climate-ice sheet model
NASA Technical Reports Server (NTRS)
Pollard, D.
1978-01-01
The astronomical theory of the Quaternary ice ages is incorporated into a simple climate model for global weather; important features of the model include the albedo feedback, topography and dynamics of the ice sheets. For various parameterizations of the orbital elements, the model yields realistic assessments of the northern ice sheet. Lack of a land-sea heat capacity contrast represents one of the chief difficulties of the model.
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects whereas it is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the deep learning concept was proposed. The convolutional neural network (CNN), as one of the methods of deep learning, can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process through which humans classify different kinds of objects, we bring forth a new classification method which combines a visual attention model and CNN. Firstly, we use the visual attention model to simulate the processing of the human visual selection mechanism. Secondly, we use CNN to simulate the processing of how humans select features and extract the local features of those selected areas. Finally, not only does our classification method depend on those local features, but it also adds human semantic features to classify objects. Our classification method has apparent advantages from a biological standpoint. Experimental results demonstrated that our method significantly improved the efficiency of classification.
Clinical Named Entity Recognition Using Deep Learning Models.
Wu, Yonghui; Jiang, Min; Xu, Jun; Zhi, Degui; Xu, Hua
2017-01-01
Clinical Named Entity Recognition (NER) is a critical natural language processing (NLP) task to extract important concepts (named entities) from clinical narratives. Researchers have extensively investigated machine learning models for clinical NER. Recently, there have been increasing efforts to apply deep learning models to improve the performance of current clinical NER systems. This study examined two popular deep learning architectures, the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN), to extract concepts from clinical texts. We compared the two deep neural network architectures with three baseline Conditional Random Fields (CRFs) models and two state-of-the-art clinical NER systems using the i2b2 2010 clinical concept extraction corpus. The evaluation results showed that the RNN model trained with the word embeddings achieved a new state-of-the-art performance (a strict F1 score of 85.94%) for the defined clinical NER task, outperforming the best-reported system that used both manually defined and unsupervised learning features. This study demonstrates the advantage of using deep neural network architectures for clinical concept extraction, including distributed feature representation, automatic feature learning, and long-term dependencies capture. This is one of the first studies to compare the two widely used deep learning models and demonstrate the superior performance of the RNN model for clinical NER.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-04-11
This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction, and common errors. Section 4 describes the WORLD system in detail, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several appendices supplement the main sections.
Encoding properties of haltere neurons enable motion feature detection in a biological gyroscope
Fox, Jessica L.; Fairhall, Adrienne L.; Daniel, Thomas L.
2010-01-01
The halteres of dipteran insects are essential sensory organs for flight control. They are believed to detect Coriolis and other inertial forces associated with body rotation during flight. Flies use this information for rapid flight control. We show that the primary afferent neurons of the haltere’s mechanoreceptors respond selectively with high temporal precision to multiple stimulus features. Although we are able to identify many stimulus features contributing to the response using principal component analysis, predictive models using only two features, common across the cell population, capture most of the cells’ encoding activity. However, different sensitivity to these two features permits each cell to respond to sinusoidal stimuli with a different preferred phase. This feature similarity, combined with diverse phase encoding, allows the haltere to transmit information at a high rate about numerous inertial forces, including Coriolis forces. PMID:20133721
Aeolian features and processes at the Mars Pathfinder landing site
Greeley, Ronald; Kraft, Michael; Sullivan, Robert; Wilson, Gregory; Bridges, Nathan; Herkenhoff, Ken; Kuzmin, Ruslan O.; Malin, Michael; Ward, Wes
1999-01-01
The Mars Pathfinder landing site contains abundant features attributed to aeolian, or wind, processes. These include wind tails, drift deposits, duneforms of various types, ripplelike features, and ventifacts (the first clearly seen on Mars). Many of these features are consistent with formation involving sand-size particles. Although some features, such as dunes, could develop from saltating sand-size aggregates of finer grains, the discovery of ventifact flutes cut in rocks strongly suggests that at least some of the grains are crystalline, rather than aggregates. Excluding the ventifacts, the orientations of the wind-related features correlate well with the orientations of bright wind streaks seen on Viking Orbiter images in the general area. They also correlate with wind direction predictions from the NASA-Ames General Circulation Model (GCM) which show that the strongest winds in the area occur in the northern hemisphere winter and are directed toward 209°. Copyright 1999 by the American Geophysical Union.
CRT--Cascade Routing Tool to define and visualize flow paths for grid-based watershed models
Henson, Wesley R.; Medina, Rose L.; Mayers, C. Justin; Niswonger, Richard G.; Regan, R.S.
2013-01-01
The U.S. Geological Survey Cascade Routing Tool (CRT) is a computer application for watershed models that include the coupled Groundwater and Surface-water FLOW model, GSFLOW, and the Precipitation-Runoff Modeling System (PRMS). CRT generates output to define cascading surface and shallow subsurface flow paths for grid-based model domains. CRT requires a land-surface elevation for each hydrologic response unit (HRU) of the model grid; these elevations can be derived from a Digital Elevation Model raster data set of the area containing the model domain. Additionally, a list is required of the HRUs containing streams, swales, lakes, and other cascade termination features along with indices that uniquely define these features. Cascade flow paths are determined from the altitudes of each HRU. Cascade paths can cross any of the four faces of an HRU to a stream or to a lake within or adjacent to an HRU. Cascades can terminate at a stream, lake, or HRU that has been designated as a watershed outflow location.
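The core cascade idea — route each cell to its steepest downslope neighbor across one of its four faces, stopping at a stream or lake cell — can be sketched as follows; the input arrays are hypothetical and this omits CRT's handling of multi-HRU streams and other details:

    import numpy as np

    elev = np.load("hru_elevations.npy")                     # 2D grid, hypothetical
    outlet = np.load("stream_or_lake_mask.npy").astype(bool) # termination cells

    def downslope(r, c):
        best, drop = None, 0.0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # four faces only
            rr, cc = r + dr, c + dc
            if 0 <= rr < elev.shape[0] and 0 <= cc < elev.shape[1]:
                d = elev[r, c] - elev[rr, cc]
                if d > drop:
                    best, drop = (rr, cc), d
        return best   # None: no lower neighbor (a swale, in CRT terms)

    cell = (0, 0)                      # trace one cascade path
    while cell is not None and not outlet[cell]:
        cell = downslope(*cell)
    print("cascade terminates at:", cell)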
Some dynamics of signaling games.
Huttegger, Simon; Skyrms, Brian; Tarrès, Pierre; Wagner, Elliott
2014-07-22
Information transfer is a basic feature of life that includes signaling within and between organisms. Owing to its interactive nature, signaling can be investigated by using game theory. Game theoretic models of signaling have a long tradition in biology, economics, and philosophy. For a long time the analysis of these games mostly relied on static equilibrium concepts such as Pareto optimal Nash equilibria or evolutionarily stable strategies. More recently signaling games of various types have been investigated with the help of game dynamics, which includes dynamical models of evolution and individual learning. A dynamical analysis leads to more nuanced conclusions as to the outcomes of signaling interactions. Here we explore different kinds of signaling games that range from interactions without conflicts of interest between the players to interactions where their interests are seriously misaligned. We consider these games within the context of evolutionary dynamics (both infinite and finite population models) and learning dynamics (reinforcement learning). Some results are specific features of a particular dynamical model, whereas others turn out to be quite robust across different models. This suggests that there are certain qualitative aspects that are common to many real-world signaling interactions.
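As one concrete example of the dynamics discussed, a two-population replicator-dynamics sketch for a minimal Lewis signaling game (two equiprobable states, two signals, two acts, common interest); the payoff matrix is the expected-payoff table over the four pure strategies per role, and the step size and initial shares are arbitrary choices:

    import numpy as np

    # Expected payoffs of the 4 sender strategies (rows) against the 4
    # receiver strategies (columns); rows/columns 1-2 are the separating
    # strategies, 3-4 the pooling ones.
    payoff = np.array([[1.0, 0.0, 0.5, 0.5],
                       [0.0, 1.0, 0.5, 0.5],
                       [0.5, 0.5, 0.5, 0.5],
                       [0.5, 0.5, 0.5, 0.5]])

    x = np.array([0.30, 0.20, 0.25, 0.25])   # sender population shares
    y = np.full(4, 0.25)                     # receiver population shares
    dt = 0.01
    for _ in range(100000):
        fx, fy = payoff @ y, payoff.T @ x
        x += dt * x * (fx - x @ fx)          # replicator equation (sender side)
        y += dt * y * (fy - y @ fy)          # replicator equation (receiver side)
    print(np.round(x, 3), np.round(y, 3))    # drifts toward a signaling system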
Miller, Thomas F.
2017-01-01
We present a coarse-grained simulation model that is capable of simulating the minute-timescale dynamics of protein translocation and membrane integration via the Sec translocon, while retaining sufficient chemical and structural detail to capture many of the sequence-specific interactions that drive these processes. The model includes accurate geometric representations of the ribosome and Sec translocon, obtained directly from experimental structures, and interactions parameterized from nearly 200 μs of residue-based coarse-grained molecular dynamics simulations. A protocol for mapping amino-acid sequences to coarse-grained beads enables the direct simulation of trajectories for the co-translational insertion of arbitrary polypeptide sequences into the Sec translocon. The model reproduces experimentally observed features of membrane protein integration, including the efficiency with which polypeptide domains integrate into the membrane, the variation in integration efficiency upon single amino-acid mutations, and the orientation of transmembrane domains. The central advantage of the model is that it connects sequence-level protein features to biological observables and timescales, enabling direct simulation for the mechanistic analysis of co-translational integration and for the engineering of membrane proteins with enhanced membrane integration efficiency. PMID:28328943
Anatomy of an anesthesia information management system.
Shah, Nirav J; Tremper, Kevin K; Kheterpal, Sachin
2011-09-01
Anesthesia information management systems (AIMS) have become more prevalent as more sophisticated hardware and software have increased usability and reliability. National mandates and incentives have driven adoption as well. AIMS can be developed in one of several software models (Web based, client/server, or incorporated into a medical device). Irrespective of the development model, the best AIMS have a feature set that allows for comprehensive management of workflow for an anesthesiologist. Key features include preoperative, intraoperative, and postoperative documentation; quality assurance; billing; compliance and operational reporting; patient and operating room tracking; and integration with hospital electronic medical records. Copyright © 2011 Elsevier Inc. All rights reserved.
Walter, Fiona M; Emery, Jon; Braithwaite, Dejana; Marteau, Theresa M
2004-01-01
Although the family history is increasingly used for genetic risk assessment of common chronic diseases in primary care, evidence suggests that lay understanding about inheritance may conflict with medical models. This study systematically reviewed and synthesized the qualitative literature exploring understanding about familial risk held by persons with a family history of cancer, coronary artery disease, and diabetes mellitus. Twenty-two qualitative articles were found after a comprehensive literature search and were critically appraised; 11 were included. A meta-ethnographic approach was used to translate the studies across each other, synthesize the translation, and express the synthesis. A dynamic process emerged by which a personal sense of vulnerability included some features that mirror the medical factors used to assess risk, such as the number of affected relatives. Other features are more personal, such as experience of a relative's disease, sudden or premature death, perceived patterns of illness relating to gender or age at death, and comparisons between a person and an affected relative. The developing vulnerability is interpreted using personal mental models, including models of disease causation, inheritance, and fatalism. A person's sense of vulnerability affects how that person copes with, and attempts to control, any perceived familial risk. Persons with a family history of a common chronic disease develop a personal sense of vulnerability that is informed by the salience of their family history and interpreted within their personal models of disease causation and inheritance. Features that give meaning to familial risk may be perceived differently by patients and professionals. This review identifies key areas for health professionals to explore with patients that may improve the effectiveness of communication about disease risk and management.
Spin-related origin of the magnetotransport feature at filling factor 7/11
NASA Astrophysics Data System (ADS)
Gamez, Gerardo; Muraki, Koji
2010-03-01
Experiments by Pan et al. disclosed quantum Hall (QH) effect-like features at unconventional filling fractions, such as 4/11 and 7/11, not included in the Jain sequence [1]. These features were considered evidence for a new class of fractional quantum Hall (FQH) states whose origin, unlike that of ordinary FQH states, is linked to interactions between composite fermions (CFs). However, the exact origin of these features is not well established yet. Here we focus on 7/11, where a minimum in the longitudinal resistance and a plateau-like structure in the Hall resistance are observed at a much higher field, 11.4 T, in a 30-nm quantum well (QW). Our density-dependent studies show that at this field, the FQH states flanking 7/11, viz. the 2/3 and 3/5 states, are both fully spin polarized. Despite this fact, tilted-field experiments reveal that the 7/11 feature weakens and then disappears upon tilting. Using a CF model, we show that the spin degree of freedom may not be completely frozen in the region between the 2/3 and 3/5 states even when both states are fully polarized. Systematic studies unveil that the exact location of the 7/11 feature depends on the electron density and the QW width, in accordance with the model. Our model can also account for the reported contrasting behavior upon tilting of 7/11 and its electron-hole counterpart 4/11. [1] Pan et al., Phys. Rev. Lett. 90, 016801 (2003).
Patient-Specific Deep Architectural Model for ECG Classification
Luo, Kan; Cuschieri, Alfred
2017-01-01
Heartbeat classification is a crucial step for arrhythmia diagnosis during electrocardiographic (ECG) analysis. The new scenario of wireless body sensor network- (WBSN-) enabled ECG monitoring puts forward a higher-level demand for this traditional ECG analysis task. Previously reported methods mainly addressed this requirement with the applications of a shallow structured classifier and expert-designed features. In this study, a modified frequency slice wavelet transform (MFSWT) was first employed to produce the time-frequency image for each heartbeat signal. Deep learning (DL) was then applied to heartbeat classification. Here, we propose a novel model incorporating automatic feature abstraction and a deep neural network (DNN) classifier. Features were automatically abstracted by a stacked denoising auto-encoder (SDA) from the transformed time-frequency image. The DNN classifier was constructed from an encoder layer of the SDA and a softmax layer. In addition, a deterministic patient-specific heartbeat classifier was achieved by fine-tuning on heartbeat samples, which included a small subset of individual samples. The performance of the proposed model was evaluated on the MIT-BIH arrhythmia database. Results showed that an overall accuracy of 97.5% was achieved using the proposed model, confirming that the proposed DNN model is a powerful tool for heartbeat pattern recognition. PMID:29065597
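The SDA-plus-softmax architecture can be sketched with Keras (assuming TensorFlow is available). The sketch below pre-trains a one-layer denoising auto-encoder and reuses its encoder under a softmax layer, with a final short fit standing in for patient-specific fine-tuning; layer sizes are invented, and random arrays stand in for MFSWT time-frequency images. It is not the authors' implementation.

```python
import numpy as np
from tensorflow.keras import layers, Model

rng = np.random.default_rng(1)
n_features, n_hidden, n_classes = 256, 64, 5
X = rng.random((512, n_features)).astype("float32")  # stand-in TF images
y = rng.integers(0, n_classes, 512)                  # stand-in beat labels

# --- 1) Denoising auto-encoder pre-training --------------------------
inp = layers.Input(shape=(n_features,))
code = layers.Dense(n_hidden, activation="relu")(layers.GaussianNoise(0.1)(inp))
recon = layers.Dense(n_features, activation="sigmoid")(code)
dae = Model(inp, recon)
dae.compile(optimizer="adam", loss="mse")
dae.fit(X, X, epochs=2, verbose=0)     # learn to reconstruct clean input

# --- 2) Encoder layer + softmax classifier, then fine-tuning ---------
clf = Model(inp, layers.Dense(n_classes, activation="softmax")(code))
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clf.fit(X, y, epochs=2, verbose=0)     # global training
clf.fit(X[:32], y[:32], epochs=2, verbose=0)  # patient-specific fine-tuning
print(clf.predict(X[:1], verbose=0))   # posterior over the 5 beat classes
```

Reusing the `code` tensor in both models shares the encoder weights, which is what makes the pre-training transfer to the classifier.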
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Blackman, H.S.; Novack, S.D.
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
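The population-coding account of "compulsory averaging" can be illustrated in a few lines: pooled responses of Gaussian-tuned orientation units to a target and a flanker decode to an intermediate orientation. The tuning width, flanker weight, and decoder below are invented for illustration, not the authors' parameterization.

```python
import numpy as np

# Preferred orientations of a population of orientation-tuned units (deg).
prefs = np.linspace(0, 180, 64, endpoint=False)

def response(theta, sigma=20.0):
    """Gaussian tuning on the circular orientation domain (period 180 deg)."""
    d = np.minimum(np.abs(prefs - theta), 180 - np.abs(prefs - theta))
    return np.exp(-0.5 * (d / sigma) ** 2)

# Spatial integration: target and flanker signals are pooled, with a
# fixed flanker weight standing in for distance-dependent pooling.
target, flanker, w = 30.0, 70.0, 0.8
pooled = response(target) + w * response(flanker)

# Population-vector decoding on the doubled-angle circle.
angles = np.deg2rad(2 * prefs)
est = np.rad2deg(np.arctan2((pooled * np.sin(angles)).sum(),
                            (pooled * np.cos(angles)).sum())) / 2 % 180
print(f"decoded orientation: {est:.1f} deg")  # lies between 30 and 70
```

The decoded estimate falls between the target and flanker orientations, which is the averaging signature the model attributes to early spatial integration rather than to object recognition itself.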
Modeling of Casting Defects in an Integrated Computational Materials Engineering Approach
NASA Astrophysics Data System (ADS)
Sabau, Adrian S.
To accelerate the introduction of new cast alloys, the modeling and simulation of multiphysical phenomena needs to be considered in the design and optimization of mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered: A356, 356, and 319. The data on calculated solidification shrinkage is presented and its effects on microporosity levels discussed. Examples are given for predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted here for the sake of completeness and to illustrate how the length scales of the microstructure features as well as porosity defects are taken into account for modeling the mechanical properties. The data on casting defects, including microstructure features, is crucial for evaluating the final performance-related properties of the component.
Modelling multimodal expression of emotion in a virtual agent.
Pelachaud, Catherine
2009-12-12
Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours. Our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach where a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex. Only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion. It is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger that participated in the BioCreative 2 challenge and analyze what contributes to its good performance. Our tagger is based on the conditional random fields model (CRF), the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and second overall. Moreover, we obtained our results by mostly applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of an additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and bi-directional parsing models will produce the same results. We show that due to different feature settings, a CRF model can be asymmetric and that the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but constant advantage over forward parsing models. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score solely based on the training corpus for gene mention tagging. Data sets, programs, and an on-line service of our gene mention tagger can be accessed at http://aiia.iis.sinica.edu.tw/biocreative2.htm.
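The integration step can be pictured with a small sketch: two taggers score candidate mentions, the scores are combined, and a dictionary filter removes implausible strings. The averaging rule, threshold, and example spans below are invented placeholders; the actual system's weighting scheme is not specified here.

```python
def integrate(forward, backward, stopwords, threshold=0.5):
    """Merge {(start, end, text): score} outputs from a forward- and a
    backward-parsing tagger, then apply a dictionary filter."""
    merged = {}
    for span in set(forward) | set(backward):
        # Average the two scores, treating a missing prediction as 0.
        merged[span] = 0.5 * (forward.get(span, 0.0) + backward.get(span, 0.0))
    return {s: v for s, v in merged.items()
            if v >= threshold and s[2].lower() not in stopwords}

fwd = {(0, 5, "BRCA1"): 0.9, (10, 14, "cell"): 0.8}
bwd = {(0, 5, "BRCA1"): 0.8}
print(integrate(fwd, bwd, stopwords={"cell"}))  # keeps only BRCA1
```

Averaging rewards mentions that both parsing directions agree on, which is one simple way the asymmetry between the two models can be exploited.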
Brunker, K; Hampson, K; Horton, D L; Biek, R
2012-12-01
Landscape epidemiology and landscape genetics combine advances in molecular techniques, spatial analyses, and epidemiological models to generate a more real-world understanding of infectious disease dynamics and provide powerful new tools for the study of RNA viruses. Using dog rabies as a model, we have identified how key questions regarding viral spread and persistence can be addressed using a combination of these techniques. In contrast to wildlife rabies, investigations into the landscape epidemiology of domestic dog rabies require more detailed assessment of the role of humans in disease spread, including the incorporation of anthropogenic landscape features, human movements, and socio-cultural factors into spatial models. In particular, identifying and quantifying the influence of anthropogenic features on pathogen spread and measuring the permeability of dispersal barriers are important considerations for planning control strategies, and may differ according to cultural, social, and geographical variation across countries or continents. Challenges for dog rabies research include the development of metapopulation models and transmission networks using genetic information to uncover potential source/sink dynamics and identify the main routes of viral dissemination. Information generated from a landscape genetics approach will facilitate spatially strategic control programmes that accommodate heterogeneities in the landscape and therefore utilise resources in the most cost-effective way. This can include the efficient placement of vaccine barriers, surveillance points, and adaptive management for large-scale control programmes.
Environmental correlates to behavioral health outcomes in Alzheimer's special care units.
Zeisel, John; Silverstein, Nina M; Hyde, Joan; Levkoff, Sue; Lawton, M Powell; Holmes, William
2003-10-01
We systematically measured the associations between environmental design features of nursing home special care units and the incidence of aggression, agitation, social withdrawal, depression, and psychotic problems among persons living there who have Alzheimer's disease or a related disorder. We developed and tested a model of critical health-related environmental design features in settings for people with Alzheimer's disease. We used hierarchical linear modeling statistical techniques to assess associations between seven environmental design features and behavioral health measures for 427 residents in 15 special care units. Behavioral health measures included the Cohen-Mansfield physical agitation, verbal agitation, and aggressive behavior scales, the Multidimensional Observation Scale for Elderly Subjects depression and social withdrawal scales, and BEHAVE-AD (psychotic symptom list) misidentification and paranoid delusions scales. Statistical controls were included for the influence of, among others, cognitive status, need for assistance with activities of daily living, prescription drug use, amount of Alzheimer's staff training, and staff-to-resident ratio. Although hierarchical linear modeling minimizes the risk of Type I (false-positive) error, this exploratory study also pays special attention to avoiding Type II error (the failure to recognize possible relationships between behavioral health characteristics and independent variables). We found associations between each behavioral health measure and particular environmental design features, as well as between behavioral health measures and both resident and nonenvironmental facility variables. This research demonstrates the potential that environment has for contributing to the improvement of Alzheimer's symptoms. A balanced combination of pharmacologic, behavioral, and environmental approaches is likely to be most effective in improving the health, behavior, and quality of life of people with Alzheimer's disease.
Effectiveness of feature and classifier algorithms in character recognition systems
NASA Astrophysics Data System (ADS)
Wilson, Charles L.
1993-04-01
At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for more than character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits, and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learning Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.
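An analysis of answer correlations of the kind described can be sketched in a few lines: given per-item correctness indicators for two recognizers, the Pearson correlation of their error patterns measures how similar their failure modes are. The data here are synthetic placeholders, not the conference results.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 58_000  # matches the digit sample size mentioned above

# Synthetic per-character correctness (1 = correct) for two recognizers
# that share a common source of difficulty plus independent noise.
difficulty = rng.random(n)
sys_a = (rng.random(n) > 0.30 * difficulty).astype(float)
sys_b = (rng.random(n) > 0.35 * difficulty).astype(float)

# Pearson correlation between the two systems' error indicators.
r = np.corrcoef(1 - sys_a, 1 - sys_b)[0, 1]
print(f"error-pattern correlation: {r:.3f}")
```

A high correlation between systems built on different algorithms is exactly the observation the paper uses to argue that the training data, not the classifier family, sets the accuracy limit.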
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
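The correlation-based selection step can be sketched in isolation: compute pairwise correlations between the hidden units' weight vectors and greedily drop units that are strongly correlated with one already kept. The matrix sizes and threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((100, 192))   # 100 hidden units x 192 input weights

def select_uncorrelated(W, max_corr=0.9):
    """Greedily keep hidden units whose weight vectors are not strongly
    correlated with any already-kept unit."""
    corr = np.abs(np.corrcoef(W))     # unit-by-unit correlation matrix
    kept = []
    for i in range(W.shape[0]):
        if all(corr[i, j] < max_corr for j in kept):
            kept.append(i)
    return kept

kept = select_uncorrelated(W)
print(f"kept {len(kept)} of {W.shape[0]} hidden units")
```

Dropping redundant units before the convolutional stage is what yields the roughly 50% reduction in global feature-extraction cost the abstract reports.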
User-Oriented Modeling Tools for Advanced Hybrid and Climate-Appropriate Rooftop Air Conditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woolley, Jonathan; Univ. of California, Davis, CA; Modera, Mark
Hybrid unitary air conditioning systems offer a pathway to substantially reduce energy use and peak electrical demand for cooling, heating, and ventilation in commercial buildings. Hybrid air conditioners incorporate multiple subsystems that are carefully orchestrated to provide climate- and application-specific efficiency advantages. There are a multitude of hybrid system architectures, but common subsystems include: heat recovery ventilation, indirect evaporative cooling, desiccant dehumidification, variable speed fans, modulating dampers, and multi-stage or variable-speed vapor compression cooling. Categorically, hybrid systems can operate in numerous discrete modes. For example: indirect evaporative cooling may operate for periods when the subsystem provides adequate sensible cooling, then vapor compression cooling will be included when more cooling or dehumidification is necessary. Laboratory assessments, field studies, and simulations have demonstrated that hybrid unitary air conditioners could reduce energy use for cooling and ventilation by 30-90% depending on climate and application. Heretofore, it has been challenging - if not impossible - for practitioners to model hybrid air conditioners as part of building energy simulations, and this limitation has severely obstructed broader adoption of technologies in this class. In this project, we developed a new feature for EnergyPlus that enables modeling hybrid unitary air conditioning equipment for building energy simulations. This is a significant advancement for both theory and practice, and confers public benefit by enabling practitioners to evaluate this compelling efficiency technology as a part of building energy simulations. The feature is a black-box model that requires extensive performance data for each hybrid unitary product. In parallel, we also developed new features for the Technology Performance Exchange to enable manufacturers to submit performance data in a standard format that can be used with the hybrid unitary model in EnergyPlus. Additionally, through this project we expanded university educational resources and university-manufacturing industry collaborations in the field of energy efficiency technology. Over two years, we involved 20 undergraduate students in ambitious research projects focused on modeling complex multi-mode mechanical systems, supported three mechanical engineering bachelor theses, established undergraduate apprenticeships with multiple industry partners, and involved those partners in the process of design, validation, and debugging for the new EnergyPlus feature. The EnergyPlus feature is described and discussed in an academic article, as well as in an engineering reference and input/output reference documentation for EnergyPlus. The Technology Performance Exchange features are live and publicly accessible, our manufacturer partners are primed to submit initial product information and performance data to the exchange, and the EnergyPlus feature is scheduled for public release in Spring 2018 as a part of EnergyPlus v8.9.
Papp, Laszlo; Poetsch, Nina; Grahovac, Marko; Schmidbauer, Victor; Woehrer, Adelheid; Preusser, Matthias; Mitterhauser, Markus; Kiesel, Barbara; Wadsak, Wolfgang; Beyer, Thomas; Hacker, Marcus; Traub-Weidinger, Tatjana
2017-11-24
Gliomas are the most common types of tumors in the brain. While the definitive diagnosis is routinely made ex vivo by histopathologic and molecular examination, diagnostic work-up of patients with suspected glioma is mainly done by using magnetic resonance imaging (MRI). Nevertheless, L-S-methyl-¹¹C-methionine (¹¹C-MET) positron emission tomography (PET) holds great potential for the characterization of gliomas. The aim of this study was to establish machine learning (ML) driven survival models for glioma built on ¹¹C-MET PET, ex vivo, and patient characteristics. Methods: 70 patients with a treatment-naïve glioma, who had a positive ¹¹C-MET PET and histopathology-derived ex vivo feature extraction, such as World Health Organization (WHO) 2007 tumor grade, histology, and isocitrate dehydrogenase (IDH1-R132H) mutation status, were included. The ¹¹C-MET-positive primary tumors were delineated semi-automatically on PET images, followed by the extraction of tumor-to-background-ratio-based general and higher-order textural features by applying five different binning approaches. In vivo and ex vivo features, as well as patient characteristics (age, weight, height, body-mass-index, Karnofsky-score), were merged to characterize the tumors. Machine learning approaches were utilized to identify relevant in vivo, ex vivo, and patient features and their relative weights for 36-month survival prediction. The resulting feature weights were used to establish three predictive models per binning configuration based on a combination of: in vivo/ex vivo and clinical patient information (M36IEP), in vivo and patient-only information (M36IP), and in vivo only (M36I). In addition, a binning-independent ex vivo and patient-only (M36EP) model was created. The established models were validated in a Monte Carlo (MC) cross-validation scheme. Results: The most prominent ML-selected and -weighted features were patient- and ex vivo-based, followed by in vivo features. The highest area under the curve (AUC) values of our models as revealed by the MC cross-validation were: 0.9 (M36IEP), 0.87 (M36EP), 0.77 (M36IP) and 0.72 (M36I). Conclusion: Survival prediction of glioma patients based on amino acid PET using computer-supported predictive models based on in vivo, ex vivo, and patient features is highly accurate. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Environmental modeling and recognition for an autonomous land vehicle
NASA Technical Reports Server (NTRS)
Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.
1987-01-01
An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, trees, etc. The architecture is organized around a set of data bases for generic object models and perceptual structures, temporary memory for the instantiation of object and relational hypotheses, and a long term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. Researchers describe these particular components: the perceptual structure database, the grouping processes that operate over this, schemas, and the long term terrain database. A processing example that matches predictions from the long term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long term terrain database is given.
Global MHD Modeling of Auroral Conjugacy for Different IMF Conditions
NASA Astrophysics Data System (ADS)
Hesse, M.; Kuznetsova, M. M.; Liu, Y. H.; Birn, J.; Rastaetter, L.
2016-12-01
The question of whether auroral features are conjugate or not, and the search for the underlying scientific causes, is of high interest in magnetospheric and ionospheric physics. Consequently, this topic has attracted considerable attention in space-based observations of auroral features, and it has inspired a number of theoretical ideas and related modeling activities. Potential contributing factors to the presence or absence of auroral conjugacy include precipitation asymmetries in the case of the diffuse aurora, inter-hemispherical conductivity differences, magnetospheric asymmetries brought about by, e.g., dipole tilt, corotation, or IMF By, and, finally, asymmetries in field-aligned current generation primarily in the nightside magnetosphere. In this presentation, we will analyze high-resolution, global MHD simulations of magnetospheric dynamics, with emphasis on auroral conjugacy. For the purpose of this study, we define controlled conditions by selecting solstice times with steady solar wind input, the latter of which includes an IMF rotation from purely southward to east-westward. Conductivity models will include both auroral precipitation proxies as well as the effects of the asymmetric daylight. We will analyze these simulations with respect to conjugacies or the lack thereof, and study the role of the effects above in determining the former.
Classification of clinically useful sentences in clinical evidence resources.
Morid, Mohammad Amin; Fiszman, Marcelo; Raja, Kalpana; Jonnalagadda, Siddhartha R; Del Fiol, Guilherme
2016-04-01
Most patient care questions raised by clinicians can be answered by online clinical knowledge resources. However, important barriers still challenge the use of these resources at the point of care. To design and assess a method for extracting clinically useful sentences from synthesized online clinical resources that represent the most clinically useful information for directly answering clinicians' information needs. We developed a Kernel-based Bayesian Network classification model based on different domain-specific feature types extracted from sentences in a gold standard composed of 18 UpToDate documents. These features included UMLS concepts and their semantic groups, semantic predications extracted by SemRep, patient population identified by a pattern-based natural language processing (NLP) algorithm, and cue words extracted by a feature selection technique. Algorithm performance was measured in terms of precision, recall, and F-measure. The feature-rich approach yielded an F-measure of 74% versus 37% for a feature co-occurrence method (p<0.001). Excluding predication, population, semantic concept or text-based features reduced the F-measure to 62%, 66%, 58% and 69% respectively (p<0.01). The classifier applied to Medline sentences reached an F-measure of 73%, which is equivalent to the performance of the classifier on UpToDate sentences (p=0.62). The feature-rich approach significantly outperformed general baseline methods. This approach significantly outperformed classifiers based on a single type of feature. Different types of semantic features provided a unique contribution to overall classification performance. The classifier's model and features used for UpToDate generalized well to Medline abstracts. Copyright © 2016 Elsevier Inc. All rights reserved.
High-Fidelity Modeling for Health Monitoring in Honeycomb Sandwich Structures
NASA Technical Reports Server (NTRS)
Luchinsky, Dimitry G.; Hafiychuk, Vasyl; Smelyanskiy, Vadim; Tyson, Richard W.; Walker, James L.; Miller, Jimmy L.
2011-01-01
A high-fidelity model of a sandwich composite structure with real geometry is reported. The model includes two composite facesheets, a honeycomb core, piezoelectric actuators/sensors, adhesive layers, and the impactor. The novel feature of the model is that it includes modeling of the impact and of wave propagation in the structure before and after the impact. Results of modeling wave propagation, impact, and damage detection in sandwich honeycomb plates using a piezoelectric actuator/sensor scheme are reported. The results of the simulations are compared with the experimental results. It is shown that the model is suitable for analysis of the physics of failure due to impact and for testing structural health monitoring schemes based on guided wave propagation.
Oscillating in synchrony with a metronome: serial dependence, limit cycle dynamics, and modeling.
Torre, Kjerstin; Balasubramaniam, Ramesh; Delignières, Didier
2010-07-01
We analyzed serial dependencies in periods and asynchronies collected during oscillations performed in synchrony with a metronome. Results showed that asynchronies contain 1/f fluctuations, and the series of periods contain antipersistent dependence. The analysis of the phase portrait revealed a specific asymmetry induced by synchronization. We propose a hybrid limit cycle model that includes a cycle-dependent stiffness parameter endowed with fractal properties and a parametric driving function based on velocity. This model accounts for most experimentally evidenced statistical features, including serial dependence and limit cycle dynamics. We discuss the results and modeling choices within the framework of event-based and emergent timing.
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
Prioritizing causal disease genes using unbiased genomic features.
Deo, Rahul C; Musso, Gabriel; Tasan, Murat; Tang, Paul; Poon, Annie; Yuan, Christiana; Felix, Janine F; Vasan, Ramachandran S; Beroukhim, Rameen; De Marco, Teresa; Kwok, Pui-Yan; MacRae, Calum A; Roth, Frederick P
2014-12-03
Cardiovascular disease (CVD) is the leading cause of death in the developed world. Human genetic studies, including genome-wide sequencing and SNP-array approaches, promise to reveal disease genes and mechanisms representing new therapeutic targets. In practice, however, identification of the actual genes contributing to disease pathogenesis has lagged behind identification of associated loci, thus limiting the clinical benefits. To aid in localizing causal genes, we develop a machine learning approach, Objective Prioritization for Enhanced Novelty (OPEN), which quantitatively prioritizes gene-disease associations based on a diverse group of genomic features. This approach uses only unbiased predictive features and thus is not hampered by a preference towards previously well-characterized genes. We demonstrate success in identifying genetic determinants for CVD-related traits, including cholesterol levels, blood pressure, and conduction system and cardiomyopathy phenotypes. Using OPEN, we prioritize genes, including FLNC, for association with increased left ventricular diameter, which is a defining feature of a prevalent cardiovascular disorder, dilated cardiomyopathy or DCM. Using a zebrafish model, we experimentally validate FLNC and identify a novel FLNC splice-site mutation in a patient with severe DCM. Our approach stands to assist interpretation of large-scale genetic studies without compromising their fundamentally unbiased nature.
Safety modeling of urban arterials in Shanghai, China.
Wang, Xuesong; Fan, Tianxiang; Chen, Ming; Deng, Bing; Wu, Bing; Tremont, Paul
2015-10-01
Traffic safety on urban arterials is influenced by several key variables including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS derived speed data, hereafter called Floating Car Data (FCD) was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments over 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data was obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed the HPLN-ML models had the best goodness-of-fit and efficiency, and models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with increases in traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features including average segment length of arterial, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies. Copyright © 2015 Elsevier Ltd. All rights reserved.
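A drastically simplified, non-Bayesian analogue of these crash-frequency models is a plain Poisson GLM. The sketch below fits one with statsmodels on synthetic segment data; the hierarchical Poisson-lognormal structure, the ML priors, and the study's actual covariates are all beyond this illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 176                                   # one row per arterial segment

# Synthetic covariates: log traffic volume, peak-hour mean speed, lanes.
X = np.column_stack([
    rng.normal(10.0, 0.5, n),             # log(AADT)
    rng.normal(35.0, 5.0, n),             # peak-hour mean speed (km/h)
    rng.integers(2, 6, n),                # number of lanes
])
beta = np.array([0.6, 0.03, 0.25])        # invented effect sizes
y = rng.poisson(np.exp(X @ beta - 7.0))   # synthetic crash counts

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
print(model.fit().summary())
```

The lognormal term in the study's PLN models adds a segment-level random effect to capture overdispersion that this plain Poisson fit would miss.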
A Course on Multimedia Environmental Transport, Exposure, and Risk Assessment.
ERIC Educational Resources Information Center
Cohen, Yoram; And Others
1990-01-01
Included are the general guidelines, outline, a summary of major intermedia transport processes, model features, a discussion of multimedia exposure and health risk, and a list of 50 suggested references for this course. (CW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, Dheepak
This paper is an overview of the Power System Simulation Toolbox (psst). psst is an open-source Python application for the simulation and analysis of power system models. psst simulates wholesale market operation by solving a DC Optimal Power Flow (DCOPF), a Security Constrained Unit Commitment (SCUC), and a Security Constrained Economic Dispatch (SCED). psst also includes models for the various entities in a power system, such as Generator Companies (GenCos), Load Serving Entities (LSEs), and an Independent System Operator (ISO). psst features an open, modular, object-oriented architecture that makes it useful for researchers to customize, extend, and experiment beyond solving traditional problems. psst also includes a web-based Graphical User Interface (GUI) that allows for user-friendly interaction and for deployment on remote High Performance Computing (HPC) clusters for parallelized operations. This paper also provides an illustrative application of psst and benchmarks with standard IEEE test cases to show the advanced features and the performance of the toolbox.
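For a flavor of what the market-clearing solves involve, the sketch below runs a toy two-generator economic dispatch with scipy's linear programming routine. It is a generic textbook formulation with invented costs and limits, not psst's API or data model, and it omits network constraints entirely.

```python
import numpy as np
from scipy.optimize import linprog

# Toy economic dispatch: minimize generation cost subject to meeting
# a fixed load, with per-generator capacity limits (all values invented).
cost = np.array([20.0, 35.0])          # $/MWh for generators G1, G2
load = 150.0                           # system demand (MW)
bounds = [(0.0, 100.0), (0.0, 120.0)]  # (min, max) output per generator

# Equality constraint: total generation equals load.
res = linprog(c=cost, A_eq=[[1.0, 1.0]], b_eq=[load], bounds=bounds)
print("dispatch (MW):", res.x)         # expect G1 at its 100 MW limit
print("total cost ($/h):", res.fun)    # 100*20 + 50*35 = 3750
```

A DCOPF adds line-flow limits via network sensitivities, and an SCUC adds binary commitment decisions, turning the LP into a mixed-integer program.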
The powdery mildews: a review of the world's most familiar (yet poorly known) plant pathogens.
Glawe, Dean A
2008-01-01
The past decade has seen fundamental changes in our understanding of powdery mildews (Erysiphales). Research on molecular phylogeny demonstrated that Erysiphales are Leotiomycetes (inoperculate discomycetes) rather than Pyrenomycetes or Plectomycetes. Life cycles are surprisingly variable, including both sexual and asexual states, or only sexual states, or only asexual states. At least one species produces dematiaceous conidia. Analyses of rDNA sequences indicate that major lineages are more closely correlated with anamorphic features such as conidial ontogeny and morphology than with teleomorph features. Development of molecular clock models is enabling researchers to reconstruct patterns of coevolution and host-jumping, as well as ancient migration patterns. Geographic distributions of some species appear to be increasing rapidly but little is known about species diversity in many large areas, including North America. Powdery mildews may already be responding to climate change, suggesting they may be useful models for studying effects of climate change on plant diseases.
Mapping pathological phenotypes in a mouse model of CDKL5 disorder.
Amendola, Elena; Zhan, Yang; Mattucci, Camilla; Castroflorio, Enrico; Calcagno, Eleonora; Fuchs, Claudia; Lonetti, Giuseppina; Silingardi, Davide; Vyssotski, Alexei L; Farley, Dominika; Ciani, Elisabetta; Pizzorusso, Tommaso; Giustetto, Maurizio; Gross, Cornelius T
2014-01-01
Mutations in cyclin-dependent kinase-like 5 (CDKL5) cause early-onset epileptic encephalopathy, a neurodevelopmental disorder with similarities to Rett Syndrome. Here we describe the physiological, molecular, and behavioral phenotyping of a Cdkl5 conditional knockout mouse model of CDKL5 disorder. Behavioral analysis of constitutive Cdkl5 knockout mice revealed key features of the human disorder, including limb clasping, hypoactivity, and abnormal eye tracking. Anatomical, physiological, and molecular analysis of the knockout uncovered potential pathological substrates of the disorder, including reduced dendritic arborization of cortical neurons, abnormal electroencephalograph (EEG) responses to convulsant treatment, decreased visual evoked responses (VEPs), and alterations in the Akt/rpS6 signaling pathway. Selective knockout of Cdkl5 in excitatory and inhibitory forebrain neurons allowed us to map the behavioral features of the disorder to separable cell-types. These findings identify physiological and molecular deficits in specific forebrain neuron populations as possible pathological substrates in CDKL5 disorder.
A Primer on the Statistical Modelling of Learning Curves in Health Professions Education
ERIC Educational Resources Information Center
Pusic, Martin V.; Boutis, Kathy; Pecaric, Martin R.; Savenkov, Oleksander; Beckstead, Jason W.; Jaber, Mohamad Y.
2017-01-01
Learning curves are a useful way of representing the rate of learning over time. Features include an index of baseline performance (y-intercept), the efficiency of learning over time (slope parameter) and the maximal theoretical performance achievable (upper asymptote). Each of these parameters can be statistically modelled on an individual and…
A Comparison of Two Models for Cognitive Diagnosis. Research Report. ETS RR-04-02
ERIC Educational Resources Information Center
Yan, Duanli; Almond, Russell; Mislevy, Robert
2004-01-01
Diagnostic score reports linking assessment outcomes to instructional interventions are one of the most requested features of assessment products. There is a body of interesting work done in the last 20 years including Tatsuoka's rule space method (Tatsuoka, 1983), Haertal and Wiley's binary skills model (Haertal, 1984; Haertal & Wiley, 1993),…
On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12
ERIC Educational Resources Information Center
Antal, Tamás
2007-01-01
Full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…
ERIC Educational Resources Information Center
Cryan, Mark; Martinek, Thomas
2017-01-01
The Soccer Coaching Club program used the Teaching Personal and Social Responsibility (TPSR) model in an after-school soccer program for sixth grade boys between 11 and 12 years old in a local middle school. Soccer, as the featured physical activity, provided the "hook" for regular attendance. Desired outcomes included improved…
The Effect of Family Processes on School Achievement as Moderated by Socioeconomic Context
ERIC Educational Resources Information Center
Oxford, Monica L.; Lee, Jungeun Olivia
2011-01-01
This longitudinal study examined a model of early school achievement in reading and math, as it varies by socioeconomic context, using data from the NICHD Study of Early Child Care and Youth Development. A conceptual model was tested that included features of family stress, early parenting, and school readiness, through both a single-group…
Windblown Features on Venus and Geological Mapping
NASA Technical Reports Server (NTRS)
Greeley, Ronald
1999-01-01
The objectives of this study were to: 1) develop a global data base of aeolian features by searching Magellan coverage for possible time-variable wind streaks, 2) analyze the data base to characterize aeolian features and processes on Venus, 3) apply the analysis to assessments of wind patterns near the surface and for comparisons with atmospheric circulation models, 4) analyze shuttle radar data acquired for aeolian features on Earth to determine their radar characteristics, and 5) conduct geological mapping of two quadrangles. Wind, or aeolian, features are observed on Venus and aeolian processes play a role in modifying its surface. Analysis of features resulting from aeolian processes provides insight into characteristics of both the atmosphere and the surface. Wind related features identified on Venus include erosional landforms (yardangs), depositional dune fields, and features resulting from the interaction of the atmosphere and crater ejecta at the time of impact. The most abundant aeolian features are various wind streaks. Their discovery on Venus afforded the opportunity to learn about the interaction of the atmosphere and surface, both for the identification of sediments and in mapping near-surface winds.
Breast cancer Ki67 expression preoperative discrimination by DCE-MRI radiomics features
NASA Astrophysics Data System (ADS)
Ma, Wenjuan; Ji, Yu; Qin, Zhuanping; Guo, Xinpeng; Jian, Xiqi; Liu, Peifang
2018-02-01
To investigate whether quantitative radiomics features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) are associated with Ki67 expression of breast cancer. In this institutional review board approved retrospective study, we collected 377 cases of Chinese women who were diagnosed with invasive breast cancer in 2015. This cohort included 53 cases with low-Ki67 expression (Ki67 proliferation index less than 14%) and 324 cases with high-Ki67 expression (Ki67 proliferation index more than 14%). A binary classification of low- vs. high-Ki67 expression was performed. A set of 52 quantitative radiomics features, including morphological, gray scale statistic, and texture features, were extracted from the segmented lesion area. Three common machine learning classification methods, including Naive Bayes, k-Nearest Neighbor, and support vector machine with Gaussian kernel, were employed for the classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive feature set for the classifiers. Classification performance was evaluated by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The model that used the Naive Bayes classification method achieved better performance than the other two methods, yielding a 0.773 AUC value, 0.757 accuracy, 0.777 sensitivity, and 0.769 specificity. Our study showed that quantitative radiomics imaging features of breast tumors extracted from DCE-MRI are associated with breast cancer Ki67 expression. Future larger studies are needed in order to further evaluate the findings.
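The described pipeline (LASSO-style feature selection followed by a Naive Bayes classifier, scored by AUC) can be sketched with scikit-learn. The data below are synthetic stand-ins that only loosely mirror the study's feature count and class balance, and an L1-penalized logistic model substitutes for LASSO proper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((377, 52))             # 52 radiomics features
y = (rng.random(377) < 324 / 377).astype(int)  # high-Ki67 majority class
X[y == 1, :5] += 0.8                           # plant a few informative features

# L1-penalized selection (a LASSO-style filter) feeding Gaussian Naive Bayes.
clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    GaussianNB(),
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.3f}")
```

Keeping the selection step inside the cross-validated pipeline avoids the optimistic bias that comes from selecting features on the full dataset first.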
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micah Johnson, Andrew Slaughter
PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
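Of the three coupled equations, the heat equation is the easiest to illustrate in isolation; the sketch below integrates a 1-D version with an explicit finite-difference step. The grid, diffusivity, and boundary temperatures are invented, and this stands in for none of PIKA's actual MOOSE kernels.

```python
import numpy as np

nx, dx, dt = 101, 0.01, 0.5      # grid points, spacing (m), time step (s)
alpha = 1e-6                     # thermal diffusivity (m^2/s), invented
T = np.full(nx, 263.0)           # initial snowpack temperature (K)
T[0], T[-1] = 253.0, 273.0       # fixed boundary temperatures

r = alpha * dt / dx**2           # must stay below 0.5 for stability
assert r < 0.5

for _ in range(20_000):          # explicit time stepping
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"mid-depth temperature: {T[nx // 2]:.2f} K")
```

In the real application the three equations are solved together, with the temperature field driving vapor transport and the phase-field interface; that coupling is exactly what the MOOSE framework provides for free.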
A method of real-time fault diagnosis for power transformers based on vibration analysis
NASA Astrophysics Data System (ADS)
Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie
2015-11-01
In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
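The probability-producing SVM stage can be sketched with scikit-learn: an SVC trained with probability=True returns posterior class-membership estimates for a test vibration feature vector. The features, class labels, and fault type below are synthetic inventions, and the binary decision tree that routes transformers among health states is omitted.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Synthetic vibration feature vectors for two health states.
X_healthy = rng.normal(0.0, 1.0, (60, 8))
X_faulty = rng.normal(1.5, 1.0, (60, 8))
X = np.vstack([X_healthy, X_faulty])
y = np.array([0] * 60 + [1] * 60)   # 0 = normal, 1 = fault (invented labels)

# probability=True enables Platt-scaled posterior probability estimates.
clf = SVC(kernel="rbf", probability=True).fit(X, y)

sample = rng.normal(1.2, 1.0, (1, 8))
print("posterior P(class):", clf.predict_proba(sample)[0])
```

Returning posteriors rather than hard labels is what lets the full model express graded confidence about a transformer's health state at each node of the decision tree.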
Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
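A stripped-down version of the correction idea can be sketched by fitting a uniform linear drift to the displacements of imaged features from their ideal lattice sites and inverting that transform. The full method also models piezo hysteresis and creep, which this numpy sketch omits; the lattice and drift values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ideal lattice sites (a perfect square lattice, for illustration).
ii, jj = np.meshgrid(np.arange(20), np.arange(20))
ideal = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)

# Simulated measurement: slow linear drift proportional to acquisition
# time (row-major scan order) plus positional noise.
t = np.linspace(0, 1, len(ideal))[:, None]
drift_true = np.array([0.8, -0.5])          # lattice units per frame
measured = ideal + t * drift_true + rng.normal(0, 0.02, ideal.shape)

# Fit the drift velocity by least squares on (measured - ideal) vs. time.
v, *_ = np.linalg.lstsq(t, measured - ideal, rcond=None)

# Inverse transform: subtract the fitted drift from every feature position.
corrected = measured - t * v
print("fitted drift:", v.ravel())            # ~ [0.8, -0.5]
print("residual RMS:", np.sqrt(((corrected - ideal) ** 2).mean()))
```

Working in real space like this is what lets defects and step edges be masked out of the fit, as the abstract describes.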
Fold assessment for comparative protein structure modeling.
Melo, Francisco; Sali, Andrej
2007-11-01
Accurate and automated assessment of both geometrical errors and incompleteness of comparative protein structure models is necessary for an adequate use of the models. Here, we describe a composite score for discriminating between models with the correct and incorrect fold. To find an accurate composite score, we designed and applied a genetic algorithm method that searched for a most informative subset of 21 input model features as well as their optimized nonlinear transformation into the composite score. The 21 input features included various statistical potential scores, stereochemistry quality descriptors, sequence alignment scores, geometrical descriptors, and measures of protein packing. The optimized composite score was found to depend on (1) a statistical potential z-score for residue accessibilities and distances, (2) model compactness, and (3) percentage sequence identity of the alignment used to build the model. The accuracy of the composite score was compared with the accuracy of assessment by single and combined features as well as by other commonly used assessment methods. The testing set was representative of models produced by automated comparative modeling on a genomic scale. The composite score performed better than any other tested score in terms of the maximum correct classification rate (i.e., 3.3% false positives and 2.5% false negatives) as well as the sensitivity and specificity across the whole range of thresholds. The composite score was implemented in our program MODELLER-8 and was used to assess models in the MODBASE database that contains comparative models for domains in approximately 1.3 million protein sequences.
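As a toy illustration of what a composite score of this kind looks like, the function below combines the three selected inputs through a logistic transformation. The weights, bias, and functional form are invented stand-ins; the paper's actual nonlinear transformation was found by a genetic algorithm and is not reproduced here.

```python
import numpy as np

def composite_score(zscore, compactness, seq_identity,
                    w=(-1.0, 2.0, 0.05), bias=-1.5):
    """Toy composite fold-assessment score: a logistic combination of a
    statistical-potential z-score, model compactness, and alignment
    sequence identity. All weights are invented stand-ins, not the
    GA-optimized transformation from the paper."""
    s = w[0] * zscore + w[1] * compactness + w[2] * seq_identity + bias
    return 1.0 / (1.0 + np.exp(-s))   # interpreted as ~P(correct fold)

# Example: favorable z-score, compact model, 35% sequence identity.
print(f"{composite_score(-2.0, 0.8, 35.0):.2f}")
```

The point of the logistic form is simply that a single thresholdable number comes out, so a false-positive/false-negative trade-off can be tuned across the whole model database.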
Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken
2014-03-01
We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer dataset that comes along with the included Butterfly R package. In the included R script, a univariate feature selection method is used for the dimension reduction step, but in the future we wish to use a more powerful multivariate feature reduction method based on neural networks (Kriesel, 2007). A script written in R (designed to run on R studio) accompanies this article that implements this algorithm and is available at http://butterflygeraci.codeplex.com/. For details on the R package or for help installing the software refer to the accompanying document, Supporting Material and Appendix.
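The chaos-game mechanism at the core of the method is easy to demonstrate: a sequence of discretized variable values drives an iterated map toward the corners of a square, and the visited points form a subject's 2-D signature. The binarization and corner assignment below are illustrative choices, not Butterfly's actual protocol.

```python
import numpy as np

def chaos_game_trace(values, ratio=0.5):
    """Map a sequence of binary feature values to 2-D points by repeatedly
    moving part-way toward one of four square corners, with the corner
    chosen by consecutive value pairs."""
    corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    pts, p = [], np.array([0.5, 0.5])
    for a, b in zip(values[::2], values[1::2]):
        p = p + ratio * (corners[2 * a + b] - p)   # move toward a corner
        pts.append(p.copy())
    return np.array(pts)

rng = np.random.default_rng(8)
expression = (rng.random(100) > 0.5).astype(int)   # binarized variables
trace = chaos_game_trace(expression)
print(trace[-5:])   # final points of this subject's 2-D signature
```

Because the map retains a memory of the value history, subjects with similar variable patterns land in similar regions of the square, which is what makes the 2-D picture clusterable.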
Histological Image Feature Mining Reveals Emergent Diagnostic Properties for Renal Cancer
Kothari, Sonal; Phan, John H.; Young, Andrew N.; Wang, May D.
2016-01-01
Computer-aided histological image classification systems are important for making objective and timely cancer diagnostic decisions. These systems use combinations of image features that quantify a variety of image properties. Because researchers tend to validate their diagnostic systems on specific cancer endpoints, it is difficult to predict which image features will perform well on a new cancer endpoint. In this paper, we define a comprehensive set of common image features (consisting of 12 distinct feature subsets) that quantify a variety of image properties. We use a data-mining approach to determine which feature subsets and image properties emerge as part of an “optimal” diagnostic model when applied to specific cancer endpoints. Our goal is to assess the performance of such comprehensive image feature sets for application to a wide variety of diagnostic problems. We perform this study on 12 endpoints, including 6 renal tumor subtype endpoints and 6 renal cancer grade endpoints. Keywords: histology, image mining, computer-aided diagnosis. PMID:28163980
Covariance Analysis of Vision Aided Navigation by Bootstrapping
2012-03-22
vision aided navigation. The aircraft uses its INS estimate to geolocate ground features, track those features to aid the INS, and using that aided...development of the 2-D case, including the dynamics and measurement model development, the state space representation and the use of the Kalman filter ...reference frame. This reference frame has its origin located somewhere on an A/C. Normally the origin is set at the A/C center of gravity to allow the use
Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M.; Galván-Tejada, Jorge I.; Treviño, Victor; Tamez-Peña, Jose
2014-01-01
Abstract. An early diagnosis of Alzheimer’s disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, where features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image could predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD Neuroimaging Initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind-test accuracy of 0.79. This model included six features, five obtained from the MRI images and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index. The groups were statistically different (p-value=2.04e−11). These results demonstrated that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and supported the ongoing notion that multimodal biomarkers outperform single-modality ones. PMID:26158047
MRI signal and texture features for the prediction of MCI to Alzheimer's disease progression
NASA Astrophysics Data System (ADS)
Martínez-Torteya, Antonio; Rodríguez-Rojas, Juan; Celaya-Padilla, José M.; Galván-Tejada, Jorge I.; Treviño, Victor; Tamez-Peña, José G.
2014-03-01
An early diagnosis of Alzheimer's disease (AD) confers many benefits. Several biomarkers from different information modalities have been proposed for the prediction of MCI-to-AD progression, where features extracted from MRI have played an important role. However, studies have focused almost exclusively on the morphological characteristics of the images. This study aims to determine whether features relating to the signal and texture of the image could add predictive power. Baseline clinical, biological, and PET information, and MP-RAGE images for 62 subjects from the Alzheimer's Disease Neuroimaging Initiative were used in this study. Images were divided into 83 regions, and 50 features were extracted from each of these. A multimodal database was constructed, and a feature selection algorithm was used to obtain an accurate and small logistic regression model, which achieved a cross-validation accuracy of 0.96. This model included six features, five obtained from the MP-RAGE image and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index, showing that the groups are statistically different (p-value of 2.04e-11). The results demonstrate that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and support the idea that multimodal biomarkers outperform single-modality biomarkers.
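A hedged sketch of the general modeling pattern shared by both versions of this study: select a small feature subset, fit a logistic model, and score it by cross-validation. The data below are synthetic stand-ins, and scikit-learn's sequential selector replaces the authors' own feature selection algorithm.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((62, 50))            # stand-in for 50 regional MRI features
y = rng.integers(0, 2, 62)          # stand-in MCI-to-AD conversion labels

base = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(base, n_features_to_select=6, cv=5)
selector.fit(X, y)                  # greedy forward selection of 6 features
X_small = selector.transform(X)
print("CV accuracy:", cross_val_score(base, X_small, y, cv=5).mean())
```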
CATIA V5 Virtual Environment Support for Constellation Ground Operations
NASA Technical Reports Server (NTRS)
Kelley, Andrew
2009-01-01
This summer internship primarily involved using CATIA V5 modeling software to design and model parts to support ground operations for the Constellation program. I learned several new CATIA features, including the Imagine and Shape workbench and the Tubing Design workbench, and presented brief workbench lessons to my co-workers. Most modeling tasks involved visualizing design options for Launch Pad 39B operations, including Mobile Launcher Platform (MLP) access and internal access to the Ares I rocket. Other ground support equipment, including a hydrazine servicing cart, a mobile fuel vapor scrubber, a hypergolic propellant tank cart, and a SCAPE (Self Contained Atmospheric Protective Ensemble) suit, was created to aid in the visualization of pad operations.
Mars Global Reference Atmospheric Model 2010 Version: Users Guide
NASA Technical Reports Server (NTRS)
Justh, H. L.
2014-01-01
This Technical Memorandum (TM) presents the Mars Global Reference Atmospheric Model 2010 (Mars-GRAM 2010) and its new features. Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Applications include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Additionally, this TM includes instructions on obtaining the Mars-GRAM source code and data files as well as running Mars-GRAM. It also contains sample Mars-GRAM input and output files and an example of how to incorporate Mars-GRAM as an atmospheric subroutine in a trajectory code.
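The TM describes incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code. The sketch below shows only that calling pattern; `mars_gram_density` is a hypothetical stand-in (a crude exponential atmosphere), not the real Mars-GRAM interface.

```python
import math

def mars_gram_density(altitude_m, lat_deg, lon_deg):
    # Hypothetical stand-in for a Mars-GRAM call: a crude exponential
    # atmosphere with illustrative surface density and scale height.
    rho0, scale_height = 0.020, 11100.0          # kg/m^3, m
    return rho0 * math.exp(-altitude_m / scale_height)

def drag_deceleration(v_mps, altitude_m, lat_deg, lon_deg, beta=65.0):
    # One trajectory-integration step queries the atmosphere model, then
    # applies the standard drag law; beta is the ballistic coefficient.
    rho = mars_gram_density(altitude_m, lat_deg, lon_deg)
    return 0.5 * rho * v_mps ** 2 / beta         # m/s^2

print(drag_deceleration(5500.0, 40000.0, lat_deg=0.0, lon_deg=0.0))
```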
NASA Astrophysics Data System (ADS)
Carlomagno, J. P.
2018-05-01
We study the features of a nonlocal SU(3) Polyakov-Nambu-Jona-Lasinio model that includes wave-function renormalization. Model parameters are determined from vacuum phenomenology considering lattice-QCD-inspired nonlocal form factors. Within this framework, we analyze the properties of light scalar and pseudoscalar mesons at finite temperature and chemical potential determining characteristics of deconfinement and chiral restoration transitions.
A framework for modeling scenario-based barrier island storm impacts
Mickey, Rangley; Long, Joseph W.; Dalyander, P. Soupy; Plant, Nathaniel G.; Thompson, David M.
2018-01-01
Methods for investigating the vulnerability of existing or proposed coastal features to storm impacts often rely on simplified parametric models or one-dimensional process-based modeling studies that focus on changes to a profile across a dune or barrier island. These simple studies tend to neglect the impacts to curvilinear or alongshore-varying island planforms, the influence of non-uniform nearshore hydrodynamics and sediment transport, the irregular morphology of the offshore bathymetry, and impacts from low-magnitude wave events (e.g., cold fronts). Presented here is a framework for simulating regionally specific, low- and high-magnitude scenario-based storm impacts to assess the alongshore-variable vulnerabilities of a coastal feature. Storm scenarios based on historic hydrodynamic conditions were derived and simulated using the process-based morphologic evolution model XBeach. Model results show that the scenarios predicted patterns of erosion and overwash similar to observed qualitative morphologic changes from recent storm events that were not included in the dataset used to build the scenarios. The framework simulations were capable of predicting specific areas of vulnerability in the existing feature, and the results illustrate how this storm-vulnerability simulation framework could be used as a tool to help inform the decision-making process for scientists, engineers, and stakeholders involved in coastal zone management or restoration projects.
Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie
2014-01-01
Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740
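A minimal sketch of the pipeline shape, assuming scikit-learn: plain (temporal) FastICA stands in for the spatiotemporal variant, which scikit-learn does not provide, and the sales data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
sales = rng.random((200, 12))             # stand-in: 200 weeks x 12 branches
target = sales.sum(axis=1)[1:]            # next-step aggregate sales (toy)

ica = FastICA(n_components=4, random_state=0)
features = ica.fit_transform(sales)[:-1]  # independent components as features

model = SVR(kernel="rbf", C=10.0).fit(features[:150], target[:150])
print("test R^2:", model.score(features[150:], target[150:]))
```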
Fractal based modelling and analysis of electromyography (EMG) to identify subtle actions.
Arjunan, Sridhar P; Kumar, Dinesh K
2007-01-01
The paper reports the use of fractal theory and the fractal dimension to study the non-linear properties of the surface electromyogram (sEMG) and to use these properties to classify subtle hand actions. The paper identifies a new feature related to the fractal dimension, the bias, which has been found to be useful in modelling muscle activity from sEMG. Experimental results demonstrate that a feature set consisting of the bias values and the fractal dimension of the recordings is suitable for classifying sEMG against the different hand gestures. Scatter plots demonstrate simple relationships between these features and the four hand gestures. The results indicate small inter-experimental variation but large inter-subject variation, which may be due to differences in the size and shape of muscles across subjects. Possible applications of this research include prosthetic hands and the control of machines and computers.
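A hedged sketch of one widely used fractal-dimension estimator (Higuchi's method) applied to an sEMG-like signal; the paper's exact estimator and its bias feature are not reproduced here.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled curve at lag k
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / (len(idx) - 1) / k   # curve-length normalization
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    # slope of log L(k) versus log(1/k) estimates the fractal dimension
    k = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(lk), 1)
    return slope

emg = np.random.randn(1024)                   # stand-in for a recorded sEMG burst
print(higuchi_fd(emg))
```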
The Impact of Solid Surface Features on Fluid-Fluid Interface Configuration
NASA Astrophysics Data System (ADS)
Araujo, J. B.; Brusseau, M. L. L.
2017-12-01
Pore-scale fluid processes in geological media are critical for a broad range of applications such as radioactive waste disposal, carbon sequestration, soil moisture distribution, subsurface pollution, land stability, and oil and gas recovery. The continued improvement of high-resolution image acquisition and processing has provided a means to test the usefulness of theoretical models developed to simulate pore-scale fluid processes, through the direct quantification of interfaces. High-resolution synchrotron X-ray microtomography is used in combination with advanced visualization tools to characterize fluid distributions in natural geologic media. The studies revealed the presence of fluid-fluid interfaces associated with macroscopic features on the surfaces of the solids, such as pits and crevices. These features and their respective fluid interfaces, which are not included in current theoretical or computational models, may have a significant impact on the accurate simulation and understanding of multiphase flow, energy, heat, and mass transfer processes.
Improved Diagnostic Multimodal Biomarkers for Alzheimer's Disease and Mild Cognitive Impairment
Martínez-Torteya, Antonio; Treviño, Víctor; Tamez-Peña, José G.
2015-01-01
The early diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI) is very important for treatment research and patient care purposes. Few biomarkers are currently considered in clinical settings, and their use is still optional. The objective of this work was to determine whether multimodal features not previously associated with AD could improve the classification accuracy between AD, MCI, and healthy controls, which may impact future AD biomarkers. For this, the Alzheimer's Disease Neuroimaging Initiative database was mined for case-control candidates. At least 652 baseline features extracted from MRI and PET analyses, biological samples, and clinical data up to February 2014 were used. A feature selection methodology that includes a genetic algorithm search coupled to a logistic regression classifier, with forward and backward selection strategies, was used to explore combinations of features. This generated diagnostic models with sizes ranging from 3 to 8 features, including well-documented AD biomarkers as well as unexplored imaging, biochemical, and clinical features. Accuracies of 0.85, 0.79, and 0.80 were achieved for HC-AD, HC-MCI, and MCI-AD classifications, respectively, when evaluated on a blind test set. In conclusion, a set of features provided additional and independent information to well-established AD biomarkers, aiding in the classification of MCI and AD. PMID:26106620
NASA Technical Reports Server (NTRS)
Carlson, L. A.; Horn, W. J.
1981-01-01
A computer model for predicting the trajectory and thermal behavior of zero-pressure high-altitude balloons was developed. In accord with flight data, the model permits radiative emission and absorption by the lifting gas and daytime gas temperatures above that of the balloon film. It also includes ballasting, venting, and valving. Predictions obtained with the model are compared with data from several flights, and newly discovered features are discussed.
A simple 2D biofilm model yields a variety of morphological features.
Hermanowicz, S W
2001-01-01
A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
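A deliberately minimal, columnar cellular-automaton sketch in the spirit of the model described above: growth is nutrient-limited with depth, and the surface erodes stochastically. Real 2D variants with lateral growth and diffusive transport produce the mushroom-like forms; all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
W, H, STEPS = 40, 40, 150
film = np.zeros((H, W), dtype=bool)
film[0, :] = True                    # row 0 is the substratum, fully seeded

for _ in range(STEPS):
    heights = film.sum(axis=0)       # occupied column depth (no overhangs here)
    for c in range(W):
        h = heights[c]
        if h < H and rng.random() < 0.5 * np.exp(-0.1 * h):
            film[h, c] = True        # divide upward; growth slows with depth
        if h > 1 and rng.random() < 0.02:
            film[h - 1, c] = False   # random surface erosion (detachment)

print("mean biofilm thickness:", film.sum(axis=0).mean())
```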
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze its accuracy and precision at different levels of model complexity.
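A compact numerical illustration of the trade-off analyzed in the paper, assuming NumPy: polynomials of increasing degree fit noisy samples of a smooth function, and training error falls while test error eventually rises.

```python
import numpy as np

rng = np.random.default_rng(2)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(np.pi * x_train) + rng.normal(0, 0.2, 20)  # noisy samples
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(np.pi * x_test)                             # true function

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```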
How well do CMIP5 models simulate the low-level jet in western Colombia?
NASA Astrophysics Data System (ADS)
Sierra, Juan P.; Arias, Paola A.; Vieira, Sara C.; Agudelo, Jhoana
2017-11-01
The Choco jet is an important atmospheric feature of Colombian and northern South American hydro-climatology. This work assesses the ability of 26 coupled and 11 uncoupled (AMIP) global climate models (GCMs) included in the fifth phase of the Coupled Model Intercomparison Project (CMIP5) archive to simulate the basic climatological features (annual cycle, spatial distribution, and vertical structure) of this jet. Using factor and cluster analysis, we objectively classify the models into Best, Worst, and Intermediate groups. Despite the coarse resolution of the GCMs, this study demonstrates that nearly all models can represent the existence of the Choco low-level jet. AMIP and Best models present a more realistic simulation of the jet. Worst models exhibit biases such as an anomalous southward location of the Choco jet throughout the year and a shallower jet. The models' skill in representing this jet comes from their ability to reproduce some of its main causes, such as the temperature and pressure differences between particular regions in the eastern Pacific and western Colombian lands, which are non-local features. Conversely, Worst models considerably underestimate the temperature and pressure differences between these key regions. We identify a close relationship between the location of the Choco jet and the Intertropical Convergence Zone (ITCZ), and CMIP5 models are able to represent this relationship. Errors in Worst models are related to biases in the location of the ITCZ over the eastern tropical Pacific Ocean, as well as in the representation of topography and in horizontal resolution.
3D GIS for Flood Modelling in River Valleys
NASA Astrophysics Data System (ADS)
Tymkow, P.; Karpina, M.; Borkowski, A.
2016-06-01
The objective of this study is the implementation of a system architecture for collecting and analysing data, as well as visualizing results, for hydrodynamic modelling of flood flows in river valleys using remote sensing methods, three-dimensional geometry of spatial objects, and GPU multithreaded processing. The proposed solution includes: a spatial data acquisition segment, data processing and transformation, mathematical modelling of flow phenomena, and results visualization. The data acquisition segment was based on aerial laser scanning supplemented by images in the visible range. Vector data creation was based on automatic and semiautomatic algorithms for DTM and 3D spatial feature modelling. Algorithms for modelling building and vegetation geometry were proposed or adopted from the literature. The implementation of the framework was designed as modular software using open specifications and partially reusing open-source projects. The database structure for gathering and sharing vector data, including flood modelling results, was created using PostgreSQL. For the internal structure of the feature classes of spatial objects in the database, the CityGML standard was used. For the hydrodynamic modelling, a two-dimensional solution of the Navier-Stokes equations was implemented. Visualization of geospatial data and flow model results was transferred to the client-side application, giving independence from the server hardware platform. A real-world case in Poland, part of the Widawa River valley near the city of Wroclaw, was selected to demonstrate the applicability of the proposed system.
Study for Updated Gout Classification Criteria (SUGAR): identification of features to classify gout
Taylor, William J.; Fransen, Jaap; Jansen, Tim L.; Dalbeth, Nicola; Schumacher, H. Ralph; Brown, Melanie; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Eliseev, Maxim; McCarthy, Geraldine; Stamp, Lisa K.; Perez-Ruiz, Fernando; Sivera, Francisca; Ea, Hang-Korng; Gerritsen, Martijn; Scire, Carlo; Cavagna, Lorenzo; Lin, Chingtsai; Chou, Yin-Yi; Tausche, Anne-Kathrin; Vargas-Santos, Ana Beatriz; Janssen, Matthijs; Chen, Jiunn-Horng; Slot, Ole; Cimmino, Marco A.; Uhlig, Till; Neogi, Tuhina
2015-01-01
Objective: To determine which clinical, laboratory, and imaging features most accurately distinguish gout from non-gout. Methods: A cross-sectional study of consecutive rheumatology clinic patients with at least one swollen joint or subcutaneous tophus. Gout was defined by synovial fluid or tophus aspirate microscopy by certified examiners in all patients. The sample was randomly divided into a model development sample (2/3) and a test sample (1/3). Univariate and multivariate associations between clinical features and MSU-defined gout were determined using logistic regression modelling. Shrinkage of regression weights was performed to prevent over-fitting of the final model. Latent class analysis was conducted to identify patterns of joint involvement. Results: In total, 983 patients were included. Gout was present in 509 (52%). In the development sample (n=653), the following features were selected for the final model (multivariate OR): joint erythema (2.13), difficulty walking (7.34), time to maximal pain < 24 hours (1.32), resolution by 2 weeks (3.58), tophus (7.29), MTP1 ever involved (2.30), location of currently tender joints in other foot/ankle (2.28) or MTP1 (2.82), serum urate level > 6 mg/dl (0.36 mmol/l) (3.35), ultrasound double contour sign (7.23), and X-ray erosion or cyst (2.49). The final model performed adequately in the test set, with no evidence of misfit and high discrimination and predictive ability. MTP1 involvement was the most common joint pattern (39.4%) in gout cases. Conclusion: Ten key discriminating features have been identified for further evaluation for new gout classification criteria. Ultrasound findings and degree of uricemia add discriminating value and will significantly contribute to more accurate classification criteria. PMID:25777045
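To make the model's use concrete, the sketch below turns the multivariate odds ratios quoted above into an illustrative gout probability via a logistic score. The intercept and the treatment of each feature as an independent additive log-odds term are assumptions for illustration; this is not the published SUGAR calculator.

```python
import math

# Multivariate ORs quoted in the abstract (a subset of the final model)
ODDS_RATIOS = {
    "joint_erythema": 2.13, "difficulty_walking": 7.34,
    "max_pain_within_24h": 1.32, "resolution_by_2wk": 3.58,
    "tophus": 7.29, "mtp1_ever_involved": 2.30,
    "serum_urate_gt_6": 3.35, "us_double_contour": 7.23,
    "xray_erosion_or_cyst": 2.49,
}
INTERCEPT = -3.0                  # hypothetical; not reported in the abstract

def gout_probability(present_features):
    # sum of log-odds for features present, passed through the logistic
    logit = INTERCEPT + sum(math.log(ODDS_RATIOS[f]) for f in present_features)
    return 1.0 / (1.0 + math.exp(-logit))

print(gout_probability({"tophus", "us_double_contour", "serum_urate_gt_6"}))
```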
Structured Light-Based 3D Reconstruction System for Plants.
Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima
2015-07-29
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of the plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than 13-mm error for plant size, leaf size, and internode distance.
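One standard ingredient of 3D point cloud registration is solving for the rigid transform between corresponding points. A self-contained sketch of the SVD (Kabsch) solution follows; the paper's full pipeline (correspondence search, multi-view merging) is not shown, and the data are synthetic.

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t with Q ~ P @ R.T + t, via SVD."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

P = np.random.rand(50, 3)                      # source cloud
t_true = np.array([0.1, -0.2, 0.3])
Q = P + t_true                                 # target cloud (pure translation)
R, t_est = kabsch(P, Q)
print(np.allclose(t_est, t_true))              # recovers the transform
```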
A survey of Applied Psychological Services' models of the human operator
NASA Technical Reports Server (NTRS)
Siegel, A. I.; Wolf, J. J.
1979-01-01
A historical perspective is presented on the major features and status of two families of computer simulation models in which the human operator plays the primary role. Both task-oriented and message-oriented models are included. Two other recent efforts, dealing with visual information processing, are also summarized; these involve not whole-model development but a family of subroutines customized to add human aspects to existing models. A global diagram of the generalized model development/validation process is presented and related to 15 criteria for model evaluation.
Terrestrial Analogs to Wind-Related Features at the Viking and Pathfinder Landing Sites on Mars
NASA Technical Reports Server (NTRS)
Greeley, Ronald; Bridges, Nathan T.; Kuzmin, Ruslan O.; Laity, Julie E.
2002-01-01
Features in the Mojave Desert and Iceland provide insight into the characteristics and origin of Martian wind-related landforms seen by the Viking and Pathfinder landers. The terrestrial sites were chosen because they exhibit diverse wind features that are generally well understood. These features have morphologies comparable to those on Mars and include origins by deposition and erosion, with erosional processes modifying both soils and rocks. Duneforms and drifts are the most common depositional features seen at the Martian landing sites and indicate supplies of sand-sized particles blown by generally unidirectional winds. Erosional features include lag deposits, moat-like depressions around some rocks, and exhumed soil horizons. They indicate that wind can deflate at least some sediments and that this process is particularly effective where the wind interacts with rocks. The formation of ripples and wind tails involves a combination of depositional and erosional processes. Rock erosional features, or ventifacts, are recognized by their overall shapes, erosional flutes, and characteristic surface textures resulting from abrasion by windblown particles. The physics of saltation requires that particles in ripples and duneforms are predominantly sand-sized (60-2000 microns). The orientations of duneforms, wind tails, moats, and ventifacts are correlated with surface winds above particle threshold. Such winds are influenced by local topography and are correlated with winds at higher altitudes predicted by atmospheric models.
Burnside, Elizabeth S.; Liu, Jie; Wu, Yirong; Onitilo, Adedayo A.; McCarty, Catherine; Page, C. David; Peissig, Peggy; Trentham-Dietz, Amy; Kitchner, Terrie; Fan, Jun; Yuan, Ming
2015-01-01
Rationale and Objectives: The discovery of germline genetic variants associated with breast cancer has engendered interest in risk stratification for improved, targeted detection and diagnosis. However, there has yet to be a comparison of the predictive ability of these genetic variants with that of mammography abnormality descriptors. Materials and Methods: Our IRB-approved, HIPAA-compliant study utilized a personalized medicine registry in which participants consented to provide a DNA sample and participate in longitudinal follow-up. In our retrospective, age-matched, case-control study of 373 cases and 395 controls who underwent breast biopsy, we collected risk factors selected a priori based on the literature, including demographic variables based on the Gail model, common germline genetic variants, and diagnostic mammography findings according to BI-RADS. We developed predictive models using logistic regression to determine the predictive ability of (1) demographic variables, (2) 10 selected genetic variants, or (3) mammography BI-RADS features. We evaluated each model in turn by calculating a risk score for each patient using 10-fold cross validation, used this risk estimate to construct ROC curves, and compared the AUC of each using the DeLong method. Results: The performance of the regression model using demographic risk factors was not statistically different from that of the model using genetic variants (p=0.9). The model using mammography features (AUC=0.689) was superior to both the demographic model (AUC=0.598; p<0.001) and the genetic model (AUC=0.601; p<0.001). Conclusion: BI-RADS features exceeded the ability of demographic variables and 10 selected germline genetic variants to predict breast cancer in women recommended for biopsy. PMID:26514439
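A sketch of the comparison design, assuming scikit-learn and synthetic stand-in data: cross-validated risk scores from each logistic model are summarized by AUC. DeLong's test for comparing AUCs is not in scikit-learn and is omitted here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 768
X_demo = rng.normal(size=(n, 4))        # stand-in demographic variables
X_birads = rng.normal(size=(n, 8))      # stand-in BI-RADS descriptor codes
y = rng.integers(0, 2, n)               # biopsy outcome (synthetic)

for name, X in [("demographic", X_demo), ("BI-RADS", X_birads)]:
    # out-of-fold risk scores via 10-fold cross validation
    scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=10, method="predict_proba")[:, 1]
    print(name, "AUC:", round(roc_auc_score(y, scores), 3))
```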
NASA Astrophysics Data System (ADS)
Elbanna, A. E.
2015-12-01
The brittle portion of the crust contains structural features such as faults, jogs, joints, bends, and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation, and arrest. Incorporating these existing features in models, together with the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the structure of the seismogenic zone. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the submeter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone (STZ) theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate-dependent plasticity with a physically motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.
A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.
Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao
2017-06-16
This paper establishes a Markov chain model as a unified framework for describing the evolution processes in complex networks. The unique feature of the proposed model is its capability to address the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are the classical models, including Poisson, exponential, and power-law distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model for various applications, including citation analysis, online social networks, and vehicular network design, are also discussed in the paper.
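The sketch below is not the paper's Markov chain, but a common growth-model analogue: an attachment rule that interpolates between uniform and degree-proportional attachment spans the exponential and power-law degree regimes, illustrating the kind of trichotomy the framework formalizes.

```python
import numpy as np

def grow_network(n=5000, alpha=1.0, seed=0):
    """alpha=0: uniform attachment; alpha=1: pure preferential attachment."""
    rng = np.random.default_rng(seed)
    degrees = np.array([1.0, 1.0])                 # two seed nodes, one edge
    for _ in range(n - 2):
        w = alpha * degrees + (1 - alpha)          # mixed attachment weights
        target = rng.choice(len(degrees), p=w / w.sum())
        degrees[target] += 1                       # new node links to target
        degrees = np.append(degrees, 1.0)
    return degrees

for a in (0.0, 1.0):
    d = grow_network(alpha=a)
    print(f"alpha={a}: max degree {int(d.max())}, mean {d.mean():.2f}")
```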
Borisyuk, Alla; Semple, Malcolm N; Rinzel, John
2002-10-01
A mathematical model was developed for exploring the sensitivity of low-frequency inferior colliculus (IC) neurons to interaural phase disparity (IPD). The formulation involves a firing-rate-type model that does not include spikes per se. The model IC neuron receives IPD-tuned excitatory and inhibitory inputs (viewed as the output of a collection of cells in the medial superior olive). The model cell possesses cellular properties of firing rate adaptation and postinhibitory rebound (PIR). The descriptions of these mechanisms are biophysically reasonable, but only semi-quantitative. We seek to explain within a minimal model the experimentally observed mismatch between responses to IPD stimuli delivered dynamically and those delivered statically (McAlpine et al. 2000; Spitzer and Semple 1993). The model reproduces many features of the responses to static IPD presentations, binaural beat, and partial range sweep stimuli. These features include differences in responses to a stimulus presented in static or dynamic context: sharper tuning and phase shifts in response to binaural beats, and hysteresis and "rise-from-nowhere" in response to partial range sweeps. Our results suggest that dynamic response features are due to the structure of inputs and the presence of firing rate adaptation and PIR mechanism in IC cells, but do not depend on a specific biophysical mechanism. We demonstrate how the model's various components contribute to shaping the observed phenomena. For example, adaptation, PIR, and transmission delay shape phase advances and delays in responses to binaural beats, adaptation and PIR shape hysteresis in different ranges of IPD, and tuned inhibition underlies asymmetry in dynamic tuning properties. We also suggest experiments to test our modeling predictions: in vitro simulation of the binaural beat (phase advance at low beat frequencies, its dependence on firing rate), in vivo partial range sweep experiments (dependence of the hysteresis curve on parameters), and inhibition blocking experiments (to study inhibitory tuning properties by observation of phase shifts).
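A minimal firing-rate sketch in the spirit of the model, assuming NumPy: a slow variable tracks the input, producing adaptation during excitation and a transient overshoot on release from inhibition (a crude post-inhibitory rebound). All parameters and the single-adaptation-variable simplification are invented for illustration.

```python
import numpy as np

dt, T = 0.1, 600.0                                  # ms
t = np.arange(0.0, T, dt)
drive = np.where((t > 100) & (t < 300), -1.0, 0.5)  # inhibition, then release

r, a = 0.0, 0.0
tau_r, tau_a = 5.0, 80.0                            # fast rate, slow adaptation
rates = []
for inp in drive:
    a += dt / tau_a * (inp - a)                     # slowly tracks the input
    r += dt / tau_r * (-r + max(inp - a, 0.0))      # rectified rate dynamics
    rates.append(r)

# after release (t > 300 ms), inp - a is transiently large: rebound overshoot
print("peak rate after release:", max(rates[int(300 / dt):]))
```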
CAD-model-based vision for space applications
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.
1988-01-01
A pose acquisition system operating in space must be able to perform well in a variety of applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint from the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
Modeling Tidal Stresses on Satellites Using an Enhanced SatStressGUI
NASA Astrophysics Data System (ADS)
Patthoff, D. A.; Pappalardo, R. T.; Li, J.; Ayton, B.; Kay, J.; Kattenhorn, S. A.
2015-12-01
Icy and rocky satellites of our solar system display a wide range of geological deformation on their surfaces. Some are old and heavily cratered, while others are observed to be presently active. Many of the potential sources of stress that can deform satellites are tied to the tidal deformation the moons experience as they orbit their parent planets. Other plausible sources of global-scale stress include a change in orbital parameters, nonsynchronous rotation, or volume change induced by the melting or freezing of a subsurface layer. We turn to computer modeling to correlate observed geologic features with the possible stresses that created them. One model is the SatStress open-source program developed by Z. Selvans (Wahr et al., 2009) to compute viscoelastic diurnal and nonsynchronous rotation stresses using a four-layer viscoelastic satellite model. Kay and Kattenhorn (2010) expanded on this work by developing SatStressGUI, which integrated SatStress's original features into a graphical user interface. SatStressGUI computes stress vectors and Love numbers, and generates stress plots and lineaments. We have expanded SatStressGUI by adding features such as the ability to generate cycloid-style lineaments, calculation of stresses resulting from obliquity, and more efficient batch processing of data. Users may also define their own Love numbers to propagate through further calculations. Here we demonstrate our recent enhancements to SatStressGUI and its abilities by comparing observed features on Enceladus and Europa to modeled diurnal, nonsynchronous, and obliquity stresses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeters, Stephanie; Hoogeman, Mischa S.; Heemsbergen, Wilma D.
2006-09-01
Purpose: To analyze whether inclusion of predisposing clinical features in the Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) model improves the estimation of late gastrointestinal toxicity. Methods and Materials: This study includes 468 prostate cancer patients participating in a randomized trial comparing 68 with 78 Gy. We fitted the probability of developing late toxicity within 3 years (rectal bleeding, high stool frequency, and fecal incontinence) with the original and a modified LKB model, in which a clinical feature (e.g., history of abdominal surgery) was taken into account by fitting subset-specific TD50s. The ratio of these TD50s is the dose-modifying factor for that clinical feature. Dose distributions of the anorectal wall (bleeding and frequency) and anal wall (fecal incontinence) were used. Results: The modified LKB model gave significantly better fits than the original LKB model. Patients with a history of abdominal surgery had a lower tolerance to radiation than patients without previous surgery, with a dose-modifying factor of 1.1 for bleeding and of 2.5 for fecal incontinence. The dose-response curve for bleeding was approximately two times steeper than that for frequency and three times steeper than that for fecal incontinence. Conclusions: Inclusion of predisposing clinical features significantly improved the estimation of the NTCP. For patients with a history of abdominal surgery, more severe dose constraints should therefore be used during treatment plan optimization.
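For concreteness, here is a hedged sketch of the (modified) LKB calculation the study fits: a generalized EUD is reduced from a dose-volume histogram, and TD50 is divided by the dose-modifying factor for a predisposing feature. All parameter values below are illustrative, not the fitted ones.

```python
import math

def gEUD(doses, volumes, n):
    # generalized equivalent uniform dose from a (toy) DVH
    total = sum(volumes)
    return sum(v / total * d ** (1 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, td50, m, n, dmf=1.0):
    # dose-modifying factor dmf > 1 lowers the effective tolerance TD50
    td50_eff = td50 / dmf
    t = (gEUD(doses, volumes, n) - td50_eff) / (m * td50_eff)
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))   # standard normal CDF

dvh_dose = [20, 40, 60, 70]        # Gy bins (toy DVH)
dvh_vol = [0.4, 0.3, 0.2, 0.1]     # relative volumes
print(lkb_ntcp(dvh_dose, dvh_vol, td50=80, m=0.15, n=0.1, dmf=1.1))
```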
Regulating automobile pollution under certainty, competition, and imperfect information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Innes, R.
1996-09-01
This paper studies an integrated economic model of automobile emissions that incorporates consumer mileage, automobile features, and fuel content choices. Optimal regulatory policies are shown to include fuel content standards, gasoline taxes, and direct automobile regulation or taxation.
April 2013 MOVES Model Review Work Group Meeting Materials
Presentations from the April 30, 2013, meeting include a focus on the next version of the MOtor Vehicle Emission Simulator (MOVES), evaluating proposed data sources and analysis methods, and commenting on or suggesting features and enhancements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, J; Pollom, E; Durkee, B
2015-06-15
Purpose: To predict response to radiation treatment using computational FDG-PET and CT images in locally advanced head and neck cancer (HNC). Methods: 68 patients with Stage III-IVB HNC treated with chemoradiation were included in this retrospective study. For each patient, we analyzed the primary tumor and lymph nodes on PET and CT scans acquired both prior to and during radiation treatment, which led to 8 combinations of image datasets. From each image set, we extracted high-throughput radiomic features of the following types: statistical, morphological, textural, histogram, and wavelet, for a total of 437 features. We then performed unsupervised redundancy removal and stability testing on these features. To avoid over-fitting, we trained a logistic regression model with simultaneous feature selection based on the least absolute shrinkage and selection operator (LASSO). To objectively evaluate prediction ability, we performed 5-fold cross validation (CV) with 50 random repeats of stratified bootstrapping. Feature selection and model training were conducted solely on the training set and independently validated on the held-out test set. The receiver operating characteristic (ROC) curve of the pooled results was computed, and the area under the ROC curve (AUC) was calculated as the figure of merit. Results: For predicting local-regional recurrence, our model built on pre-treatment PET of lymph nodes achieved the best performance (AUC=0.762) in 5-fold CV, which compared favorably with node volume and SUVmax (AUC=0.704 and 0.449, p<0.001). Wavelet coefficients turned out to be the most predictive features. Prediction of distant recurrence showed a similar trend, in which pre-treatment PET features of lymph nodes had the highest AUC of 0.705. Conclusion: The radiomics approach identified novel imaging features that are predictive of radiation treatment response. If prospectively validated in larger cohorts, they could aid in risk-adaptive treatment of HNC.
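A minimal sketch of the stated protocol, assuming scikit-learn and synthetic stand-ins for the 437 radiomic features: L1-penalized logistic regression with stratified cross-validation scored by AUC. The 50-repeat stratified bootstrapping is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(68, 437))     # 68 patients x 437 radiomic features (toy)
y = rng.integers(0, 2, 68)         # recurrence labels (synthetic)

# L1 penalty performs embedded feature selection while fitting the model
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("CV AUC:", cross_val_score(lasso, X, y, cv=cv, scoring="roc_auc").mean())
```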
Automatic classification of animal vocalizations
NASA Astrophysics Data System (ADS)
Clemins, Patrick J.
2005-11-01
Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of techniques to perform many of the aforementioned bioacoustics tasks. Mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite automaton, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing to bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, the framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared with results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system, and the results are compared with labels assigned by experts. The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform on which to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.
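As a concrete example of the feature-extraction step, the sketch below computes standard MFCCs with librosa on a synthetic tone standing in for a vocalization; the thesis' gPLP generalization is not implemented here.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
call = np.sin(2 * np.pi * 440 * t).astype(np.float32)   # stand-in vocalization

# 13 MFCCs per analysis frame: shape (13, n_frames)
mfccs = librosa.feature.mfcc(y=call, sr=sr, n_mfcc=13)
print(mfccs.shape)
# frame-level MFCC vectors would then feed an HMM classifier (e.g., hmmlearn)
```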
Suppression effects in feature-based attention
Wang, Yixue; Miller, James; Liu, Taosheng
2015-01-01
Attending to a feature enhances visual processing of that feature, but it is less clear what occurs to unattended features. Single-unit recording studies in the middle temporal (MT) area have shown that neuronal modulation is a monotonic function of the difference between the attended direction and the neuron's preferred direction. Such a relationship should predict a monotonic suppressive effect in psychophysical performance. However, past research on suppressive effects of feature-based attention has remained inconclusive. We investigated the suppressive effect for motion direction, orientation, and color in three experiments. We asked participants to detect a weak signal among noise and provided a partially valid feature cue to manipulate attention. We measured performance as a function of the offset between the cued and signal feature. We also included neutral trials, where no feature cues were presented, to provide a baseline measure of performance. Across three experiments, we consistently observed enhancement effects when the target feature and cued feature coincided and suppression effects when the target feature deviated from the cued feature. The exact profile of suppression differed across feature dimensions: whereas the profile for direction exhibited a “rebound” effect, the profiles for orientation and color were monotonic. These results demonstrate that unattended features are suppressed during feature-based attention, but the exact suppression profile depends on the specific feature. Overall, the results are largely consistent with neurophysiological data and support the feature-similarity gain model of attention. PMID:26067533
The role of park conditions and features on park visitation and physical activity.
Rung, Ariane L; Mowen, Andrew J; Broyles, Stephanie T; Gustat, Jeanette
2011-09-01
Neighborhood parks play an important role in promoting physical activity. We examined the effect of activity area type, condition, and the presence of supporting features on the number of park users and park-based physical activity levels. In total, 37 parks and 154 activity areas within parks were assessed during summer 2008 for their features and park-based physical activity. Outcomes included any park use, number of park users, and mean and total energy expenditure. Independent variables included the type and condition of the activity area, supporting features, size of the activity area, gender, and day of week. Multilevel models controlled for clustering of observations at the activity area and park levels. Type of activity area was associated with number of park users and mean and total energy expenditure, with basketball courts having the highest number of users and total energy expenditure, and playgrounds having the highest mean energy expenditure. Condition of activity areas was positively associated with the number of basketball court users and inversely associated with the number of green space users and total green space energy expenditure. Various supporting features were both positively and negatively associated with each outcome. This study provides evidence regarding characteristics of parks that can contribute to achieving physical activity goals within recreational spaces.
Periodontal Defects in the A116T Knock-in Murine Model of Odontohypophosphatasia.
Foster, B L; Sheen, C R; Hatch, N E; Liu, J; Cory, E; Narisawa, S; Kiffer-Moreira, T; Sah, R L; Whyte, M P; Somerman, M J; Millán, J L
2015-05-01
Mutations in ALPL result in hypophosphatasia (HPP), a disease causing defective skeletal mineralization. ALPL encodes tissue-nonspecific alkaline phosphatase (ALP), an enzyme that promotes mineralization by reducing inorganic pyrophosphate, a mineralization inhibitor. In addition to skeletal defects, HPP causes dental defects, and a mild clinical form of HPP, odontohypophosphatasia, features only a dental phenotype. The Alpl knockout (Alpl-/-) mouse phenocopies severe infantile HPP, including profound skeletal and dental defects. However, the severity of disease in Alpl-/- mice prevents analysis at advanced ages, including studies to target rescue of dental tissues. We aimed to generate a knock-in mouse model of odontohypophosphatasia with a primarily dental phenotype, based on a mutation (c.346G>A) identified in a human kindred with autosomal dominant odontohypophosphatasia. Biochemical, skeletal, and dental analyses were performed on the resulting Alpl+/A116T mice to validate this model. Alpl+/A116T mice featured a 50% reduction in plasma ALP activity compared with wild-type controls. No differences in litter size, survival, or body weight were observed in Alpl+/A116T versus wild-type mice. The postcranial skeleton of Alpl+/A116T mice was normal by radiography, with no differences in femur length, cortical/trabecular structure or mineral density, or mechanical properties. The trabecular compartment of the parietal bone was mildly altered. Alpl+/A116T mice featured alterations in the alveolar bone, including radiolucencies and resorptive lesions, osteoid accumulation on the alveolar bone crest, and significant differences in several bone properties measured by micro-computed tomography. Nonsignificant changes in acellular cementum did not appear to affect periodontal attachment or function, although circulating ALP activity was correlated significantly with incisor cementum thickness. The Alpl+/A116T mouse is the first model of odontohypophosphatasia, providing insights on dentoalveolar development and function under reduced ALP, bringing attention to direct effects of HPP on alveolar bone, and offering a new model for testing potential dental-targeted therapies in future studies.
Spatial Data Transfer Standard (SDTS)
1999-01-01
The American National Standards Institute's (ANSI) Spatial Data Transfer Standard (SDTS) is a mechanism for archiving and transferring spatial data (including metadata) between dissimilar computer systems. The SDTS specifies exchange constructs, such as format, structure, and content, for spatially referenced vector and raster (including gridded) data. The SDTS includes a flexible conceptual model, specifications for a quality report, transfer module specifications, data dictionary specifications, and definitions of spatial features and attributes.
Converging shock flows for a Mie-Grüneisen equation of state
NASA Astrophysics Data System (ADS)
Ramsey, Scott D.; Schmidt, Emma M.; Boyd, Zachary M.; Lilieholm, Jennifer F.; Baty, Roy S.
2018-04-01
Previous work has shown that the one-dimensional (1D) inviscid compressible flow (Euler) equations admit a wide variety of scale-invariant solutions (including the famous Noh, Sedov, and Guderley shock solutions) when the included equation of state (EOS) closure model assumes a certain scale-invariant form. However, this scale-invariant EOS class does not include even simple models used for shock compression of crystalline solids, including many broadly applicable representations of Mie-Grüneisen EOS. Intuitively, this incompatibility naturally arises from the presence of multiple dimensional scales in the Mie-Grüneisen EOS, which are otherwise absent from scale-invariant models that feature only dimensionless parameters (such as the adiabatic index in the ideal gas EOS). The current work extends previous efforts intended to rectify this inconsistency, by using a scale-invariant EOS model to approximate a Mie-Grüneisen EOS form. To this end, the adiabatic bulk modulus for the Mie-Grüneisen EOS is constructed, and its key features are used to motivate the selection of a scale-invariant approximation form. The remaining surrogate model parameters are selected through enforcement of the Rankine-Hugoniot jump conditions for an infinitely strong shock in a Mie-Grüneisen material. Finally, the approximate EOS is used in conjunction with the 1D inviscid Euler equations to calculate a semi-analytical Guderley-like imploding shock solution in a metal sphere and to determine if and when the solution may be valid for the underlying Mie-Grüneisen EOS.
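For reference, a common Mie-Grüneisen form (illustrative; the paper's exact EOS, its adiabatic bulk modulus, and the scale-invariant surrogate are developed in the text) is

```latex
p(\rho, e) = p_{\mathrm{ref}}(\rho) + \Gamma(\rho)\,\rho\,\bigl[e - e_{\mathrm{ref}}(\rho)\bigr],
```

where the reference curves $p_{\mathrm{ref}}$ and $e_{\mathrm{ref}}$ (e.g., a Hugoniot-based reference with parameters $c_0$ and $s$) carry the dimensional scales that break the scale invariance exploited by the Noh, Sedov, and Guderley solutions.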
Morris, Jeffrey S
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high-dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies and summarizes contributions that I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods presented are applied to two specific proteomic technologies, MALDI-TOF and 2D gel electrophoresis, these methods and the other principles discussed in the paper apply much more broadly to other expression proteomics technologies.
Feature-to-Feature Inference Under Conditions of Cue Restriction and Dimensional Correlation.
Lancaster, Matthew E; Homa, Donald
2017-01-01
The present study explored feature-to-feature and label-to-feature inference in a category task for different category structures. In the correlated condition, each of the 4 dimensions comprising the category was positively correlated to each other and to the category label. In the uncorrelated condition, no correlation existed between the 4 dimensions comprising the category, although the dimension to category label correlation matched that of the correlated condition. After learning, participants made inference judgments of a missing feature, given 1, 2, or 3 feature cues; on half the trials, the category label was also included as a cue. The results showed superior inference of features following training on the correlated structure, with accurate inference when only a single feature was presented. In contrast, a single-feature cue resulted in chance levels of inference for the uncorrelated structure. Feature inference systematically improved with number of cues after training on the correlated structure. Surprisingly, a similar outcome was obtained for the uncorrelated structure, an outcome that must have reflected mediation via the category label. A descriptive model is briefly introduced to explain the results, with a suggestion that this paradigm might be profitably extended to hierarchical structures where the levels of feature-to-feature inference might vary with the depth of the hierarchy.
Naturally-Occurring Canine Invasive Urothelial Carcinoma: A Model for Emerging Therapies
Sommer, Breann C.; Dhawan, Deepika; Ratliff, Timothy L.; Knapp, Deborah W.
2018-01-01
The development of targeted therapies and the resurgence of immunotherapy offer enormous potential to dramatically improve the outlook for patients with invasive urothelial carcinoma (InvUC). Optimization of these therapies, however, is crucial as only a minority of patients achieve dramatic remission, and toxicities are common. With the complexities of the therapies, and the growing list of possible drug combinations to test, highly relevant animal models are needed to assess and select the most promising approaches to carry forward into human trials. The animal model(s) should possess key features that dictate success or failure of cancer drugs in humans including tumor heterogeneity, genetic-epigenetic crosstalk, immune cell responsiveness, invasive and metastatic behavior, and molecular subtypes (e.g., luminal, basal). While it may not be possible to create these collective features in experimental models, these features are present in naturally-occurring InvUC in pet dogs. Naturally occurring canine InvUC closely mimics muscle-invasive bladder cancer in humans in regards to cellular and molecular features, molecular subtypes, biological behavior (sites and frequency of metastasis), and response to therapy. Clinical treatment trials in pet dogs with InvUC are considered a win-win scenario; the individual dog benefits from effective treatment, the results are expected to help other dogs, and the findings are expected to translate to better treatment outcomes in humans. This review will provide an overview of canine InvUC, the similarities to the human condition, and the potential for dogs with InvUC to serve as a model to predict the outcomes of targeted therapy and immunotherapy in humans. PMID:29732386
NASA Astrophysics Data System (ADS)
Wang, Yu; Guo, Yanzhi; Kuang, Qifan; Pu, Xuemei; Ji, Yue; Zhang, Zhihang; Li, Menglong
2015-04-01
The assessment of binding affinity between ligands and target proteins plays an essential role in the drug discovery and design process. As an alternative to widely used scoring approaches, machine learning methods have also been proposed for fast prediction of binding affinity with promising results, but most of them were developed as all-purpose models despite the specific functions of different protein families, even though proteins from different functional families differ in structure and physicochemical features. In this study, we propose a random forest method to predict protein-ligand binding affinity based on a comprehensive feature set covering protein sequence, binding pocket, ligand structure, and intermolecular interaction. Feature processing and compression were implemented separately for the different protein family datasets, which indicates that different features contribute to different models, so an individual representation for each protein family is necessary. Three family-specific models were constructed for three important protein target families: HIV-1 protease, trypsin, and carbonic anhydrase. As a comparison, two generic models covering diverse protein families were also built. The evaluation results show that models trained on family-specific datasets outperform those trained on the generic datasets; the Pearson and Spearman correlation coefficients (Rp and Rs) on the test sets are 0.740, 0.874, 0.735 and 0.697, 0.853, 0.723 for HIV-1 protease, trypsin, and carbonic anhydrase, respectively. Comparisons with other methods further demonstrate that individual representation and model construction for each protein family is a more reasonable way to predict the affinity of one particular protein family.
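As a rough illustration of the family-specific modeling strategy, here is a minimal sketch using scikit-learn's random forest regressor and SciPy's correlation measures; the feature matrix and affinities are synthetic placeholders, not the authors' descriptors or data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder feature matrix: rows are complexes of one protein family,
# columns are sequence / pocket / ligand / interaction descriptors.
X = rng.normal(size=(300, 50))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=300)  # synthetic affinities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One model per protein family rather than a single all-purpose model.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# Evaluate with the same correlation metrics reported in the abstract.
print("Rp =", pearsonr(y_te, pred)[0])
print("Rs =", spearmanr(y_te, pred)[0])
```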
Extended behavioural device modelling and circuit simulation with Qucs-S
NASA Astrophysics Data System (ADS)
Brinson, M. E.; Kuznetsov, V.
2018-03-01
Current trends in circuit simulation suggest a growing interest in open source software that allows access to more than one simulation engine while simultaneously supporting schematic drawing tools, behavioural Verilog-A and XSPICE component modelling, and output data post-processing. This article introduces a number of new features recently implemented in the 'Quite universal circuit simulator - SPICE variant' (Qucs-S), including its structure and fundamental schematic capture algorithms, while highlighting their use in behavioural semiconductor device modelling. Particular importance is placed on the interaction between Qucs-S schematics, equation-defined devices, SPICE B behavioural sources, and hardware description language (HDL) scripts. The multi-simulator version of Qucs is a freely available tool that offers extended modelling and simulation features compared to those provided by legacy circuit simulators. The performance of a number of Qucs-S modelling extensions is demonstrated with a GaN HEMT compact device model and data obtained from tests using the Qucs-S/Ngspice/Xyce©/SPICE OPUS multi-engine circuit simulator.
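To give a flavour of what an equation-defined device encodes, the sketch below evaluates a Shockley-style branch equation of the kind such devices can hold. This is a generic Python illustration, not Qucs-S syntax and not the GaN HEMT model from the article.

```python
import numpy as np

# Branch current of a diode-like equation-defined device (EDD):
# I(V) = Is * (exp(V / (n * Vt)) - 1), a closed-form expression of
# the sort a behavioural device model embeds.
def edd_branch_current(v, i_s=1e-14, n=1.8, vt=0.02585):
    return i_s * (np.exp(v / (n * vt)) - 1.0)

# Sweep the branch voltage the way a DC sweep would.
for v in np.linspace(0.0, 0.8, 9):
    print(f"V = {v:0.2f} V  ->  I = {edd_branch_current(v):.3e} A")
```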
Lau, Stephan; Hiemisch, Anette; Baumeister, Roy F
2015-05-01
Six experiments tested two competing models of subjective freedom during decision-making. The process model, rooted mainly in philosophical conceptions of free will, assumes that features of the process of choosing affect subjective feelings of freedom. In contrast, the outcome model predicts that subjective freedom derives from positive outcomes that are expected or achieved by a decision. Results heavily favored the outcome model over the process model. For example, participants felt freer when choosing between two equally good options than between two equally bad ones. Process features, including the number of options, decision complexity, uncertainty, the option to defer the decision, conflict among reasons, and investing high effort in choosing, generally had no effect or even negative effects on subjective freedom. In contrast, participants reported high freedom with good outcomes and low freedom with bad outcomes, and ease of deciding increased subjective freedom, consistent with the outcome model. Copyright © 2014 Elsevier Inc. All rights reserved.
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2017-05-01
Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free-text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristic (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70% and 90%. The source of features and the feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve results comparable to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection, and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
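A minimal sketch of the non-dictionary route, assuming scikit-learn: features come straight from the report text via n-gram TF-IDF, are pruned by automated feature selection, and feed an off-the-shelf classifier. The corpus and labels are tiny placeholders, not the study's pathology reports.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Placeholder free-text pathology reports and cancer labels.
reports = ["invasive carcinoma identified in the specimen",
           "benign fibrous tissue, no malignancy seen",
           "high grade urothelial carcinoma present",
           "normal mucosa without atypia"]
labels = [1, 0, 1, 0]

# Non-dictionary feature sourcing: terms are mined from the reports
# themselves, then reduced by automated feature selection.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("select", SelectKBest(chi2, k=10)),
    ("clf", LogisticRegression()),
])
pipeline.fit(reports, labels)
print(pipeline.predict(["carcinoma in situ noted"]))
```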
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, G.O.; Dress, W.B.; Kercel, S.W.
1999-05-10
A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomena, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector (Φ), or model vector. This model-based descriptor encodes the specific information that describes the phenomena and its dynamics, and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomena. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number in [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not content. The extended process feature vector is formulated as follows: Φ => [<descriptive model>, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: Φ => [<chirped decaying sinusoid>, {ω = a, λ = b, γ = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined, so that the same catastrophe could be repeated to ensure the statistical significance of the data.
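The cavitation descriptor above is concrete enough to sketch. Below, a chirped, exponentially decaying sinusoid is generated and its frequency, decay, and chirp-rate parameters recovered with SciPy's curve fitting; the parameter values are made up for illustration, not taken from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chirped, exponentially decaying sinusoid: the functional model in the
# extended process feature vector (frequency f, decay lam, chirp rate c).
def chirped_sinusoid(t, a, f, lam, c):
    return a * np.exp(-lam * t) * np.sin(2 * np.pi * (f * t + 0.5 * c * t**2))

t = np.linspace(0.0, 1.0, 2000)
rng = np.random.default_rng(1)
true = dict(a=1.0, f=40.0, lam=3.0, c=15.0)          # illustrative values
signal = chirped_sinusoid(t, **true) + 0.05 * rng.normal(size=t.size)

# Recover the parameter list {f, lam, c}; a goodness-of-fit statistic
# could then play the role of the [0, 1] confidence factor. Initial
# guesses near the expected values keep the nonlinear fit well-behaved.
popt, _ = curve_fit(chirped_sinusoid, t, signal, p0=[0.9, 39.5, 2.5, 14.5])
print(dict(zip(["a", "f", "lam", "c"], popt)))
```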
van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J
2017-01-01
Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive to early detection of blood loss. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm that interprets currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 young, healthy subjects (27 female; mean (SD) age: 24 (4) years), central blood volume (CBV) was progressively reduced by application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial phase of CBV reduction (class 1), or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasively derived hemodynamic signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and quantification of the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination with maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). The overall lowest model error was found for the models comprising TCD curve hemodynamics. Using probability estimates, the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model that contained the TCD cerebral blood flow velocity derived pulse height. The best combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans these parameters provide the best indication of the progression of central hypovolemia.
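A minimal sketch of the classification setup, assuming scikit-learn: a support vector machine with probability estimates distinguishes the three volume-status classes, and the class 2 vs. class 0 log odds is computed as in the third evaluation method. Features and labels here are synthetic stand-ins for the hemodynamic signals.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Synthetic stand-ins for hemodynamic features (e.g., stroke volume,
# cardiac output, TCD pulse height), with three volume-status classes.
X = np.vstack([rng.normal(loc=m, size=(100, 6)) for m in (0.0, 0.7, 1.5)])
y = np.repeat([0, 1, 2], 100)   # 0: normovolemia, 1: initial, 2: advanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
svm.fit(X_tr, y_tr)

# Log odds of advanced CBV reduction (class 2) vs. normovolemia (class 0).
proba = svm.predict_proba(X_te)
log_odds = np.log(proba[:, 2] / proba[:, 0])
print("accuracy:", svm.score(X_te, y_te))
print("median class-2 vs class-0 log odds:", np.median(log_odds))
```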
Modeling Electric Field Influences on Plasmaspheric Refilling
NASA Technical Reports Server (NTRS)
Liemohn, M. W.; Kozyra, J. U.; Khazanov, G. V.; Craven, Paul D.
1998-01-01
We have developed a new model of ion transport and applied it to the problem of plasmaspheric flux tube refilling after a geomagnetic disturbance. The model solves the Fokker-Planck kinetic equation by applying discrete difference numerical schemes to the various operators. Features of the model include a time-varying ionospheric source, self-consistent Coulomb collisions, a field-aligned electric field, hot plasma interactions, and ion cyclotron wave heating. We see refilling rates similar to those of earlier observations and models, except when the electric field is included; in this case, the refilling rates can be quite different than previously predicted. Depending on the populations included and the values of relevant parameters, trap zone densities can increase or decrease. In particular, the inclusion of hot populations near the equatorial region (specifically warm pancake distributions and ring current ions) can dramatically alter the refilling rate. Results are compared with observations as well as previous hydrodynamic and kinetic particle model simulations.
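As a generic illustration of the discrete-difference approach (not the paper's actual scheme), the sketch below applies an upwind finite-difference step to a 1D transport operator of the kind that appears, operator by operator, in a kinetic solver.

```python
import numpy as np

# Generic upwind finite-difference step for df/dt + v * df/ds = 0,
# the kind of discretization applied operator-by-operator when
# solving a kinetic equation numerically.
def upwind_step(f, v, ds, dt):
    fnew = f.copy()
    if v > 0:
        fnew[1:] = f[1:] - v * dt / ds * (f[1:] - f[:-1])
    else:
        fnew[:-1] = f[:-1] - v * dt / ds * (f[1:] - f[:-1])
    return fnew

s = np.linspace(0.0, 1.0, 200)
f = np.exp(-((s - 0.2) / 0.05) ** 2)   # initial distribution slice
for _ in range(300):                    # advect along the field line
    f = upwind_step(f, v=1.0, ds=s[1] - s[0], dt=0.001)
print("pulse centre near:", s[np.argmax(f)])
```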
Initial Field Trial of a Coach-Supported Web-Based Depression Treatment.
Schueller, Stephen M; Mohr, David C
2015-08-01
Early web-based depression treatments were often self-guided and included few interactive elements, instead focusing mostly on delivering informational content online. Newer programs include many more types of features. As such, trials should analyze the ways in which people use these sites in order to inform the design of subsequent sites and models of support. The current study describes a field trial in which 9 patients with major depressive disorder completed a 12-week program including weekly coach calls. Patients' usage varied widely; however, patients who formed regular usage patterns tended to persist with the program the longest. Future sites might facilitate user engagement by designing features that support regular use and by using coaches to help establish patterns that increase long-term use and benefit.
Transportation Modes Classification Using Sensors on Smartphones.
Fang, Shih-Hau; Liao, Hao-Hsiang; Fei, Yu-Xiang; Chen, Kai-Hsiang; Huang, Jen-Wei; Lu, Yu-Ding; Tsao, Yu
2016-08-19
This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper are the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms, decision trees, K-nearest neighbor, and support vector machine, to classify the user's transportation and vehicular modes. In the experiments, we discuss and compare performance from different perspectives, including accuracy for both modes, execution time, and model size. Results show that the proposed features enhance accuracy; the support vector machine provides the best classification accuracy but consumes the most prediction time. This paper also investigates the vehicle classification mode and compares the results with those of the transportation modes.
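A rough sketch of the comparison, assuming scikit-learn: features derived from accelerometer, magnetometer, and gyroscope windows feed a decision tree, K-nearest neighbor, and support vector machine, and accuracy plus prediction time are reported. The windowed sensor statistics are synthetic placeholders.

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Placeholder features: per-window statistics of accelerometer,
# magnetometer, and gyroscope signals; labels are transportation modes.
X = np.vstack([rng.normal(loc=m, size=(150, 9)) for m in (0.0, 0.8, 1.6, 2.4)])
y = np.repeat([0, 1, 2, 3], 150)  # e.g., walk, bike, bus, car
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    acc = clf.score(X_te, y_te)
    dt = time.perf_counter() - t0
    print(f"{name}: accuracy={acc:.2f}, prediction time={dt * 1e3:.1f} ms")
```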
Hypnosis and belief: A review of hypnotic delusions.
Connors, Michael H
2015-11-01
Hypnosis can create temporary but highly compelling alterations in belief. As such, it can be used to model many aspects of clinical delusions in the laboratory. This approach allows researchers to recreate features of delusions on demand and examine underlying processes with a high level of experimental control. This paper reviews studies that have used hypnosis to model delusions in this way. First, the paper reviews studies that have focused on reproducing the surface features of delusions, such as their high levels of subjective conviction and strong resistance to counter-evidence. Second, the paper reviews studies that have focused on modelling underlying processes of delusions, including the anomalous experiences or cognitive deficits that underpin specific delusional beliefs. Finally, the paper evaluates this body of research as a whole, discussing the advantages and limitations of using hypnotic models to study delusions and suggesting some directions for future research. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mady, Franck; Duchez, Jean-Bernard; Mebrouk, Yasmine
2014-10-21
We propose a model to describe the photo- and/or radiation-induced darkening of ytterbium-doped silica optical fibers. This model accounts for the well-established experimental features of photo-darkening. Degradation behaviors predicted for fibers pumped in harsh environments are also fully confirmed by the experimental data reported in the work by Duchez et al. (this proceeding), which gives a detailed characterization of the interplay between the effects of the pump and those of a superimposed ionizing irradiation (actual operating conditions in space-based applications, for instance). In particular, the dependences of the darkening build-up on the pump power, the total ionizing dose, and the dose rate are all correctly reproduced. The presented model is a 'sufficient' one, including the minimal physical ingredients required to reproduce the experimental features. Refinements could be proposed to improve, e.g., the quantitative kinetics.
Vidić, Igor; Egnell, Liv; Jerome, Neil P; Teruel, Jose R; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F; Goa, Pål Erik
2018-05-01
Diffusion-weighted MRI (DWI) is currently one of the fastest developing MRI-based techniques in oncology. Histogram properties from model fitting of DWI are useful features for differentiation of lesions, and classification can potentially be improved by machine learning. To evaluate classification of malignant and benign tumors and breast cancer subtypes using a support vector machine (SVM). Prospective. Fifty-one patients with benign (n = 23) and malignant (n = 28) breast tumors (26 ER+, of which six were HER2+). Patients were imaged with DW-MRI (3T) using twice-refocused spin-echo echo-planar imaging with repetition time/echo time (TR/TE) = 9000/86 msec, 90 × 90 matrix size, 2 × 2 mm in-plane resolution, 2.5 mm slice thickness, and 13 b-values. Apparent diffusion coefficient (ADC), relative enhanced diffusivity (RED), and the intravoxel incoherent motion (IVIM) parameters diffusivity (D), pseudo-diffusivity (D*), and perfusion fraction (f) were calculated. The histogram properties (median, mean, standard deviation, skewness, kurtosis) were used as features in SVM (10-fold cross-validation) for differentiation of lesions and subtyping. Accuracies of the SVM classifications were calculated to find the combination of features with the highest prediction accuracy. Mann-Whitney tests were performed for univariate comparisons. For benign versus malignant tumors, univariate analysis found 11 histogram properties to be significant differentiators. Using SVM, the highest accuracy (0.96) was achieved from a single feature (mean of RED), or from three-feature combinations of IVIM or ADC. Combining features from all models gave perfect classification. No single feature predicted HER2 status of ER+ tumors (univariate or SVM), although high accuracy (0.90) was achieved with SVM combining several features. Importantly, these features had to include higher-order statistics (kurtosis and skewness), indicating the importance of accounting for heterogeneity. Our findings suggest that SVM, using features from a combination of diffusion models, improves prediction accuracy for differentiation of benign versus malignant breast tumors, and may further assist in subtyping of breast cancer. Level of Evidence: 3. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2018;47:1205-1216. © 2017 International Society for Magnetic Resonance in Medicine.
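A minimal sketch of the histogram-feature step, assuming NumPy/SciPy and scikit-learn: per-lesion maps of a diffusion parameter are reduced to the five histogram properties named above and fed to an SVM with 10-fold cross-validation. The parameter maps are synthetic, not patient data.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def histogram_features(values):
    # Median, mean, standard deviation, skewness, kurtosis of the voxel
    # values of one diffusion-parameter map (e.g., ADC, RED, or D).
    return [np.median(values), values.mean(), values.std(),
            skew(values), kurtosis(values)]

# Synthetic lesions: benign (label 0) narrow maps, malignant (label 1)
# heterogeneous maps with a heavier tail (captured by skew/kurtosis).
X, y = [], []
for label, (mu, sigma) in enumerate([(1.6, 0.15), (1.0, 0.35)]):
    for _ in range(30):
        voxels = rng.normal(mu, sigma, size=500)
        if label == 1:
            voxels += rng.exponential(0.2, size=500)  # tail = heterogeneity
        X.append(histogram_features(voxels))
        y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("10-fold CV accuracy:", cross_val_score(clf, np.array(X), y, cv=10).mean())
```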
Diagnostics for Hypersonic Engine Control
2015-02-01
modeling efforts and/or lead to the development of sensors that can be used as part of scramjet engine control strategies. Activities included work on...of a model scramjet engine cannot rely on the presence of water. Instead, light sources operating at wavelengths resonant with molecular oxygen are... [Figure caption residue: transmitted beam amplitude fluctuations (scintillation), frequency axis normalized; Figure 3, oxygen absorption feature recorded using direct...]
ERIC Educational Resources Information Center
Mercer, Walter A.
Major features of the cooperative student teaching model include 1) a pattern of student teaching assignments within the school system which would provide for proportional inclusion of prospective teachers--from the nearby majority black university and the nearby majority white university--to each school serving as a student teaching facility; 2)…
ERIC Educational Resources Information Center
Mbaziira, Alex Vincent
2017-01-01
Cybercriminals are increasingly using Internet-based text messaging applications to exploit their victims. Incidents of deceptive cybercrime in text-based communication are increasing and include fraud, scams, as well as favorable and unfavorable fake reviews. In this work, we use a text-based deception detection approach to train models for…
Marti Aitken; Jane L. Hayes
2006-01-01
Roads are important ecological features of forest landscapes, but their cause-and-effect relationships with other ecosystem components are only recently becoming included in integrated landscape analyses. Simulation models can help us to understand how forested landscapes respond over time to disturbance and socioeconomic factors, and potentially to address the...
On type B cyclogenesis in a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Grotjahn, Richard
2005-01-01
A quasi-geostrophic (QG) model is used to approximate some aspects of 'type B' cyclogenesis as described in an observational paper that appeared several decades earlier in this journal. Though often cited, that earlier work has some ambiguity that has propagated into subsequent analyses. The novel aspects examined here include allowing advective nonlinearity to distort and amplify structures that are quasi-coherent and nearly stable in a linear form of the model; also, separate upper and lower structures are localized in space. Cases are studied separately where the upper trough tracks across different low-level features: an enhanced baroclinic zone (stronger horizontal temperature gradient) or a region of augmented temperature. Growth by superposition of lower and upper features is excluded by experimental design. The dynamics are evaluated with the vertical motion equation, the QG vorticity equation, the QG perturbation energy equation, and 'potential-vorticity thinking'. Results are compared against 'control' cases having no additional low-level features. Nonlinearity is examined relative to a corresponding linear calculation and is generally positive. The results are perhaps richer than the seminal article might imply, because growth is enhanced not only when properties of the lower feature reinforce growth but also when the lower feature opposes decay of the upper feature. For example, growth is enhanced where low-level warm advection introduces rising warm air to oppose the rising cold air ahead of the upper trough. Such growth is magnified when adjacent warm and cold anomalies have a strong baroclinic zone between them. The enhanced growth triggers an upstream tilt in the solution whose properties further accelerate the growth.
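For reference, the vertical motion diagnostic mentioned above is the QG omega equation; a standard textbook form in pressure coordinates (given here for orientation, not quoted from the article, and sign conventions vary) is:

```latex
\left( \nabla^2 + \frac{f_0^2}{\sigma}\frac{\partial^2}{\partial p^2} \right)\omega
  = \frac{f_0}{\sigma}\frac{\partial}{\partial p}
      \Big[ \mathbf{V}_g \cdot \nabla \big( \zeta_g + f \big) \Big]
  + \frac{1}{\sigma}\nabla^2
      \Big[ \mathbf{V}_g \cdot \nabla \Big( -\frac{\partial \Phi}{\partial p} \Big) \Big]
```

The first right-hand term represents differential vorticity advection and the second thermal advection, the two effects weighed against each other in the abstract (e.g., rising warm air ahead of the upper trough).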
New features in Saturn's atmosphere revealed by high-resolution thermal infrared images
NASA Technical Reports Server (NTRS)
Gezari, D. Y.; Mumma, M. J.; Espenak, F.; Deming, D.; Bjoraker, G.; Woods, L.; Folz, W.
1989-01-01
Observations of the stratospheric IR emission structure on Saturn are presented. The high-spatial-resolution global images show a variety of new features, including a narrow equatorial belt of enhanced emission at 7.8 micron, a prominent symmetrical north polar hotspot at all three wavelengths, and a midlatitude structure which is asymmetrically brightened at the east limb. The results confirm the polar brightening and reversal in position predicted by recent models for seasonal thermal variations of Saturn's stratosphere.