DOT National Transportation Integrated Search
2002-02-26
This document, the Introduction to the Enhanced Logistics Intratheater Support Tool (ELIST) Mission Application and its Segments, satisfies the following objectives: it identifies the mission application, known in brief as ELIST, and all seven ...
SIPHER: Scalable Implementation of Primitives for Homomorphic Encryption
2015-11-01
595–618, 2009. ... pages 403–415, 2011. ... learning with errors. In ASIACRYPT, 2011. To appear. [Ajt96] M. Ajtai. Generating hard instances of lattice problems. Quaderni di Matematica, 13:1–32, 2004. Preliminary version in STOC 1996.
Instances selection algorithm by ensemble margin
NASA Astrophysics Data System (ADS)
Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine
2018-05-01
The main limit of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without degrading performance. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning repository. Pixel-based image segmentation is a field where the storage requirements and computational cost of the applied model become particularly high. To address these limitations we conduct a study based on the application of EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (WBC), i.e., nucleus and cytoplasm, in cytological images.
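The abstract does not give the EMIS formulas, so the following is a minimal illustrative sketch (not the authors' implementation) of one common ensemble-margin definition: the normalized difference between the votes an instance receives for its true class and the largest vote count for any other class, used here to rank and filter training instances. The random forest, the margin threshold, and all function names are assumptions introduced for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def ensemble_margins(forest, X, y):
    """Margin of each instance: (votes for the true class minus the largest
    vote count for any other class) divided by the ensemble size; in [-1, 1]."""
    votes = np.stack([tree.predict(X) for tree in forest.estimators_])
    n_trees = votes.shape[0]
    margins = np.empty(len(y))
    for i, true_label in enumerate(y):
        counts = np.bincount(votes[:, i].astype(int), minlength=forest.n_classes_)
        margins[i] = (counts[true_label] - np.delete(counts, true_label).max()) / n_trees
    return margins

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
margins = ensemble_margins(forest, X, y)
keep = margins > 0.2          # illustrative threshold: discard ambiguous/noisy instances
X_sel, y_sel = X[keep], y[keep]
print(f"kept {keep.sum()} of {len(y)} instances")
```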
Instance annotation for multi-instance multi-label learning
F. Briggs; X.Z. Fern; R. Raich; Q. Lou
2013-01-01
Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen...
Tălu, Stefan
2013-07-01
The purpose of this paper is to provide a quantitative assessment of the human retinal vascular network architecture for patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME (five images) states of the retina, from the DRIVE database was analyzed using the ImageJ software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of the generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions) is similar to that of the DME cases (segmented versions). The average of the generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions) is slightly greater than that of the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to that of the DME images. The average lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for DME images (segmented and skeletonized versions). The multifractal and lacunarity analysis provides a non-invasive, predictive, complementary tool for an early diagnosis of patients with DME.
Marketing ambulatory care to women: a segmentation approach.
Harrell, G D; Fors, M F
1985-01-01
Although significant changes are occurring in health care delivery, in many instances the new offerings are not based on a clear understanding of market segments being served. This exploratory study suggests that important differences may exist among women with regard to health care selection. Five major women's segments are identified for consideration by health care executives in developing marketing strategies. Additional research is suggested to confirm this segmentation hypothesis, validate segmental differences and quantify the findings.
Holló, Gábor; Shu-Wei, Hsu; Naghizadeh, Farzaneh
2016-06-01
To compare the current (6.3) and a novel software version (6.12) of the RTVue-100 optical coherence tomograph (RTVue-OCT) for ganglion cell complex (GCC) and retinal nerve fiber layer thickness (RNFLT) image segmentation and detection of glaucoma in high myopia. RNFLT and GCC scans were acquired with software version 6.3 of the RTVue-OCT on 51 highly myopic eyes (spherical refractive error ≤-6.0 D) of 51 patients, and were analyzed with both software versions. Twenty-two eyes were nonglaucomatous, 13 were ocular hypertensive and 16 eyes had glaucoma. No difference was seen for any RNFLT or average GCC parameter between the software versions (paired t test, P≥0.084). Global loss volume was significantly lower (more normal) with version 6.12 than with version 6.3 (Wilcoxon signed-rank test, P<0.001). The agreement (κ) between the clinical classification (normal and ocular hypertensive vs. glaucoma) and the software-provided classification (normal and borderline vs. outside normal limits) was 0.3219 and 0.4442 for average RNFLT, and 0.2926 and 0.4977 for average GCC, with the two software versions, respectively (McNemar symmetry test, P≥0.289). No difference was found between the software versions in average RNFLT and GCC classification (McNemar symmetry test, P≥0.727) or in the number of eyes with at least 1 segmentation error (P≥0.109). Although GCC segmentation was improved with software version 6.12 compared with the current version in highly myopic eyes, this did not result in a significant change of the average RNFLT and GCC values, and did not significantly improve the software-provided classification for glaucoma.
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
AIM To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
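The abstract does not spell out the computation, so the sketch below shows the standard box-counting estimate of a fractal dimension D (the slope of log N(s) versus log(1/s), where N(s) counts occupied boxes of side s) for a binary segmented or skeletonized vessel image. It illustrates the generic technique with NumPy and synthetic data; it is not the ImageJ analysis the authors used.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a 2-D binary image.

    N(s) = number of s-by-s boxes containing at least one foreground pixel;
    D is the slope of log N(s) against log(1/s)."""
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in box_sizes:
        # Trim so the image tiles exactly into s-by-s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: a thin diagonal band (a line-like set should give D close to 1).
demo = np.zeros((256, 256), dtype=bool)
for i in range(256):
    demo[i, max(0, i - 2):min(256, i + 3)] = True
print(f"estimated D = {box_counting_dimension(demo):.2f}")
```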
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the severe NPDR images (segmented and skeletonized versions). The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals.
Incorporating Edge Information into Best Merge Region-Growing Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Pasolli, Edoardo
2014-01-01
We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that incorporate local edge information into the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
Advanced Homomorphic Encryption its Applications and Derivatives (AHEAD)
2013-09-01
lattice problems. Quaderni di Matematica, 13:1–32, 2004. Preliminary version in STOC 1996. [Ajt99] M. Ajtai. Generating hard instances of the short...search-to-decision reduction of [17]. References [1] M. Ajtai. Generating hard instances of lattice problems. Quaderni di Matematica, 13:1–32, 2004
NASA Technical Reports Server (NTRS)
1991-01-01
The Reusable Reentry Satellite (RRS) System is composed of the payload segment (PS), vehicle segment (VS), and mission support (MS) segment. This specification establishes the performance, design, development, and test requirements for the RRS Rodent Module (RM).
Learning to segment mouse embryo cells
NASA Astrophysics Data System (ADS)
León, Juan; Pardo, Alejandro; Arbeláez, Pablo
2017-11-01
Recent advances in microscopy enable the capture of temporal sequences during cell development stages. However, the study of such sequences is a complex and time-consuming task. In this paper we propose an automatic strategy to address the problem of semantic and instance segmentation of mouse embryos using NYU's Mouse Embryo Tracking Database. We obtain our instance proposals as refined predictions from the generalized Hough transform, using prior knowledge of the embryos' locations and their current cell stage. We use two main approaches to learn the priors: hand-crafted features and automatically learned features. Our strategy increases the baseline Jaccard index from 0.12 to 0.24 using hand-crafted features and to 0.28 using automatically learned ones.
Complete genome sequences of two Staphylococcus aureus ST5 isolates from California, USA
USDA-ARS?s Scientific Manuscript database
Staphylococcus aureus is a bacterium that can cause disease in humans and animals. S. aureus bacteria can transfer or exchange segments of genetic material with other bacteria. These segments are known as mobile genetic elements and in some instances they can encode factors that increase the abil...
Draft genome sequences of 14 Staphylococcus aureus ST5 isolates from California, USA
USDA-ARS?s Scientific Manuscript database
Staphylococcus aureus is a bacterium that can cause disease in humans and animals. S. aureus bacteria can transfer or exchange segments of genetic material with other bacteria. These segments are known as mobile genetic elements and in some instances they can encode factors that increase the abil...
The Functional Unit of Japanese Word Naming: Evidence from Masked Priming
ERIC Educational Resources Information Center
Verdonschot, Rinus G.; Kiyama, Sachiko; Tamaoka, Katsuo; Kinoshita, Sachiko; La Heij, Wido; Schiller, Niels O.
2011-01-01
Theories of language production generally describe the segment as the basic unit in phonological encoding (e.g., Dell, 1988; Levelt, Roelofs, & Meyer, 1999). However, there is also evidence that such a unit might be language specific. Chen, Chen, and Dell (2002), for instance, found no effect of single segments when using a preparation…
Motion Imagery Processing and Exploitation (MIPE)
2013-01-01
facial recognition, i.e., the identification of a specific person. Object detection is often (but not always) considered a prerequisite for instance... The goal of segmentation is to distinguish objects and identify boundaries in images. Some of the earliest approaches to facial recognition involved... methods of instance recognition are at varying levels of maturity. Facial recognition methods are arguably the most mature; the technology is well
Poly-Pattern Compressive Segmentation of ASTER Data for GIS
NASA Technical Reports Server (NTRS)
Myers, Wayne; Warner, Eric; Tutwiler, Richard
2007-01-01
Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser- and finer-level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version, consisting of thousands of B-level segments, provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.
Geometric Hitting Set for Segments of Few Orientations
Fekete, Sandor P.; Huang, Kan; Mitchell, Joseph S. B.; ...
2016-01-13
Here we study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the "hitting points"). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, (iii) pairs of horizontal/vertical segments. Lastly, we give hardness and hardness-of-approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
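As a companion to this abstract, here is a minimal sketch of the generic greedy approximation for hitting set: repeatedly pick the candidate point that hits the most not-yet-hit segments. It only illustrates the problem being approximated; it is not the authors' slope-specific algorithms nor their polynomial-time result for vertical lines and horizontal rays, and the toy instance and helper names are assumptions.

```python
def greedy_hitting_set(segments, candidate_points, hits):
    """segments: list of segment ids; candidate_points: list of points;
    hits(point, segment) -> True if the point lies on the segment.
    Returns a (not necessarily optimal) hitting set of points."""
    uncovered = set(segments)
    chosen = []
    while uncovered:
        best = max(candidate_points,
                   key=lambda p: sum(1 for s in uncovered if hits(p, s)))
        newly = {s for s in uncovered if hits(best, s)}
        if not newly:
            raise ValueError("some segment is hit by no candidate point")
        chosen.append(best)
        uncovered -= newly
    return chosen

# Toy instance: vertical lines x = a and horizontal segments y = b for x in [x0, x1];
# candidate points are the pairwise intersections.
vlines = [("v", 1), ("v", 3)]
hsegs = [("h", 2, 0, 4), ("h", 5, 2, 3)]
candidates = [(a, s[1]) for (_, a) in vlines for s in hsegs]

def hits(p, seg):
    x, y = p
    if seg[0] == "v":
        return x == seg[1]
    _, b, x0, x1 = seg
    return y == b and x0 <= x <= x1

print(greedy_hitting_set(vlines + hsegs, candidates, hits))
```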
Integrating segmentation methods from the Insight Toolkit into a visualization application.
Martin, Ken; Ibáñez, Luis; Avila, Lisa; Barré, Sébastien; Kaspersen, Jon H
2005-12-01
The Insight Toolkit (ITK) initiative from the National Library of Medicine has provided a suite of state-of-the-art segmentation and registration algorithms ideally suited to volume visualization and analysis. A volume visualization application that effectively utilizes these algorithms provides many benefits: it allows access to ITK functionality for non-programmers, it creates a vehicle for sharing and comparing segmentation techniques, and it serves as a visual debugger for algorithm developers. This paper describes the integration of image processing functionalities provided by ITK into VolView, a visualization application for high-performance volume rendering. A free version of this visualization application is publicly available and can be accessed via the online version of this paper. The process for developing ITK plugins for VolView according to the publicly available API is described in detail, and an application of ITK VolView plugins to the segmentation of Abdominal Aortic Aneurysms (AAAs) is presented. The source code of the ITK plugins is also publicly available and is included in the online version.
A Unified Mathematical Approach to Image Analysis.
1987-08-31
describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.
Price Analysis and the Effects of Competition.
1985-10-01
state of the market. For instance, is it possible that competition can squeeze a company to greater efficiency or lower profits in the short run, but...dual-source competition. The Stackelberg model recognizes two types of firm behavior. A firm may choose to be a leader and pursue a dominant market...strategies in areas of potential competition. In this instance, the follower firm will serve that segment of the market that the leader firm cannot
Constrained Deep Weak Supervision for Histopathology Image Segmentation.
Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan
2017-11-01
In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This work is set within a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
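The paper's FCN training procedure is not reproduced here; the NumPy sketch below only illustrates the two weak-supervision ingredients named in the abstract: a bag-level (image-level) prediction pooled from per-pixel probabilities under the MIL assumption, and an area constraint that penalizes deviation of the predicted positive area from a rough prior. The generalized-mean pooling exponent, the loss form, and the synthetic data are assumptions, not the authors' exact formulation.

```python
import numpy as np

def bag_probability(pixel_probs, r=4.0):
    """Generalized-mean (softmax-like) MIL pooling of per-pixel probabilities
    into one image-level probability; large r approaches the max over pixels."""
    p = np.clip(pixel_probs, 1e-6, 1.0 - 1e-6)
    return float(np.mean(p ** r) ** (1.0 / r))

def weak_supervision_loss(pixel_probs, image_label, area_prior=None, lam=1.0):
    """Cross-entropy on the pooled bag prediction plus an optional area
    constraint: squared gap between the mean predicted area and a rough prior."""
    y_hat = np.clip(bag_probability(pixel_probs), 1e-6, 1.0 - 1e-6)
    loss = -(image_label * np.log(y_hat) + (1 - image_label) * np.log(1.0 - y_hat))
    if area_prior is not None:
        loss += lam * (pixel_probs.mean() - area_prior) ** 2
    return loss

probs = np.random.default_rng(0).uniform(0.0, 0.3, size=(64, 64))
probs[20:40, 20:40] = 0.9                      # a simulated "cancerous" patch
print(weak_supervision_loss(probs, image_label=1, area_prior=0.1))
```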
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
The segment polarity network is a robust developmental module
NASA Astrophysics Data System (ADS)
von Dassow, George; Meir, Eli; Munro, Edwin M.; Odell, Garrett M.
2000-07-01
All insects possess homologous segments, but segment specification differs radically among insect orders. In Drosophila, maternal morphogens control the patterned activation of gap genes, which encode transcriptional regulators that shape the patterned expression of pair-rule genes. This patterning cascade takes place before cellularization. Pair-rule gene products subsequently `imprint' segment polarity genes with reiterated patterns, thus defining the primordial segments. This mechanism must be greatly modified in insect groups in which many segments emerge only after cellularization. In beetles and parasitic wasps, for instance, pair-rule homologues are expressed in patterns consistent with roles during segmentation, but these patterns emerge within cellular fields. In contrast, although in locusts pair-rule homologues may not control segmentation, some segment polarity genes and their interactions are conserved. Perhaps segmentation is modular, with each module autonomously expressing a characteristic intrinsic behaviour in response to transient stimuli. If so, evolution could rearrange inputs to modules without changing their intrinsic behaviours. Here we suggest, using computer simulations, that the Drosophila segment polarity genes constitute such a module, and that this module is resistant to variations in the kinetic constants that govern its behaviour.
Simulating the 2012 High Plains Drought Using Three Single Column Models (SCM)
NASA Astrophysics Data System (ADS)
Medina, I. D.; Baker, I. T.; Denning, S.; Dazlich, D. A.
2015-12-01
The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought and perform numerical simulations using three single column model (SCM) versions of BUGS5 (the Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)). In the first version of BUGS5, the model is used in its standard bulk setting (a single atmospheric column coupled to a single instance of SiB3); in the second, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud-resolving model (CRM) consisting of 32 atmospheric columns, replaces the single CSU GCM atmospheric parameterization and is coupled to a single instance of SiB3; and in the third version of BUGS5, an instance of SiB3 is coupled to each CRM column of the SP-CAM (32 CRM columns coupled to 32 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated by all three versions of BUGS5, differences in simulated energy and moisture fluxes are computed between the 2011 and 2012 periods and are compared to those calculated using observational data from the AmeriFlux Tower Network for the same period at the ARM Site in Lamont, OK. This research will provide a better understanding of model deficiencies in reproducing and predicting droughts in the future, which is essential to the economic, ecological, and social well-being of the High Plains.
Segmented strings and the McMillan map
Gubser, Steven S.; Parikh, Sarthak; Witaszczyk, Przemek
2016-07-25
We present new exact solutions describing motions of closed segmented strings in AdS 3 in terms of elliptic functions. The existence of analytic expressions is due to the integrability of the classical equations of motion, which in our examples reduce to instances of the McMillan map. Here, we also obtain a discrete evolution rule for the motion in AdS 3 of arbitrary bound states of fundamental strings and D1-branes in the test approximation.
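Since the abstract only names the McMillan map, the short check below illustrates its integrability numerically in one standard form, x_{n+1} + x_{n-1} = 2*mu*x_n / (1 + x_n^2), whose biquadratic invariant I(x, y) = x^2 y^2 + x^2 + y^2 - 2*mu*x*y is conserved along orbits. This is a generic property of the map, not the string solutions of the paper, and the parameter values are arbitrary.

```python
import numpy as np

def mcmillan_step(x_prev, x_curr, mu):
    """One step of the McMillan map: x_next = -x_prev + 2*mu*x_curr/(1 + x_curr**2)."""
    return x_curr, -x_prev + 2.0 * mu * x_curr / (1.0 + x_curr ** 2)

def invariant(x, y, mu):
    """Biquadratic quantity conserved by the McMillan map."""
    return x * x * y * y + x * x + y * y - 2.0 * mu * x * y

mu, x, y = 0.8, 0.2, 0.7
values = []
for _ in range(1000):
    values.append(invariant(x, y, mu))
    x, y = mcmillan_step(x, y, mu)
print("spread of the invariant over 1000 steps:", np.ptp(values))  # floating-point noise only
```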
Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
Effect of improving the usability of an e-learning resource: a randomized trial.
Davids, Mogamat Razeen; Chikte, Usuf M E; Halperin, Mitchell L
2014-06-01
Optimizing the usability of e-learning materials is necessary to reduce extraneous cognitive load and maximize their potential educational impact. However, this is often neglected, especially when time and other resources are limited. We conducted a randomized trial to investigate whether a usability evaluation of our multimedia e-learning resource, followed by fixing of all problems identified, would translate into improvements in usability parameters and learning by medical residents. Two iterations of our e-learning resource [version 1 (V1) and version 2 (V2)] were compared. V1 was the first fully functional version and V2 was the revised version after all identified usability problems were addressed. Residents in internal medicine and anesthesiology were randomly assigned to one of the versions. Usability was evaluated by having participants complete a user satisfaction questionnaire and by recording and analyzing their interactions with the application. The effect on learning was assessed by questions designed to test the retention and transfer of knowledge. Participants reported high levels of satisfaction with both versions, with good ratings on the System Usability Scale and adjective rating scale. In contrast, analysis of video recordings revealed significant differences in the occurrence of serious usability problems between the two versions, in particular in the interactive HandsOn case with its treatment simulation, where there was a median of five serious problem instances (range: 0-50) recorded per participant for V1 and zero instances (range: 0-1) for V2 (P < 0.001). There were no differences in tests of retention or transfer of knowledge between the two versions. In conclusion, usability evaluation followed by a redesign of our e-learning resource resulted in significant improvements in usability. This is likely to translate into improved motivation and willingness to engage with the learning material. In this population of relatively high-knowledge participants, learning scores were similar across the two versions. Copyright © 2014 The American Physiological Society.
Effect of improving the usability of an e-learning resource: a randomized trial
Chikte, Usuf M. E.; Halperin, Mitchell L.
2014-01-01
Optimizing the usability of e-learning materials is necessary to reduce extraneous cognitive load and maximize their potential educational impact. However, this is often neglected, especially when time and other resources are limited. We conducted a randomized trial to investigate whether a usability evaluation of our multimedia e-learning resource, followed by fixing of all problems identified, would translate into improvements in usability parameters and learning by medical residents. Two iterations of our e-learning resource [version 1 (V1) and version 2 (V2)] were compared. V1 was the first fully functional version and V2 was the revised version after all identified usability problems were addressed. Residents in internal medicine and anesthesiology were randomly assigned to one of the versions. Usability was evaluated by having participants complete a user satisfaction questionnaire and by recording and analyzing their interactions with the application. The effect on learning was assessed by questions designed to test the retention and transfer of knowledge. Participants reported high levels of satisfaction with both versions, with good ratings on the System Usability Scale and adjective rating scale. In contrast, analysis of video recordings revealed significant differences in the occurrence of serious usability problems between the two versions, in particular in the interactive HandsOn case with its treatment simulation, where there was a median of five serious problem instances (range: 0–50) recorded per participant for V1 and zero instances (range: 0–1) for V2 (P < 0.001). There were no differences in tests of retention or transfer of knowledge between the two versions. In conclusion, usability evaluation followed by a redesign of our e-learning resource resulted in significant improvements in usability. This is likely to translate into improved motivation and willingness to engage with the learning material. In this population of relatively high-knowledge participants, learning scores were similar across the two versions. PMID:24913451
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
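For readers unfamiliar with the overlap metrics reported here, the sketch below computes the Dice coefficient and the symmetric Hausdorff distance between two binary segmentation masks with NumPy and SciPy. It is a generic illustration of the metrics on toy data, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice = 2|A intersect B| / (|A| + |B|) for two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the foreground voxel coordinate sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two toy habenula-like blobs, shifted by one voxel along the first axis.
m1 = np.zeros((32, 32, 32), bool)
m1[10:16, 10:16, 10:16] = True
m2 = np.zeros((32, 32, 32), bool)
m2[11:17, 10:16, 10:16] = True
print(f"Dice = {dice_coefficient(m1, m2):.3f}, Hausdorff = {hausdorff_distance(m1, m2):.1f}")
```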
Simulating the 2012 High Plains drought using three single column versions (SCM) of BUGS5
NASA Astrophysics Data System (ADS)
Medina, I. D.; Denning, S.
2013-12-01
The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we will focus on the 2012 High Plains drought and will perform numerical simulations using three single column versions (SCM) of BUGS5 (the Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)) at multiple sites overlying the Ogallala Aquifer for the 2011-2012 period. In the first version of BUGS5, the model will be used in its standard bulk setting (a single atmospheric column coupled to a single instance of SiB3); in the second, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud-resolving model (CRM) consisting of 64 atmospheric columns, will replace the single CSU GCM atmospheric parameterization and will be coupled to a single instance of SiB3; and in the third version of BUGS5, an instance of SiB3 will be coupled to each CRM column of the SP-CAM (64 CRM columns coupled to 64 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of BUGS5, differences in simulated energy and moisture fluxes will be computed between the 2011 and 2012 periods and will be compared to differences calculated using observational data from the AmeriFlux tower network for the same period. These results will give some insight into the land-atmosphere feedbacks GCMs may produce when atmospheric and land surface heterogeneity are included within a single framework. Furthermore, this research will provide a better understanding of model deficiencies in reproducing and predicting droughts in the future, which is essential to the economic, ecological, and social well-being of the High Plains.
A hybrid intelligence approach to artifact recognition in digital publishing
NASA Astrophysics Data System (ADS)
Vega-Riveros, J. Fernando; Santos Villalobos, Hector J.
2006-02-01
The system presented integrates rule-based and case-based reasoning for artifact recognition in Digital Publishing. In Variable Data Printing (VDP), human proofing can be prohibitively expensive, since a job could contain millions of different instances that may contain two types of artifacts: 1) evident defects, such as a text overflow or overlapping; and 2) style-dependent artifacts, subtle defects that show up as inconsistencies with regard to the original job design. We designed a Knowledge-Based Artifact Recognition tool for document segmentation, layout understanding, artifact detection, and document design quality assessment. Document evaluation is constrained by reference to one instance of the VDP job proofed by a human expert against the remaining instances. Fundamental rules of document design are used in the rule-based component for document segmentation and layout understanding. Ambiguities in the design principles not covered by the rule-based system are analyzed by case-based reasoning, using the Nearest Neighbor Algorithm, where features from previous jobs are used to detect artifacts and inconsistencies within the document layout. We used a subset of XSL-FO and assembled a set of 44 document samples. The system detected all the job layout changes, while obtaining an overall average accuracy of 84.56%, with the highest accuracy, 92.82%, for overlapping and the lowest, 66.7%, for lack of white space.
Visualization Software for VisIT Java Client
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billings, Jay Jay; Smith, Robert W
The VisIT Java Client (JVC) library is a lightweight thin client designed and written purely in Java (the Python and JavaScript versions of the library use the same concept). It communicates with any new, unmodified standalone version of VisIT, a high-performance-computing parallel visualization toolkit, over traditional or web sockets, and dynamically determines the capabilities of the running VisIT instance, whether local or remote.
Practical optimization of Steiner trees via the cavity method
NASA Astrophysics Data System (ADS)
Braunstein, Alfredo; Muntoni, Anna
2016-07-01
The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself limit substantially the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound, which affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to be able to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration allows Max-Sum to be transformed into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance of the proposed approach on the challenge was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
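The Max-Sum equations are too involved to reproduce from the abstract alone; as a point of reference, here is a minimal sketch of one classic greedy heuristic of the kind the authors combine with Max-Sum: a shortest-path heuristic that grows a tree by repeatedly attaching the nearest unconnected terminal. It uses networkx on a toy grid graph and is an assumption-laden illustration, not the authors' code or their specific heuristics.

```python
import networkx as nx

def greedy_steiner_tree(graph, terminals, weight="weight"):
    """Grow a Steiner tree: start from one terminal, repeatedly attach the
    closest remaining terminal via a shortest path (a greedy heuristic,
    not an exact solver)."""
    terminals = list(terminals)
    tree_nodes = {terminals[0]}
    tree_edges = set()
    remaining = set(terminals[1:])
    while remaining:
        lengths, paths = nx.multi_source_dijkstra(graph, tree_nodes, weight=weight)
        target = min(remaining, key=lambda t: lengths[t])
        path = paths[target]
        tree_edges.update(zip(path[:-1], path[1:]))
        tree_nodes.update(path)
        remaining.discard(target)
    return tree_edges

G = nx.grid_2d_graph(5, 5)          # unit-weight 5x5 grid as a toy network
for u, v in G.edges:
    G[u][v]["weight"] = 1.0
terminals = [(0, 0), (4, 4), (0, 4)]
print(greedy_steiner_tree(G, terminals))
```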
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
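The abstract gives no formulas, so the sketch below illustrates the general recipe it describes: build a sparse k-nearest-neighbour similarity matrix over per-pixel features and feed it to spectral clustering, here via scikit-learn. The feature choice, neighbour count, and cluster count are placeholder assumptions, and no multi-scale fusion is attempted.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
image = rng.uniform(size=(32, 32))          # stand-in grey-level image
image[8:24, 8:24] += 1.0                    # a brighter square region to segment out

# Per-pixel features: intensity plus (scaled) spatial coordinates.
yy, xx = np.mgrid[0:32, 0:32]
features = np.column_stack([image.ravel(), 0.05 * xx.ravel(), 0.05 * yy.ravel()])

# A sparse k-NN affinity keeps the similarity matrix tractable for many pixels.
knn = kneighbors_graph(features, n_neighbors=10, mode="connectivity", include_self=True)
affinity = 0.5 * (knn + knn.T)              # symmetrize

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            assign_labels="kmeans", random_state=0).fit_predict(affinity)
segmentation = labels.reshape(image.shape)
print(np.bincount(labels))                  # pixels assigned to each segment
```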
Boosting instance prototypes to detect local dermoscopic features.
Situ, Ning; Yuan, Xiaojing; Zouridakis, George
2010-01-01
Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.
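The abstract names diverse density (DD) without formulas; below is a small NumPy sketch of the standard noisy-OR diverse-density score of a candidate instance prototype, which is high when every positive bag contains at least one instance close to the prototype and every negative-bag instance is far from it. The Gaussian width and the synthetic bags are illustrative assumptions, and the AdaBoost/SVM stages of the paper are not shown.

```python
import numpy as np

def instance_prob(instances, prototype, sigma=1.0):
    """Probability that each instance 'matches' the prototype (Gaussian kernel)."""
    d2 = np.sum((instances - prototype) ** 2, axis=1)
    return np.exp(-d2 / sigma ** 2)

def diverse_density(prototype, positive_bags, negative_bags, sigma=1.0):
    """Noisy-OR diverse density of a candidate prototype over labelled bags."""
    score = 1.0
    for bag in positive_bags:      # at least one instance in each positive bag should match
        score *= 1.0 - np.prod(1.0 - instance_prob(bag, prototype, sigma))
    for bag in negative_bags:      # no instance in a negative bag should match
        score *= np.prod(1.0 - instance_prob(bag, prototype, sigma))
    return score

rng = np.random.default_rng(1)
pos_bags = [np.vstack([rng.normal(0, 1, (5, 2)), [[3.0, 3.0]]]) for _ in range(4)]
neg_bags = [rng.normal(0, 1, (6, 2)) for _ in range(4)]
print(diverse_density(np.array([3.0, 3.0]), pos_bags, neg_bags))   # near the shared concept: high
print(diverse_density(np.array([0.0, 0.0]), pos_bags, neg_bags))   # overlaps negative bags: low
```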
Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.
Annual Industrial Capabilities Report to Congress
2007-02-01
5.1 Aircraft Sector Industrial Summary; 5.2 Command... industry partners to encourage long-term contractor workforce improvements. Industry segment-level baseline assessments (aircraft; command, control... For instance, within aircraft major defense acquisition programs (MDAPs), research, development, test, and evaluation (RDT&E) funding is steadily
Static omnidirectional stereoscopic display system
NASA Astrophysics Data System (ADS)
Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.
1999-11-01
A unique three-camera stereoscopic omnidirectional viewing system based on the periscopic panoramic camera described in the 11/98 SPIE proceedings (AM13). The three panoramic cameras are combined equilaterally so that each leg of the triangle approximates the human inter-ocular spacing, allowing each panoramic camera to view 240 degrees of the panoramic scene, the most counterclockwise 120 degrees being the left-eye field and the other 120-degree segment being the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal. In the first instance two-color spectacles are used in viewing the display; in the second instance LCD goggles are used to differentiate the right/left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120-degree segments of the panoramic field of view with two fields per frame, Field A being the left-eye display and Field B the right-eye display.
Zhuang, Qiang; Li, Wenjuan; Benda, Christina; Huang, Zhijian; Ahmed, Tanveer; Liu, Ping; Guo, Xiangpeng; Ibañez, David P; Luo, Zhiwei; Zhang, Meng; Abdul, Mazid Md; Yang, Zhongzhou; Yang, Jiayin; Huang, Yinghua; Zhang, Hui; Huang, Dehao; Zhou, Jianguo; Zhong, Xiaofen; Zhu, Xihua; Fu, Xiuling; Fan, Wenxia; Liu, Yulin; Xu, Yan; Ward, Carl; Khan, Muhammad Jadoon; Kanwal, Shahzina; Mirza, Bushra; Tortorella, Micky D; Tse, Hung-Fat; Chen, Jiayu; Qin, Baoming; Bao, Xichen; Gao, Shaorong; Hutchins, Andrew P; Esteban, Miguel A
2018-06-15
In the version of this Article originally published, in Fig. 2c, the '+' sign and 'OSKM' were superimposed in the label '+OSKM'. In Fig. 4e, in the labels, all instances of 'Ant' should have been 'Anti-'. And, in Fig. 7a, the label '0.0' was misplaced; it should have been on the colour scale bar. These figures have now been corrected in the online versions.
Deformable segmentation via sparse representation and dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N
2012-10-01
"Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
A neural network model of semantic memory linking feature-based object representation and words.
Cuppini, C; Magosso, E; Ursino, M
2009-06-01
Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words and segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).
Taking Proof based Verified Computation a Few Steps Closer to Practicality (extended version)
2012-06-27
general s² + s, in general. V's per-instance CPU costs: issue commit queries, (e + 2c) · n/β; process commit responses, d; issue PCP... size (# of instances) (§2.3). Legend: e, cost of encrypting an element in F; d, cost of decrypting an encrypted element; f, cost of multiplying in F; h, cost of... domain D (such as the integers, Z, or the rationals, Q) to equivalent constraints over a finite field, the programmer or compiler performs... We suspect
Induced subgraph searching for geometric model fitting
NASA Astrophysics Data System (ADS)
Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi
2017-11-01
In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs that includes the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.
[Effect of the ISS Russian segment configuration on the service module radiation environment].
Mitrikas, V G
2011-01-01
Mathematical modeling of variations in the Service module radiation environment as a function of ISS Russian segment configuration was carried out using models of the RS modules and a spherical humanoid phantom. ISS reconfiguration significantly impacted only the phantom brought into the transfer compartment (ExT). The Radiation Safety Service prohibition for cosmonauts to stay in this compartment during solar flare events remains valid. In all other instances, the error of dose estimation is higher than that of dose estimation performed with consideration of the ISS RS configuration.
Learning fuzzy information in a hybrid connectionist, symbolic model
NASA Technical Reports Server (NTRS)
Romaniuk, Steve G.; Hall, Lawrence O.
1993-01-01
An instance-based learning system is presented. SC-net is a fuzzy hybrid connectionist, symbolic learning system. It remembers some examples and makes groups of examples into exemplars. All real-valued attributes are represented as fuzzy sets. The network representation and learning method is described. To illustrate this approach to learning in fuzzy domains, an example of segmenting magnetic resonance images of the brain is discussed. Clearly, the boundaries between human tissues are ill-defined or fuzzy. Example fuzzy rules for recognition are generated. Segmentations are presented that provide results that radiologists find useful.
Magazzù, L; Forn-Díaz, P; Belyansky, R; Orgiazzi, J-L; Yurtalan, M A; Otto, M R; Lupascu, A; Wilson, C M; Grifoni, M
2018-06-07
The original PDF and HTML versions of this Article omitted the ORCID IDs of the authors L. Magazzù and P. Forn-Díaz (L. Magazzù: 0000-0002-4377-8387; P. Forn-Díaz: 0000-0003-4365-5157). The original PDF version of this Article contained errors in Eqs. (2), (6), (13), (14), (25), (26). These equations were missing all instances of 'Γ' and 'Δ', which are correctly displayed in the HTML version. Similarly, the inline equation in the third sentence of the caption of Fig. 2 was missing the left-hand term 'Ω'. The original HTML version of this Article contained errors in Table 1. The corrected version of the sixth row of the first column states 'Figure 2' instead of the original, incorrect 'Figure', and the corrected version of the ninth row of the first column states 'Figure 3' instead of the original, incorrect 'Figure'. This has been corrected in both the PDF and HTML versions of the Article.
Preventing Abuse in Federal Student Aid: Community College Practices
ERIC Educational Resources Information Center
Baime, David S.; Mullin, Christopher M.
2012-01-01
In recent months, some legislators, government agency officials, segments of the media, and campus administrators have called attention to perceived and proven instances of abuse of the federal student financial assistance programs. Concerns have focused on students enrolling in courses primarily to secure student financial aid funds rather than…
Proportional crosstalk correction for the segmented clover at iThemba LABS
NASA Astrophysics Data System (ADS)
Bucher, T. D.; Noncolela, S. P.; Lawrie, E. A.; Dinoko, T. R. S.; Easton, J. L.; Erasmus, N.; Lawrie, J. J.; Mthembu, S. H.; Mtshali, W. X.; Shirinda, O.; Orce, J. N.
2017-11-01
Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. Modern γ-ray spectrometers, like AGATA and GRETINA, are now built of new-generation segmented germanium detectors. These most advanced detectors are able to reconstruct the trajectory of a γ-ray inside the detector. They are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured energies on the segments for two- and higher-fold events. It affects all experiments which rely on the recorded segment energies. Furthermore, incorrectly recorded energies on the segments cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained.
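A minimal sketch of a proportional (linear) crosstalk correction, under the common assumption that the measured segment energies are a linear mixture of the true ones; the matrix values below are made up for illustration and are not the iThemba LABS calibration.

```python
import numpy as np

# Assumed linear model: measured = C @ true, where the off-diagonal entries of
# C are the (small, possibly negative) proportional crosstalk coefficients
# obtained from single-hit calibration data (illustrative values only).
C = np.array([[ 1.000, -0.004, -0.003],
              [-0.005,  1.000, -0.004],
              [-0.003, -0.005,  1.000]])

def correct_crosstalk(measured, crosstalk=C):
    """Recover per-segment energies by inverting the crosstalk matrix."""
    return np.linalg.solve(crosstalk, measured)

# Two-fold event: 300 keV and 700 keV deposited in segments 0 and 2.
true = np.array([300.0, 0.0, 700.0])
measured = C @ true                        # non-hit segment 1 reads negative
print(correct_crosstalk(measured))         # ~ [300, 0, 700]
print("sum before/after:", measured.sum(), correct_crosstalk(measured).sum())
```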
Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words.
Haddad, Laurice; Weiss, Yael; Katzir, Tami; Bitan, Tali
2017-01-01
Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader's engagement in morphological segmentation may depend on the language morphological richness and orthographic transparency, and the readers' reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew's rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provides an opportunity to study the interaction of orthographic transparency and morphological segmentation on the development of reading skills in a within-language design. Hebrew speaking 2nd ( N = 27) and 5th ( N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list half of the words were presented in the transparent version of the script (with diacritic marks), and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only for the transparent, pointed, script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only in bi-morphemic words. These results, showing greater morphological effects in second grade compared to fifth grade children suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger compared to older readers. Moreover, in contrast to the common hypothesis, our results show that morphemic segmentation does not compensate for the missing phonological information in a non-transparent orthography, but rather that morphological segmentation is most beneficial in the highly transparent script. These results are consistent with the idea that morphological and phonological segmentation processes occur simultaneously and do not constitute alternative pathways to visual word recognition.
Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.
NASA Astrophysics Data System (ADS)
Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.
2016-04-01
The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a new promise to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve the segmentation quality relative to single-image segmentation. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.
Validation of automated white matter hyperintensity segmentation.
Smart, Sean D; Firbank, Michael J; O'Brien, John T
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we had an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo
2018-01-01
Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with reservations, since incorrect results can mislead when interpreting regions of interest (ROI), decreasing the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed to be an alternative for general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software. A scientist can use it to obtain several segmentations of his or her photomicrographs. It is a reliable alternative to process different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROI in photomicrographs of two different materials, with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.
Scale model test results of several STOVL ventral nozzle concepts
NASA Technical Reports Server (NTRS)
Meyer, B. E.; Re, R. J.; Yetter, J. A.
1991-01-01
Short take-off and vertical landing (STOVL) ventral nozzle concepts are investigated by means of a static cold flow scale model at a NASA facility. The internal aerodynamic performance characteristics of the cruise, transition, and vertical lift modes are considered for four ventral nozzle types. The nozzle configurations examined include those with: butterfly-type inner doors and vectoring exit vanes; circumferential inner doors and thrust vectoring vanes; a three-port segmented version with circumferential inner doors; and a two-port segmented version with cylindrical nozzle exit shells. During the testing, internal and external pressure is measured, and the thrust and flow coefficients and resultant vector angles are obtained. The inner door used for ventral nozzle flow control is found to affect performance negatively during the initial phase of transition. The best thrust performance is demonstrated by the two-port segmented ventral nozzle due to the elimination of the inner door.
Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.
Fukurai, H
1991-01-01
This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.
Inferring action structure and causal relationships in continuous sequences of human action.
Buchsbaum, Daphna; Griffiths, Thomas L; Plunkett, Dillon; Gopnik, Alison; Baldwin, Dare
2015-02-01
In the real world, causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined, as well as four experiments investigating human action segmentation and causal inference. We find that both people and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries. Copyright © 2014. Published by Elsevier Inc.
[Bioimpedometry and its utilization in dialysis therapy].
Lopot, František
2016-01-01
Measurement of living tissue impedance - bioimpedometry - started to be used in medicine some 50 years ago, at first exclusively for estimation of extracellular and intracellular compartment volumes. Its simplest single-frequency (50 kHz) version works directly with the measured impedance vector. Technically more sophisticated versions convert the measured impedance into volumes of the different body fluid compartments and also calculate principal markers of nutritional status (lean body mass, adipose tissue mass). The latest version, specifically developed for application in dialysis patients, includes body composition modelling and even provides an absolute value of overhydration (excess fluid). The use of bioimpedance for more precise estimation of residual glomerular filtration is still in the experimental phase. Segmental bioimpedance measurement, which should enable separate assessment of the hydration status of the trunk segment and of the ultrafiltration capacity of the peritoneum in peritoneal dialysis patients, is also not yet standardized. Key words: assessment - bioimpedance - excess fluid - fluid status - glomerular filtration - haemodialysis - nutritional status - peritoneal dialysis.
Anatomy-aware measurement of segmentation accuracy
NASA Astrophysics Data System (ADS)
Tizhoosh, H. R.; Othman, A. A.
2016-03-01
Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
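A minimal sketch of a zone-weighted Dice score in the spirit of the anatomy-aware extension described above. The weighting scheme, zone labels, and toy arrays are assumptions for illustration, not the authors' exact definition.

```python
import numpy as np

def weighted_dice(seg, gold, zone_weights):
    """Dice overlap where every voxel is weighted by the anatomical
    importance of the zone it belongs to (illustrative definition)."""
    w = zone_weights
    inter = np.sum(w * (seg & gold))
    return 2.0 * inter / (np.sum(w * seg) + np.sum(w * gold))

# Toy 1-D "image": zone 1 (e.g. an internal zone such as the urethral
# sphincter) is weighted 3x relative to the rest of the segment.
gold = np.array([0, 1, 1, 1, 1, 1, 0, 0], dtype=bool)
seg  = np.array([0, 0, 1, 1, 1, 1, 1, 0], dtype=bool)
zones   = np.array([0, 0, 1, 1, 0, 0, 0, 0])          # anatomical zone labels
weights = np.where(zones == 1, 3.0, 1.0)

print("plain Dice   :", weighted_dice(seg, gold, np.ones_like(weights)))
print("weighted Dice:", weighted_dice(seg, gold, weights))
```

The same weighting can be applied to the Jaccard index by replacing the denominator with the weighted union.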
Enhancing atlas based segmentation with multiclass linear classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr
Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
Throckmorton, Thomas W; Gulotta, Lawrence V; Bonnarens, Frank O; Wright, Stephen A; Hartzell, Jeffrey L; Rozzi, William B; Hurst, Jason M; Frostick, Simon P; Sperling, John W
2015-06-01
The purpose of this study was to compare the accuracy of patient-specific guides for total shoulder arthroplasty (TSA) with traditional instrumentation in arthritic cadaver shoulders. We hypothesized that the patient-specific guides would place components more accurately than standard instrumentation. Seventy cadaver shoulders with radiographically confirmed arthritis were randomized in equal groups to 5 surgeons of varying experience levels who were not involved in development of the patient-specific guidance system. Specimens were then randomized to patient-specific guides based off of computed tomography scanning, standard instrumentation, and anatomic TSA or reverse TSA. Variances in version or inclination of more than 10° and more than 4 mm in starting point were considered indications of significant component malposition. TSA glenoid components placed with patient-specific guides averaged 5° of deviation from the intended position in version and 3° in inclination; those with standard instrumentation averaged 8° of deviation in version and 7° in inclination. These differences were significant for version (P = .04) and inclination (P = .01). Multivariate analysis of variance to compare the overall accuracy for the entire cohort (TSA and reverse TSA) revealed patient-specific guides to be significantly more accurate (P = .01) for the combined vectors of version and inclination. Patient-specific guides also had fewer instances of significant component malposition than standard instrumentation did. Patient-specific targeting guides were more accurate than traditional instrumentation and had fewer instances of component malposition for glenoid component placement in this multi-surgeon cadaver study of arthritic shoulders. Long-term clinical studies are needed to determine if these improvements produce improved functional outcomes. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Municipalities face many challenges in managing nonhazardous solid waste. For instance, landfills are reaching capacity throughout the country, tipping fees are increasing, and regulations affecting the disposal and recycling of municipal solid waste (MSW) are being promulgated ...
A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)
NASA Technical Reports Server (NTRS)
Kelly, J. J.; Abu-Khajeel, H.
1997-01-01
This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:
1. Single channel impedance calculation - linear version (SCIC)
2. Single channel impedance calculation - nonlinear version (SCICNL)
3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)
Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files and output files.
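The cascading of element transmission matrices into a composite impedance can be illustrated with a generic transfer-matrix sketch. This is not the ZKTL formulation itself: the matrix form, the rigid-backing assumption, and the numerical values below are illustrative only.

```python
import numpy as np

def layer_matrix(k, Z0, L):
    """Transmission (transfer) matrix of a uniform channel segment of length L,
    wavenumber k and characteristic impedance Z0, relating acoustic pressure
    and volume velocity at its two faces."""
    kL = k * L
    return np.array([[np.cos(kL), 1j * Z0 * np.sin(kL)],
                     [1j * np.sin(kL) / Z0, np.cos(kL)]])

def surface_impedance(layers):
    """Cascade the segment matrices and evaluate the input impedance of a
    rigidly backed stack (volume velocity = 0 at the back face)."""
    T = np.eye(2, dtype=complex)
    for k, Z0, L in layers:
        T = T @ layer_matrix(k, Z0, L)
    return T[0, 0] / T[1, 0]

# Illustrative two-segment liner element at 1 kHz (values are made up).
layers = [(2 * np.pi * 1000 / 343, 415.0, 0.02),
          (2 * np.pi * 1000 / 343, 600.0, 0.01)]
print("surface impedance:", surface_impedance(layers))
```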
Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows
NASA Astrophysics Data System (ADS)
Tokarev, V. V.
2018-06-01
The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Despite the fact that a ring structure is more reliable than a radial one, the operators of heat networks prefer to run them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization, with the heat supply system's excessive hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. An attempt to solve the problem by fully enumerating possible segmentation versions does not seem feasible for systems of realistic size. The article suggests a procedure for finding a rational segmentation of heat supply networks that limits the search to versions dividing the system into segments near the flow convergence nodes, with subsequent refinement of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure has been tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the feeding mains plane, and more than 5000 consumers. Application of the proposed segmentation procedure made it possible to find a version requiring 3% less hydraulic power in the heat supply system than the one found using the simultaneous segmentation method.
Pre-Calculus California Content Standards: Standards Deconstruction Project. Version 1.0
ERIC Educational Resources Information Center
Arnold, Bruce; Cliffe, Karen; Cubillo, Judy; Kracht, Brenda; Leaf, Abi; Legner, Mary; McGinity, Michelle; Orr, Michael; Rocha, Mario; Ross, Judy; Teegarden, Terrie; Thomson, Sarah; Villero, Geri
2008-01-01
This project was coordinated and funded by the California Partnership for Achieving Student Success (Cal-PASS). Cal-PASS is a data sharing system linking all segments of education. Its purpose is to improve student transition and success from one educational segment to the next. Cal-PASS' standards deconstruction project was initiated by the…
NASA Technical Reports Server (NTRS)
Hartz, Leslie
1994-01-01
Tool helps worker grip and move along large, smooth structure with no handgrips or footholds. Adheres to surface but easily released by actuating simple mechanism. Includes handle and segmented contact-adhesive pad. Bulk of pad made of soft plastic foam conforming to surface of structure. Each segment reinforced with rib. In sticking mode, ribs braced by side catches. In peeling mode, side catches retracted, and segmented adhesive pad loses its stiffness. Modified versions useful in inspecting hulls of ships and scaling walls in rescue operations.
Teaching Structured Design of Network Algorithms in Enhanced Versions of SQL
ERIC Educational Resources Information Center
de Brock, Bert
2004-01-01
From time to time developers of (database) applications will encounter, explicitly or implicitly, structures such as trees, graphs, and networks. Such applications can, for instance, relate to bills of material, organization charts, networks of (rail)roads, networks of conduit pipes (e.g., plumbing, electricity), telecom networks, and data…
Lemon, W C; Levine, R B
1997-06-01
During the metamorphosis of Manduca sexta the larval nervous system is reorganized to allow the generation of behaviors that are specific to the pupal and adult stages. In some instances, metamorphic changes in neurons that persist from the larval stage are segment-specific and lead to expression of segment-specific behavior in later stages. At the larval-pupal transition, the larval abdominal bending behavior, which is distributed throughout the abdomen, changes to the pupal gin trap behavior which is restricted to three abdominal segments. This study suggests that the neural circuit that underlies larval bending undergoes segment specific modifications to produce the segmentally restricted gin trap behavior. We show, however, that non-gin trap segments go through a developmental change similar to that seen in gin trap segments. Pupal-specific motor patterns are produced by stimulation of sensory neurons in abdominal segments that do not have gin traps and cannot produce the gin trap behavior. In particular, sensory stimulation in non-gin trap pupal segments evokes a motor response that is faster than the larval response and that displays the triphasic contralateral-ipsilateral-contralateral activity pattern that is typical of the pupal gin trap behavior. Despite the alteration of reflex activity in all segments, developmental changes in sensory neuron morphology are restricted to those segments that form gin traps. In non-gin trap segments, persistent sensory neurons do not expand their terminal arbors, as do sensory neurons in gin trap segments, yet are capable of eliciting gin trap-like motor responses.
Analysis of normal human retinal vascular network architecture using multifractal geometry
Ţălu, Ştefan; Stach, Sebastian; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina; Nicoară, Simona Delia
2017-01-01
AIM To apply the multifractal analysis method as a quantitative approach to a comprehensive description of the microvascular network architecture of the normal human retina. METHODS Fifty volunteers were enrolled in this study in the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and January 2014. A set of 100 segmented and skeletonised human retinal images, corresponding to normal states of the retina were studied. An automatic unsupervised method for retinal vessel segmentation was applied before multifractal analysis. The multifractal analysis of digital retinal images was made with computer algorithms, applying the standard box-counting method. Statistical analyses were performed using the GraphPad InStat software. RESULTS The architecture of normal human retinal microvascular network was able to be described using the multifractal geometry. The average of generalized dimensions (Dq) for q=0, 1, 2, the width of the multifractal spectrum (Δα=αmax − αmin) and the spectrum arms' heights difference (|Δf|) of the normal images were expressed as mean±standard deviation (SD): for segmented versions, D0=1.7014±0.0057; D1=1.6507±0.0058; D2=1.5772±0.0059; Δα=0.92441±0.0085; |Δf|= 0.1453±0.0051; for skeletonised versions, D0=1.6303±0.0051; D1=1.6012±0.0059; D2=1.5531±0.0058; Δα=0.65032±0.0162; |Δf|= 0.0238±0.0161. The average of generalized dimensions (Dq) for q=0, 1, 2, the width of the multifractal spectrum (Δα) and the spectrum arms' heights difference (|Δf|) of the segmented versions was slightly greater than the skeletonised versions. CONCLUSION The multifractal analysis of fundus photographs may be used as a quantitative parameter for the evaluation of the complex three-dimensional structure of the retinal microvasculature as a potential marker for early detection of topological changes associated with retinal diseases. PMID:28393036
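A minimal sketch of the standard box-counting estimate of the generalized dimensions Dq reported above. The box sizes and the toy test image are assumptions; an actual analysis would be run on the segmented or skeletonised retinal images.

```python
import numpy as np

def generalized_dimensions(img, qs=(0, 1, 2), sizes=(2, 4, 8, 16, 32)):
    """Estimate the generalized dimensions Dq of a binary (e.g. segmented
    vessel) image by standard box counting: Dq is the slope of
    ln(sum_i p_i^q)/(q-1) against ln(box size), with the q=1 case handled
    through the entropy term sum_i p_i ln p_i."""
    img = img.astype(bool)
    total = img.sum()
    logs = np.log(np.array(sizes, dtype=float))
    dims = {}
    for q in qs:
        ys = []
        for s in sizes:
            h, w = img.shape[0] // s, img.shape[1] // s
            counts = img[:h * s, :w * s].reshape(h, s, w, s).sum(axis=(1, 3))
            p = counts[counts > 0] / total            # box measures
            ys.append(np.sum(p * np.log(p)) if q == 1
                      else np.log(np.sum(p ** q)) / (q - 1))
        dims[q] = np.polyfit(logs, ys, 1)[0]
    return dims

# Sanity check on a filled square: D0, D1 and D2 should all be close to 2.
print(generalized_dimensions(np.ones((128, 128))))
```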
Attack tolerance of correlated time-varying social networks with well-defined communities
NASA Astrophysics Data System (ADS)
Sur, Souvik; Ganguly, Niloy; Mukherjee, Animesh
2015-02-01
In this paper, we investigate the efficiency and the robustness of information transmission for real-world social networks, modeled as time-varying instances, under targeted attack in shorter time spans. We observe that these quantities are markedly higher than those of the randomized versions of the considered networks. An important factor that drives this efficiency and robustness is the presence of short-time correlations across the network instances, which we quantify by a novel metric, the edge emergence factor, denoted ξ. We find that standard targeted attacks are not effective in collapsing this network structure. Remarkably, if the hourly community structures of the temporal network instances are attacked with the largest community attacked first, the second largest next and so on, the network soon collapses. This behavior, we show, is an outcome of the fact that the edge emergence factor bears a strong positive correlation with the size-ordered community structures.
Bit by Bit or All at Once? Splitting up the Inquiry Task to Promote Children's Scientific Reasoning
ERIC Educational Resources Information Center
Lazonder, Ard W.; Kamp, Ellen
2012-01-01
This study examined whether and why assigning children to a segmented inquiry task makes their investigations more productive. Sixty-one upper elementary-school pupils engaged in a simulation-based inquiry assignment either received a multivariable inquiry task (n = 21), a segmented version of this task that addressed the variables in successive…
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
NASA Technical Reports Server (NTRS)
Glasser, M. E.
1981-01-01
The Multilevel Diffusion Model (MDM) Version 5 was modified to include features of more recent versions. The MDM was used to predict in-cloud HCl concentrations for the April 12 launch of the space Shuttle (STS-1). The maximum centerline predictions were compared with measurements of maximum gaseous HCl obtained from aircraft passes through two segments of the fragmented shuttle ground cloud. The model over-predicted the maximum values for gaseous HCl in the lower cloud segment and portrayed the same rate of decay with time as the observed values. However, the decay with time of HCl maximum predicted by the MDM was more rapid than the observed decay for the higher cloud segment, causing the model to under-predict concentrations which were measured late in the life of the cloud. The causes of the tendency for the MDM to be conservative in over-estimating the HCl concentrations in the one case while tending to under-predict concentrations in the other case are discussed.
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user for building a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled, images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose presents increased robustness in the sense of better handling variations in illumination, variations in texture from different imaging modalities, providing more smooth and accurate segmentation borders, as well as handling better cluttered nuclei. PMID:23568787
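The matching criterion named above is normalized cross correlation (NCC). A minimal numpy sketch of template matching by NCC follows; the toy template, image, and sizes are made up, and a practical implementation would use an FFT-based or library routine rather than this brute-force loop.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template against every position of an
    image (brute force; fine for small illustrative arrays)."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            out[i, j] = np.mean((patch - patch.mean()) / (patch.std() + 1e-9) * t)
    return out

# Toy example: a bright blob (stand-in for a nucleus template) hidden in noise.
rng = np.random.default_rng(0)
template = np.zeros((5, 5)); template[1:4, 1:4] = 1.0
image = rng.normal(0, 0.2, (32, 32)); image[10:15, 20:25] += template

scores = ncc_map(image, template)
print("best match at", np.unravel_index(scores.argmax(), scores.shape))  # ~ (10, 20)
```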
Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas
NASA Astrophysics Data System (ADS)
Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.
2016-06-01
We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improved the overall accuracy by 2.3% on the point-based level and by 3.0% on the segment-based level, respectively, compared to a purely point-based classification.
Automatic segmentation of the left ventricle cavity and myocardium in MRI data.
Lynch, M; Ghita, O; Whelan, P F
2006-04-01
A novel approach for automatic segmentation has been developed to extract the epicardium and endocardium boundaries of the left ventricle (LV) of the heart. The developed segmentation scheme takes multi-slice and multi-phase magnetic resonance (MR) images of the heart, traversing the short-axis length from the base to the apex. Each image is taken at one instance in the heart's phase. The images are segmented using a diffusion-based filter followed by an unsupervised clustering technique, and the resulting labels are checked to locate the LV cavity. From cardiac anatomy, the closest pool of blood to the LV cavity is the right ventricle cavity. The wall between these two blood pools (the interventricular septum) is measured to give an approximate thickness for the myocardium. This value is used when a radial search is performed on a gradient image to find appropriate robust segments of the epicardium boundary. The robust edge segments are then joined using a normal spline curve. Experimental results are presented with very encouraging qualitative and quantitative results, and a comparison is made against the state-of-the-art level-sets method.
Guidelines to Data Processing Management.
ERIC Educational Resources Information Center
Data Processing Management Association, Park Ridge, IL.
This is a revised and updated version of an earlier published set of guidelines. As in the instance of the first edition, this volume contains contributions by some of the most capable consultants in the information processing field. Their comments are based on sound, proved judgment tested in day-to-day operations at installations throughout the…
NASA Technical Reports Server (NTRS)
Womble, M. E.; Potter, J. E.
1975-01-01
A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. Therefore, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering with all its advantages, to continuous measurement problems.
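Potter's square root formulas, mentioned above, update the state estimate and a square root of its error covariance for one discrete scalar measurement without ever forming the covariance itself. A minimal sketch in standard textbook form follows; the example system and numbers are made up.

```python
import numpy as np

def potter_update(x, S, h, z, r):
    """Potter square-root measurement update for one scalar measurement
    z = h @ x + v, var(v) = r.  S is a square root of the covariance,
    P = S @ S.T; the update never forms P explicitly."""
    phi = S.T @ h
    beta = 1.0 / (phi @ phi + r)
    gamma = 1.0 / (1.0 + np.sqrt(beta * r))
    K = beta * (S @ phi)                       # Kalman gain
    x_new = x + K * (z - h @ x)
    S_new = S - beta * gamma * np.outer(S @ phi, phi)
    return x_new, S_new

# Toy usage: one position measurement of a 2-state (position, velocity) system.
x = np.array([0.0, 1.0])
S = np.diag([1.0, 1.0])                        # P = I
h = np.array([1.0, 0.0])
x, S = potter_update(x, S, h, z=0.4, r=0.25)
print(x)
print(S @ S.T)                                 # equals the updated covariance P
```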
Oracle Applications Patch Administration Tool (PAT) Beta Version
DOE Office of Scientific and Technical Information (OSTI.GOV)
2002-01-04
PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Application patches. Its capabilities are outlined below.
Patch Analysis & Management:
- Administration / Patch Data Maintenance: track which Oracle Application patches have been applied to which database instance and machine.
- Patch Analysis: capture text files (readme.txt and driver files); form comparison detail; report comparison detail; PL/SQL package comparison detail; SQL scripts detail; JSP module comparison detail; parse and load the current applptch.txt (10.7) or load patch data from Oracle Application database patch tables (11i).
- Display Analysis: compare the patch to be applied with the currently installed Oracle Application Appl_top code versions; patch detail; module comparison detail; analyze and display one Oracle Application module patch.
Patch Management (automatic queue and execution of patches):
- Administration: parameter maintenance (settings for the directory structure of the Oracle Application appl_top); validation data maintenance (machine names and instances to patch).
- Operation / Patch Data Maintenance: schedule a patch (queue for later execution); run a patch (queue for immediate execution); review the patch logs; patch management reports.
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
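The master-source idea can be sketched generically: conditional blocks in a single master file are kept or dropped depending on the target configuration. The directive syntax below ("*IF key" / "*END") is invented for illustration and is not MAX's own notation.

```python
def extract_instance(master_lines, active_keys):
    """Emit the compilable instance for one target configuration by keeping
    unconditional lines plus blocks whose key is in active_keys."""
    out, keep = [], [True]
    for line in master_lines:
        if line.startswith("*IF "):
            keep.append(keep[-1] and line.split()[1] in active_keys)
        elif line.strip() == "*END":
            keep.pop()
        elif keep[-1]:
            out.append(line)
    return out

master = [
    "      SUBROUTINE SOLVE(A, N)",
    "*IF DOUBLE",
    "      DOUBLE PRECISION A(N)",
    "*END",
    "*IF SINGLE",
    "      REAL A(N)",
    "*END",
    "      RETURN",
    "      END",
]
print("\n".join(extract_instance(master, {"DOUBLE", "VAX"})))
```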
Demonstration of a compiled version of Shor's quantum factoring algorithm using photonic qubits.
Lu, Chao-Yang; Browne, Daniel E; Yang, Tao; Pan, Jian-Wei
2007-12-21
We report an experimental demonstration of a compiled version of Shor's algorithm using four photonic qubits. We choose the simplest instance of this algorithm, that is, factorization of N=15 in the case that the period r=2, and exploit a simplified linear optical network to coherently implement the quantum circuits of the modular exponential execution and semiclassical quantum Fourier transformation. During this computation, genuine multiparticle entanglement is observed, which well supports its quantum nature. This experiment represents an essential step toward full realization of Shor's algorithm and scalable linear optics quantum computation.
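The classical post-processing that turns the period found by the quantum part into factors is a short gcd computation. A small sketch for the N=15, r=2 instance; the choice a=11 is only one example of a base with period 2 modulo 15.

```python
from math import gcd

def shor_postprocess(N, a, r):
    """Classical post-processing of Shor's algorithm: given the period r of
    a**x mod N (found by the quantum part), recover nontrivial factors."""
    if r % 2 or pow(a, r // 2, N) == N - 1:
        return None                     # unlucky choice of a; retry with another base
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

# N = 15 with period r = 2, e.g. a = 11 (11**2 = 121 = 8*15 + 1).
print(shor_postprocess(15, 11, 2))      # -> (5, 3)
```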
ELM: the status of the 2010 eukaryotic linear motif resource
Gould, Cathryn M.; Diella, Francesca; Via, Allegra; Puntervoll, Pål; Gemünd, Christine; Chabanis-Davidson, Sophie; Michael, Sushama; Sayadi, Ahmed; Bryne, Jan Christian; Chica, Claudia; Seiler, Markus; Davey, Norman E.; Haslam, Niall; Weatheritt, Robert J.; Budd, Aidan; Hughes, Tim; Paś, Jakub; Rychlewski, Leszek; Travé, Gilles; Aasland, Rein; Helmer-Citterich, Manuela; Linding, Rune; Gibson, Toby J.
2010-01-01
Linear motifs are short segments of multidomain proteins that provide regulatory functions independently of protein tertiary structure. Much of intracellular signalling passes through protein modifications at linear motifs. Many thousands of linear motif instances, most notably phosphorylation sites, have now been reported. Although clearly very abundant, linear motifs are difficult to predict de novo in protein sequences due to the difficulty of obtaining robust statistical assessments. The ELM resource at http://elm.eu.org/ provides an expanding knowledge base, currently covering 146 known motifs, with annotation that includes >1300 experimentally reported instances. ELM is also an exploratory tool for suggesting new candidates of known linear motifs in proteins of interest. Information about protein domains, protein structure and native disorder, cellular and taxonomic contexts is used to reduce or deprecate false positive matches. Results are graphically displayed in a ‘Bar Code’ format, which also displays known instances from homologous proteins through a novel ‘Instance Mapper’ protocol based on PHI-BLAST. ELM server output provides links to the ELM annotation as well as to a number of remote resources. Using the links, researchers can explore the motifs, proteins, complex structures and associated literature to evaluate whether candidate motifs might be worth experimental investigation. PMID:19920119
An application of cluster detection to scene analysis
NASA Technical Reports Server (NTRS)
Rosenfeld, A. H.; Lee, Y. H.
1971-01-01
Certain arrangements of local features in a scene tend to group together and to be seen as units. It is suggested that in some instances, this phenomenon might be interpretable as a process of cluster detection in a graph-structured space derived from the scene. This idea is illustrated using a class of scenes that contain only horizontal and vertical line segments.
Overcoming the Effects of Variation in Infant Speech Segmentation: Influences of Word Familiarity
ERIC Educational Resources Information Center
Singh, Leher; Nestor, Sarah S.; Bortfeld, Heather
2008-01-01
Previous studies have shown that 7.5-month-olds can track and encode words in fluent speech, but they fail to equate instances of a word that contrast in talker gender, vocal affect, and fundamental frequency. By 10.5 months, they succeed at generalizing across such variability, marking a clear transition period during which infants' word…
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Tek, Hüseyin; Aach, Til
2009-02-01
The segmentation of the hepatic vascular tree in computed tomography (CT) images is important for many applications such as surgical planning of oncological resections and living liver donations. In surgical planning, vessel segmentation is often used as basis to support the surgeon in the decision about the location of the cut to be performed and the extent of the liver to be removed, respectively. We present a novel approach to hepatic vessel segmentation that can be divided into two stages. First, we detect and delineate the core vessel components efficiently with a high specificity. Second, smaller vessel branches are segmented by a robust vessel tracking technique based on a medialness filter response, which starts from the terminal points of the previously segmented vessels. Specifically, in the first phase major vessels are segmented using the globally optimal graphcuts algorithm in combination with foreground and background seed detection, while the computationally more demanding tracking approach needs to be applied only locally in areas of smaller vessels within the second stage. The method has been evaluated on contrast-enhanced liver CT scans from clinical routine showing promising results. In addition to the fully-automatic instance of this method, the vessel tracking technique can also be used to easily add missing branches/sub-trees to an already existing segmentation result by adding single seed-points.
NASA Astrophysics Data System (ADS)
Temme, A.; Langston, A. L.
2017-12-01
Traditional classification of channel networks is helpful for qualitative geologic and geomorphic inference. For instance, a dendritic network indicates no strong lithological control on where channels flow. However, an approach in which channel network structure is quantified is required to indicate, for instance, how increasing levels of lithological control lead, gradually or suddenly, to a trellis-type drainage network. Our contribution aims to aid this transition to a quantitative analysis of channel networks. First, to establish the range of typically occurring channel network properties, we selected 30 examples of traditional drainage network types from around the world. For each of these, we calculated a set of topological and geometric properties, such as total drainage length, average length of a channel segment and the average angle of intersection of channel segments. A decision tree was used to formalize the relation between these newly quantified properties on the one hand, and traditional network types on the other hand. Then, to explore how variations in lithological and geomorphic boundary conditions affect channel network structure, we ran a set of experiments with the landscape evolution model Landlab. For each simulated channel network, the same set of topological and geometric properties was calculated as for the 30 real-world channel networks. The latter were used for a first, visual evaluation to find out whether a simulated network that looked, for instance, rectangular, also had the same set of properties as real-world rectangular channel networks. Ultimately, the relation between these properties and the imposed lithological and geomorphic boundary conditions was explored using simple bivariate statistics.
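A hedged sketch of the decision-tree step described above, using scikit-learn. The feature names, toy values, and class labels are placeholders and do not come from the study's data.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-network properties: total drainage length, mean segment
# length, mean junction angle (all values invented for illustration).
features = ["total_length_km", "mean_segment_length_km", "mean_junction_angle_deg"]
X = [[120, 1.1, 55], [135, 1.3, 60], [95, 0.9, 88], [101, 1.0, 85],
     [150, 2.5, 65], [142, 2.2, 70]]
y = ["dendritic", "dendritic", "trellis", "trellis", "parallel", "parallel"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))   # human-readable split rules
print(tree.predict([[100, 1.0, 86]]))              # -> likely "trellis"
```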
Schnettler, Berta; Grunert, Klaus G; Miranda-Zapata, Edgardo; Orellana, Ligia; Sepúlveda, José; Lobos, Germán; Hueche, Clementina; Höger, Yesli
2017-06-01
The aims of this study were to test the relationships between food neophobia, satisfaction with food-related life and food technology neophobia, distinguishing consumer segments according to these variables and characterizing them according to willingness to purchase food produced with novel technologies. A survey was conducted with 372 university students (mean age = 20.4 years, SD = 2.4). The questionnaire included the Abbreviated version of the Food Technology Neophobia Scale (AFTNS), the Satisfaction with Food-related Life scale (SWFL), and a 6-item version of the Food Neophobia Scale (FNS). Using confirmatory factor analysis, it was confirmed that SWFL correlated inversely with FNS, whereas FNS correlated inversely with AFTNS. No relationship was found between SWFL and AFTNS. Two main segments were identified using cluster analysis; these segments differed according to gender and family size. Group 1 (57.8%) possessed higher AFTNS and FNS scores than Group 2 (28.5%). However, these groups did not differ in their SWFL scores. Group 1 was less willing to purchase foods produced with new technologies than Group 2. The AFTNS and the 6-item version of the FNS are suitable instruments to measure acceptance of foods produced using new technologies in South American developing countries. The AFTNS constitutes a parsimonious alternative for the international study of food technology neophobia. Copyright © 2017 Elsevier Ltd. All rights reserved.
1983-09-01
[Extraction-garbled report front matter: GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation, Version 3; the BDM Corporation; final technical report, February 81 - July 83. ... the t1 and t2 directions on the source patch. 3. METHOD: The electric field at a segment observation point due to the source patch j is given by ... (equation lost in extraction)]
Howell, Peter; Sackin, Stevie; Glenn, Kazan
2007-01-01
This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This and the following article together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second the segments are categorized. The units that are segmented are words. Here, assessments by human judges on the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, categorization of the words as fluent, repetition or prolongation, and duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878
Appliance Efficiency Standards and Price Discrimination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spurlock, Cecily Anna
2013-05-08
I explore the effects of two simultaneous changes in minimum energy efficiency and ENERGY STAR standards for clothes washers. Adapting the Mussa and Rosen (1978) and Ronnen (1991) second-degree price discrimination model, I demonstrate that clothes washer prices and menus adjusted to the new standards in patterns consistent with a market in which firms had been price discriminating. In particular, I show evidence of discontinuous price drops at the time the standards were imposed, driven largely by mid-low efficiency segments of the market. The price discrimination model predicts this result. On the other hand, in a perfectly competitive market, prices should increase for these market segments. Additionally, new models proliferated in the highest efficiency market segment following the standard changes. Finally, I show that firms appeared to use different adaptation strategies at the two instances of the standards changing.
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Forest Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy performance of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
Structure Inference from Mobility Encounters
2013-10-20
world dataset, which contains 230K trajectories of taxi cabs in Beijing. Our algorithm extracts a pathlet dictionary containing around 130K...data set, frequently used pathlets in the dictionary represent driving segments chosen by many taxi cab drivers in Beijing, reflecting the joint wisdom...limitations on storage and communication bandwidth. For instance, 50% of the Beijing Taxi Trajectories we employed in this study have at most one
Quantifying the Effectiveness of Crowd-Sourced Serious Games
2014-09-01
of All Metrics Used in the Thesis...Table 5.1 Average DAU and MAU for Selected Mobile, Social, and Online Games...of Sample VeriGames...Table 5.4 ER of Some Mobile, Social and Online Games and Developers...Table 5.5 ER...a code segment. A backend verification engine then combines the assertions produced from all related game instances and tries to obtain conditions
On a methodology for robust segmentation of nonideal iris images.
Schmid, Natalia A; Zuo, Jinyu
2010-06-01
Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms.
Lang, Irene M
2018-05-23
Guidelines and recommendations are designed to guide physicians in making decisions in daily practice. Guidelines provide a condensed summary of all available evidence at the time of the writing process. Recommendations take into account the risk-benefit ratio of particular diagnostic or therapeutic means and the impact on outcome, but not monetary or political considerations. Guidelines are not substitutes but are complementary to textbooks and cover the European Society of Cardiology (ESC) core curriculum topics. The level of evidence and the strength of recommendations of particular treatment options were recently re-weighted and graded according to predefined scales. Guideline endorsement and implementation strategies are based on abridged pocket guideline versions, electronic versions for digital applications, translations into national languages, or extracts with reference to the main changes since the last version. The present article represents a condensed summary of new and practically relevant items contained in the 2017 European Society of Cardiology (ESC) guidelines for the management of acute myocardial infarction in patients with ST-segment elevation, with reference to key citations.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, departs from NASA’s Kennedy Space Center in Florida, with two containers on railcars for transport to the Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, departs from the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, with two containers on railcars for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the RPSF. Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multiresolution algorithm is proposed that extends the expectation-maximization (EM) algorithm with scaled versions of the image obtained using Gaussian filtering and wavelet analysis. It is found to be less sensitive to noise and to produce more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 CT data sets of the human brain and compared with other works. The segmentation results show that the proposed approach achieves more promising results, and the results have been verified with doctors.
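As a point of reference for the EM-based tissue classification described above, the following is a minimal single-resolution EM for a 1-D Gaussian mixture over pixel intensities. It shows plain EM only; the paper's multiresolution extension with Gaussian filtering and wavelets is not reproduced, and the three-component split is an illustrative assumption.

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=50, seed=0):
    """Plain EM for a 1-D Gaussian mixture over pixel intensities x (1-D float array)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)       # initial means drawn from the data
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each pixel (log-domain for stability)
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * (d ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, r.argmax(axis=1)

# Usage sketch for a 2-D CT slice `img`: labels = em_gmm_1d(img.ravel().astype(float), k=3)[3]
# seg = labels.reshape(img.shape)
```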
Breast mass segmentation in mammography using plane fitting and dynamic programming.
Song, Enmin; Jiang, Luan; Jin, Renchao; Zhang, Lin; Yuan, Yuan; Li, Qiang
2009-07-01
Segmentation is an important and challenging task in a computer-aided diagnosis (CAD) system. Accurate segmentation could improve the accuracy in lesion detection and characterization. The objective of this study is to develop and test a new segmentation method that aims at improving the performance level of breast mass segmentation in mammography, which could be used to provide accurate features for classification. This automated segmentation method consists of two main steps and combines the edge gradient, the pixel intensity, as well as the shape characteristics of the lesions to achieve good segmentation results. First, a plane fitting method was applied to a background-trend corrected region-of-interest (ROI) of a mass to obtain the edge candidate points. Second, a dynamic programming technique was used to find the "optimal" contour of the mass from the edge candidate points. Area-based similarity measures based on the radiologist's manually marked annotation and the segmented region were employed as criteria to evaluate the performance level of the segmentation method. With the evaluation criteria, the new method was compared with 1) the dynamic programming method developed by Timp and Karssemeijer, and 2) the normalized cut segmentation method, based on 337 ROIs extracted from a publicly available image database. The experimental results indicate that our segmentation method can achieve a higher performance level than the other two methods, and the improvements in segmentation performance level were statistically significant. For instance, the mean overlap percentage for the new algorithm was 0.71, whereas those for Timp's dynamic programming method and the normalized cut segmentation method were 0.63 (P < .001) and 0.61 (P < .001), respectively. We developed a new segmentation method by use of plane fitting and dynamic programming, which achieved a relatively high performance level. The new segmentation method would be useful for improving the accuracy of computerized detection and classification of breast cancer in mammography.
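The second step, finding an "optimal" contour from edge-candidate points with dynamic programming, can be sketched generically as below. This is an illustrative closed-contour DP on a polar cost grid (one radius per angle, a smoothness penalty on radius jumps, brute-force closure over starting radii); the cost definition and constraints used by the authors differ, so the function and its parameters are assumptions.

```python
import numpy as np

def dp_optimal_contour(cost, smooth=1.0, max_jump=3):
    """cost[a, r]: cost of placing the boundary at radius index r for angle index a
    (e.g. negative edge gradient after background-trend correction and plane fitting).
    Returns one radius index per angle forming a closed contour."""
    A, R = cost.shape
    best = None
    for r0 in range(R):                               # brute-force closure over start radius
        acc = np.full((A, R), np.inf)
        back = np.zeros((A, R), dtype=int)
        acc[0, r0] = cost[0, r0]
        for a in range(1, A):
            for r in range(R):
                lo, hi = max(0, r - max_jump), min(R, r + max_jump + 1)
                prev = acc[a - 1, lo:hi] + smooth * np.abs(np.arange(lo, hi) - r)
                j = int(np.argmin(prev))
                acc[a, r] = cost[a, r] + prev[j]
                back[a, r] = lo + j
        final = acc[A - 1] + smooth * np.abs(np.arange(R) - r0)   # force closure back to r0
        r_end = int(np.argmin(final))
        if best is None or final[r_end] < best[0]:
            path = [r_end]
            for a in range(A - 1, 0, -1):
                path.append(back[a, path[-1]])
            best = (final[r_end], path[::-1])
    return np.array(best[1])
```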
ERIC Educational Resources Information Center
Association of Research Libraries, 2009
2009-01-01
Libraries are making diverse contributions to the development of many types of digital repositories, particularly those housing locally created digital content, including new digital objects or digitized versions of locally held works. In some instances, libraries are managing a repository and its related services entirely on their own, but often…
Testing in Service-Oriented Environments
2010-03-01
software releases (versions, service packs, vulnerability patches) for one common ESB during the 13-month period from January 1, 2008 through...impact on quality of service: Unlike traditional software components, a single instance of a web service can be used by multiple consumers. Since the...distributed, with heterogeneous hardware and software (SOA infrastructure, services, operating systems, and databases). Because of cost and security, it
A Hybrid Ant Colony Optimization Algorithm for the Extended Capacitated Arc Routing Problem.
Li-Ning Xing; Rohlfshagen, P; Ying-Wu Chen; Xin Yao
2011-08-01
The capacitated arc routing problem (CARP) is representative of numerous practical applications, and in order to widen its scope, we consider an extended version of this problem that entails both total service time and fixed investment costs. We subsequently propose a hybrid ant colony optimization (ACO) algorithm (HACOA) to solve instances of the extended CARP. This approach is characterized by the exploitation of heuristic information, adaptive parameters, and local optimization techniques: Two kinds of heuristic information, arc cluster information and arc priority information, are obtained continuously from the solutions sampled to guide the subsequent optimization process. The adaptive parameters ease the burden of choosing initial values and facilitate improved and more robust results. Finally, local optimization, based on the two-opt heuristic, is employed to improve the overall performance of the proposed algorithm. The resulting HACOA is tested on four sets of benchmark problems containing a total of 87 instances with up to 140 nodes and 380 arcs. In order to evaluate the effectiveness of the proposed method, some existing capacitated arc routing heuristics are extended to cope with the extended version of this problem; the experimental results indicate that the proposed ACO method outperforms these heuristics.
Bowsher, Julia H; Ang, Yuchen; Ferderer, Tanner; Meier, Rudolf
2013-04-01
Male abdomen appendages are a novel trait found within Sepsidae (Diptera). Here we demonstrate that they are likely to have evolved once, were lost three times, and then secondarily gained in one lineage. The developmental basis of these appendages was investigated by counting the number of histoblast cells in each abdominal segment in four species: two that represented the initial instance of appendage evolution, one that has secondarily gained appendages, and one species that did not have appendages. Males of all species with appendages have elevated cell counts for the fourth segment, which gives rise to the appendages. In Perochaeta dikowi, which reacquired the trait, the females also have elevated cell count on the fourth segment despite the fact that females do not develop appendages. The species without appendages has similar cell counts in all segments regardless of sex. These results suggest that the basis for appendage development is shared in males across all species, but the sexual dimorphism is regulated differently in P. dikowi. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Hadida, Jonathan; Desrosiers, Christian; Duong, Luc
2011-03-01
The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image guided surgery. Even though very robust and precise methods have been developed to help achieving a reliable segmentation (level sets, active contours, etc), it remains very time consuming both in terms of manual interactions and in terms of computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, a hierarchical clustering is achieved to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up a structural knowledge about the image. Several interactive features for segmentation are presented, for instance association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
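The first two stages of that pipeline (a piecewise-constant mask from histogram clustering, then indexing of space-connected regions) can be sketched with NumPy/SciPy. Quantile binning stands in for the paper's fast non-parametric histogram clustering, and the final hierarchical clustering over the region-adjacency graph is omitted.

```python
import numpy as np
from scipy import ndimage

def coarse_cta_regions(volume, n_bins=6):
    """Step 1: quantize intensities into a piecewise-constant mask (a stand-in for
    non-parametric histogram clustering). Step 2: index space-connected regions."""
    edges = np.quantile(volume, np.linspace(0.0, 1.0, n_bins + 1))
    mask = np.digitize(volume, edges[1:-1])           # piecewise-constant label per voxel
    regions = np.zeros_like(mask)
    next_id = 1
    for level in range(n_bins):
        lab, n = ndimage.label(mask == level)         # connected components within one level
        regions[lab > 0] = lab[lab > 0] + (next_id - 1)
        next_id += n
    return mask, regions
```

Interactive association or disassociation of structures, as described in the abstract, would then operate on the graph connecting these indexed regions.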
Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Yu, Q.; Helmholz, P.; Belton, D.; West, G.
2014-04-01
The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammar and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the different segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
A Streamlined Artificial Variable Free Version of Simplex Method
Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad
2015-01-01
This paper proposes a streamlined form of the simplex method which provides some notable benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version without making any reformulation to the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement. PMID:25767883
Torrance, Jaimie S; Wincenciak, Joanna; Hahn, Amanda C; DeBruine, Lisa M; Jones, Benedict C
2014-01-01
Although many studies have investigated the facial characteristics that influence perceptions of others' attractiveness and dominance, the majority of these studies have focused on either the effects of shape information or surface information alone. Consequently, the relative contributions of facial shape and surface characteristics to attractiveness and dominance perceptions are unclear. To address this issue, we investigated the relationships between ratings of original versions of faces and ratings of versions in which either surface information had been standardized (i.e., shape-only versions) or shape information had been standardized (i.e., surface-only versions). For attractiveness and dominance judgments of both male and female faces, ratings of shape-only and surface-only versions independently predicted ratings of the original versions of faces. The correlations between ratings of original and shape-only versions and between ratings of original and surface-only versions differed only in two instances. For male attractiveness, ratings of original versions were more strongly related to ratings of surface-only than shape-only versions, suggesting that surface information is particularly important for men's facial attractiveness. The opposite was true for female physical dominance, suggesting that shape information is particularly important for women's facial physical dominance. In summary, our results indicate that both facial shape and surface information contribute to judgments of others' attractiveness and dominance, suggesting that it may be important to consider both sources of information in research on these topics.
Hoyng, Lieke L; Frings, Virginie; Hoekstra, Otto S; Kenny, Laura M; Aboagye, Eric O; Boellaard, Ronald
2015-01-01
Positron emission tomography (PET) with (18)F-3'-deoxy-3'-fluorothymidine ([(18)F]FLT) can be used to assess tumour proliferation. A kinetic-filtering (KF) classification algorithm has been suggested for segmentation of tumours in dynamic [(18)F]FLT PET data. The aim of the present study was to evaluate KF segmentation and its test-retest performance in [(18)F]FLT PET in non-small cell lung cancer (NSCLC) patients. Nine NSCLC patients underwent two 60-min dynamic [(18)F]FLT PET scans within 7 days prior to treatment. Dynamic scans were reconstructed with filtered back projection (FBP) as well as with ordered subsets expectation maximisation (OSEM). Twenty-eight lesions were identified by an experienced physician. Segmentation was performed using KF applied to the dynamic data set and a source-to-background corrected 50% threshold (A50%) was applied to the sum image of the last three frames (45- to 60-min p.i.). Furthermore, several adaptations of KF were tested. Both for KF and A50% test-retest (TRT) variability of metabolically active tumour volume and standard uptake value (SUV) were evaluated. KF performed better on OSEM- than on FBP-reconstructed PET images. The original KF implementation segmented 15 out of 28 lesions, whereas A50% segmented each lesion. Adapted KF versions, however, were able to segment 26 out of 28 lesions. In the best performing adapted versions, metabolically active tumour volume and SUV TRT variability was similar to those of A50%. KF misclassified certain tumour areas as vertebrae or liver tissue, which was shown to be related to heterogeneous [(18)F]FLT uptake areas within the tumour. For [(18)F]FLT PET studies in NSCLC patients, KF and A50% show comparable tumour volume segmentation performance. The KF method needs, however, a site-specific optimisation. The A50% is therefore a good alternative for tumour segmentation in NSCLC [(18)F]FLT PET studies in multicentre studies. Yet, it was observed that KF has the potential to subsegment lesions in high and low proliferative areas.
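For orientation, one common form of a source-to-background corrected 50% threshold is sketched below; the exact background correction used in the study may differ, so the formula and the choice of background region are assumptions.

```python
import numpy as np

def a50_segmentation(suv, roi_mask, bg_mask):
    """Keep voxels inside the ROI whose uptake exceeds
    background + 50% of (peak - background) (one common A50% variant)."""
    bg = suv[bg_mask].mean()
    peak = suv[roi_mask].max()
    thr = bg + 0.5 * (peak - bg)
    return roi_mask & (suv >= thr)
```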
Detection of EEG electrodes in brain volumes.
Graffigna, Juan P; Gómez, M Eugenia; Bustos, José J
2010-01-01
This paper presents a method to detect 128 EEG electrodes in an imaging study and to merge them with the nuclear magnetic resonance (NMR) volume for better diagnosis. First, we propose three hypotheses to define a specific acquisition protocol in order to recognize the electrodes and to avoid distortions in the image. In the second instance, we describe a method for segmenting the electrodes. Finally, registration is performed between the electrode volume and the NMR volume.
CALIOP Version 3 Data Products: A Comparison to Version 2
NASA Technical Reports Server (NTRS)
Vaughan, Mark; Omar, Ali; Hunt, Bill; Getzewich, Brian; Tackett, Jason; Powell, Kathy; Avery, Melody; Kuehn, Ralph; Young, Stuart; Hu, Yong;
2010-01-01
After launch we discovered that the CALIOP daytime measurements were subject to thermally induced beam drift, and this caused the calibration to vary by as much as 30% during the course of a single daytime orbit segment. Using an algorithm developed by Powell et al. (2010), empirically derived correction factors are now computed in near real time as a function of orbit elapsed time, and these are used to compensate for the beam wandering effects.
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by only drawing a sparse set of contours would be needed. This paper describes such a framework as applied to a single object. Constrained by the additional information enabled by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation to a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.
A Review of Large Solid Rocket Motor Free Field Acoustics, Part I
NASA Technical Reports Server (NTRS)
Pilkey, Debbie; Kenny, Robert Jeremy
2011-01-01
At the ATK facility in Utah, large full-scale solid rocket motors are tested. The largest is a five-segment version of the Reusable Solid Rocket Motor, which is for use on future launch vehicles. Since 2006, acoustic measurements have been taken on large solid rocket motors at ATK. Both the four-segment RSRM and the five-segment RSRMV have been instrumented. Measurements are used to update acoustic prediction models and to correlate against vibration responses of the motor. The presentation focuses on two major sections: Part I, unique challenges associated with measuring rocket acoustics; and Part II, a summary of acoustic measurements over the past five years.
NASA Technical Reports Server (NTRS)
Davis, J. E.; Bonnett, W. S.; Medan, R. T.
1976-01-01
A computer program known as SOLN was developed as an independent segment of the NASA-Ames three-dimensional potential flow analysis system (POTFAN) for the solution of systems of linear algebraic equations. Methods used include: LU decomposition, Householder's method, a partitioning scheme, and a block successive relaxation method. Due to the independent modular nature of the program, it may be used by itself and not necessarily in conjunction with other segments of the POTFAN system.
Lee, Haofu; Nguyen, Alan; Hong, Christine; Hoang, Paul; Pham, John; Ting, Kang
2017-01-01
Introduction: The aims of this study were to evaluate the effects of rapid palatal expansion on the craniofacial skeleton of a patient with unilateral cleft lip and palate (UCLP) and to predict the points of force application for optimal expansion using a 3-dimensional finite element model. Methods: A 3-dimensional finite element model of the craniofacial complex with UCLP was generated from spiral computed tomographic scans with imaging software (Mimics, version 13.1; Materialise, Leuven, Belgium). This model was imported into the finite element solver (version 12.0; ANSYS, Canonsburg, Pa) to evaluate transverse expansion forces from rapid palatal expansion. Finite element analysis was performed with transverse expansion to achieve 5 mm of anterolateral expansion of the collapsed minor segment to simulate correction of the anterior crossbite in a patient with UCLP. Results: High-stress concentrations were observed at the body of the sphenoid, medial to the orbit, and at the inferior area of the zygomatic process of the maxilla. The craniofacial stress distribution was asymmetric, with higher stress levels on the cleft side. When forces were applied more anteriorly on the collapsed minor segment and more posteriorly on the major segment, there was greater expansion of the anterior region of the minor segment with minimal expansion of the major segment. Conclusions: The transverse expansion forces from rapid palatal expansion are distributed to the 3 maxillary buttresses. Finite element analysis is an appropriate tool to study and predict the points of force application for better controlled expansion in patients with UCLP. PMID:27476365
Lee, Haofu; Nguyen, Alan; Hong, Christine; Hoang, Paul; Pham, John; Ting, Kang
2016-08-01
The aims of this study were to evaluate the effects of rapid palatal expansion on the craniofacial skeleton of a patient with unilateral cleft lip and palate (UCLP) and to predict the points of force application for optimal expansion using a 3-dimensional finite element model. A 3-dimensional finite element model of the craniofacial complex with UCLP was generated from spiral computed tomographic scans with imaging software (Mimics, version 13.1; Materialise, Leuven, Belgium). This model was imported into the finite element solver (version 12.0; ANSYS, Canonsburg, Pa) to evaluate transverse expansion forces from rapid palatal expansion. Finite element analysis was performed with transverse expansion to achieve 5 mm of anterolateral expansion of the collapsed minor segment to simulate correction of the anterior crossbite in a patient with UCLP. High-stress concentrations were observed at the body of the sphenoid, medial to the orbit, and at the inferior area of the zygomatic process of the maxilla. The craniofacial stress distribution was asymmetric, with higher stress levels on the cleft side. When forces were applied more anteriorly on the collapsed minor segment and more posteriorly on the major segment, there was greater expansion of the anterior region of the minor segment with minimal expansion of the major segment. The transverse expansion forces from rapid palatal expansion are distributed to the 3 maxillary buttresses. Finite element analysis is an appropriate tool to study and predict the points of force application for better controlled expansion in patients with UCLP. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
PDB ligand conformational energies calculated quantum-mechanically.
Sitzmann, Markus; Weidlich, Iwona E; Filippov, Igor V; Liao, Chenzhong; Peach, Megan L; Ihlenfeldt, Wolf-Dietrich; Karki, Rajeshri G; Borodina, Yulia V; Cachau, Raul E; Nicklaus, Marc C
2012-03-26
We present here a greatly updated version of an earlier study on the conformational energies of protein-ligand complexes in the Protein Data Bank (PDB) [Nicklaus et al. Bioorg. Med. Chem. 1995, 3, 411-428], with the goal of improving on all possible aspects such as number and selection of ligand instances, energy calculations performed, and additional analyses conducted. Starting from about 357,000 ligand instances deposited in the 2008 version of the Ligand Expo database of the experimental 3D coordinates of all small-molecule instances in the PDB, we created a "high-quality" subset of ligand instances by various filtering steps including application of crystallographic quality criteria and structural unambiguousness. Submission of 640 Gaussian 03 jobs yielded a set of about 415 successfully concluded runs. We used a stepwise optimization of internal degrees of freedom at the DFT level of theory with the B3LYP/6-31G(d) basis set and a single-point energy calculation at B3LYP/6-311++G(3df,2p) after each round of (partial) optimization to separate energy changes due to bond length stretches vs bond angle changes vs torsion changes. Even for the most "conservative" choice of all the possible conformational energies-the energy difference between the conformation in which all internal degrees of freedom except torsions have been optimized and the fully optimized conformer-significant energy values were found. The range of 0 to ~25 kcal/mol was populated quite evenly and independently of the crystallographic resolution. A smaller number of "outliers" of yet higher energies were seen only at resolutions above 1.3 Å. The energies showed some correlation with molecular size and flexibility but not with crystallographic quality metrics such as the Cruickshank diffraction-component precision index (DPI) and R(free)-R, or with the ligand instance-specific metrics such as occupancy-weighted B-factor (OWAB), real-space R factor (RSR), and real-space correlation coefficient (RSCC). We repeated these calculations with the solvent model IEFPCM, which yielded energy differences that were generally somewhat lower than the corresponding vacuum results but did not produce a qualitatively different picture. Torsional sampling around the crystal conformation at the molecular mechanics level using the MMFF94s force field typically led to an increase in energy. © 2012 American Chemical Society
Panda, Rashmi; Puhan, N B; Panda, Ganapati
2018-02-01
Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
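A generic seeded random-walker segmentation with a feature-based edge weight can be sketched as follows. The composite feature here (intensity plus a local-variance texture proxy) is only a stand-in for the paper's mean-curvature and Gabor texture-energy features, and the weight function and parameters are assumptions rather than the authors' formulation.

```python
import numpy as np
from scipy import ndimage, sparse
from scipy.sparse.linalg import spsolve

def composite_features(img, win=7, alpha=1.0):
    """Intensity plus a local 'texture energy' proxy (local standard deviation)."""
    mean = ndimage.uniform_filter(img, win)
    var = np.maximum(ndimage.uniform_filter(img * img, win) - mean * mean, 0.0)
    f = np.stack([img, alpha * np.sqrt(var)], axis=-1)
    return (f - f.mean(axis=(0, 1))) / (f.std(axis=(0, 1)) + 1e-9)

def random_walker_binary(features, fg, bg, beta=90.0):
    """Seeded random walk on a 4-connected grid; fg, bg are boolean seed masks."""
    H, W, _ = features.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    # edge weights decay with the squared feature difference across each edge
    dv = ((features[1:, :] - features[:-1, :]) ** 2).sum(-1)
    dh = ((features[:, 1:] - features[:, :-1]) ** 2).sum(-1)
    rows = np.r_[idx[:-1, :].ravel(), idx[:, :-1].ravel()]
    cols = np.r_[idx[1:, :].ravel(), idx[:, 1:].ravel()]
    vals = np.r_[np.exp(-beta * dv).ravel(), np.exp(-beta * dh).ravel()] + 1e-10
    Wm = sparse.coo_matrix((np.r_[vals, vals], (np.r_[rows, cols], np.r_[cols, rows])),
                           shape=(n, n)).tocsr()
    L = sparse.diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm        # graph Laplacian
    seeded = (fg | bg).ravel()
    un, sd = np.flatnonzero(~seeded), np.flatnonzero(seeded)
    x_s = fg.ravel()[sd].astype(float)                               # 1 on fg seeds, 0 on bg seeds
    p_u = spsolve(L[un][:, un].tocsc(), -L[un][:, sd] @ x_s)         # Dirichlet problem
    prob = np.empty(n)
    prob[sd], prob[un] = x_s, p_u
    return (prob > 0.5).reshape(H, W)
```

As the abstract notes, this formulation needs no curve initialisation; the seeds and the edge weights alone determine the segmentation.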
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, with two containers on railcars for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, approaches the raised span of the NASA railroad bridge to continue over the Indian River north of Kennedy Space Center with two containers on railcars for storage at the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard near the center. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, continues along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
[Glossary of Terms for Thoracic Imaging--German Version of the Fleischner Society Recommendations].
Wormanns, D; Hamer, O W
2015-08-01
The Fleischner Society has published several recommendations for terms for thoracic imaging. The most recent glossary was released in 2008. One glossary in the German language was published in 1996. This review provides an updated German glossary of terms for thoracic imaging. It closely adheres to the Fleischner Society terminology. In some instances adaptations to the usage of the German language were necessary, as well as some additions of terms which were later defined or redefined. These deviations are summarized in a revision report. The Fleischner Society published a revised version of its glossary of terms for thoracic imaging in 2008. This paper presents a German adaptation of this glossary. Some terms not contained in the original version have been added. The general use of the presented terminology in radiological reports is recommended. © Georg Thieme Verlag KG Stuttgart · New York.
Reverberation negatively impacts musical sound quality for cochlear implant users.
Roy, Alexis T; Vigeant, Michelle; Munjal, Tina; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J
2015-09-01
Satisfactory musical sound quality remains a challenge for many cochlear implant (CI) users. In particular, questionnaires completed by CI users suggest that reverberation due to room acoustics can negatively impact their music listening experience. The objective of this study was to more specifically characterize the effect of reverberation on musical sound quality in CI users, normal hearing (NH) non-musicians, and NH musicians using a previously designed assessment method, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this method, listeners were randomly presented with an anechoic musical segment and five versions of this segment in which increasing amounts of reverberation were artificially added. Participants listened to the six reverberation versions and provided sound quality ratings between 0 (very poor) and 100 (excellent). Results demonstrated that on average CI users and NH non-musicians preferred the sound quality of anechoic versions to more reverberant versions. In comparison, NH musicians could be delineated into those who preferred the sound quality of anechoic pieces and those who preferred pieces with some reverberation. This is the first study, to our knowledge, to objectively compare the effects of reverberation on musical sound quality ratings in CI users. These results suggest that musical sound quality for CI users can be improved by non-reverberant listening conditions and musical stimuli in which reverberation is removed.
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.
Labra, Nicole; Guevara, Pamela; Duclap, Delphine; Houenou, Josselin; Poupon, Cyril; Mangin, Jean-François; Figueroa, Miguel
2017-01-01
This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets.
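The two-stage labelling strategy can be sketched as follows. Assumptions: streamlines are N x 3 point arrays, a symmetric mean-closest-point distance stands in for the paper's metric, and the cheap first pass here is simply a coarser resampling; unlike the paper's simplified metric, it does not by itself guarantee the absence of false negatives.

```python
import numpy as np

def resample(s, n=21):
    """Resample a streamline (M x 3 array) to n points equally spaced along its arc length."""
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(s, axis=0), axis=1))]
    t = np.linspace(0.0, d[-1], n)
    return np.column_stack([np.interp(t, d, s[:, k]) for k in range(3)])

def mcp_distance(a, b):
    """Symmetric mean-closest-point distance between two resampled streamlines."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def segment_bundle(streamlines, centroid, threshold, coarse_pts=5):
    """Cheap surrogate distance first discards clearly distant candidates,
    then the exact metric is applied only to the survivors."""
    centroid_fine = resample(centroid)
    centroid_coarse = resample(centroid, coarse_pts)
    keep = []
    for i, s in enumerate(streamlines):
        if mcp_distance(resample(s, coarse_pts), centroid_coarse) > 2.0 * threshold:
            continue                       # illustrative slack factor; not a proven bound
        if mcp_distance(resample(s), centroid_fine) <= threshold:
            keep.append(i)
    return keep
```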
The functional unit of Japanese word naming: evidence from masked priming.
Verdonschot, Rinus G; Kiyama, Sachiko; Tamaoka, Katsuo; Kinoshita, Sachiko; Heij, Wido La; Schiller, Niels O
2011-11-01
Theories of language production generally describe the segment as the basic unit in phonological encoding (e.g., Dell, 1988; Levelt, Roelofs, & Meyer, 1999). However, there is also evidence that such a unit might be language specific. Chen, Chen, and Dell (2002), for instance, found no effect of single segments when using a preparation paradigm. To shed more light on the functional unit of phonological encoding in Japanese, a language often described as being mora based, we report the results of 4 experiments using word reading tasks and masked priming. Experiment 1 demonstrated using Japanese kana script that primes, which overlapped in the whole mora with target words, sped up word reading latencies but not when just the onset overlapped. Experiments 2 and 3 investigated a possible role of script by using combinations of romaji (Romanized Japanese) and hiragana; again, facilitation effects were found only when the whole mora and not the onset segment overlapped. Experiment 4 distinguished mora priming from syllable priming and revealed that the mora priming effects obtained in the first 3 experiments are also obtained when a mora is part of a syllable. Again, no priming effect was found for single segments. Our findings suggest that the mora and not the segment (phoneme) is the basic functional phonological unit in Japanese language production planning.
Clustering approach for unsupervised segmentation of malarial Plasmodium vivax parasite
NASA Astrophysics Data System (ADS)
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida
2017-10-01
Malaria is a global health problem, particularly in Africa and South Asia, where it causes countless deaths and morbidity cases. Efficient control and prompt treatment of this disease require early detection and accurate diagnosis due to the large number of cases reported yearly. To achieve this aim, this paper proposes an image segmentation approach via unsupervised pixel segmentation of the malaria parasite to automate the diagnosis of malaria. In this study, a modified clustering algorithm, namely enhanced k-means (EKM) clustering, is proposed for malaria image segmentation. In the proposed EKM clustering, the concept of variance and a new version of the transferring process for clustered members are used to assist the assignment of data to the proper centre during the process of clustering, so that a good segmented malaria image can be generated. The effectiveness of the proposed EKM clustering has been analyzed qualitatively and quantitatively by comparing this algorithm with two popular image segmentation techniques, namely Otsu's thresholding and k-means clustering. The experimental results show that the proposed EKM clustering has successfully segmented 100 malaria images of the P. vivax species with segmentation accuracy, sensitivity and specificity of 99.20%, 87.53% and 99.58%, respectively. Hence, the proposed EKM clustering can be considered as an image segmentation tool for segmenting malaria images.
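As a baseline against which EKM is compared, plain k-means pixel clustering of a blood-smear image looks like the sketch below (scikit-learn); the variance concept and the modified member-transfer step that define EKM are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_pixel_segmentation(rgb_image, k=3, seed=0):
    """Unsupervised pixel clustering: each pixel is assigned to one of k colour clusters
    (e.g. parasite, red blood cells, background), returned as a label map."""
    h, w, c = rgb_image.shape
    pixels = rgb_image.reshape(-1, c).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)
```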
Hong, Ki Pyo
2015-01-01
Background: The aim of this study was to evaluate the midterm clinical outcomes after modified high ligation and segmental stripping of small saphenous vein (SSV) varicosities. Methods: Between January 2010 and March 2013, 62 patients (69 legs) with isolated primary small saphenous varicose veins were enrolled in this study. The outcomes measured were reflux in the remaining distal SSV, the recurrence of varicose veins, the improvement of preoperative symptoms, and the rate of postoperative complications. Results: No major complications occurred. No instances of the recurrence of varicose veins at previous stripping sites were noted. Three legs (4.3%) showed reflux in the remaining distal small saphenous veins. The preoperative symptoms were found to have improved in 96.4% of the cases. Conclusion: In the absence of flush ligation of the saphenopopliteal junction, modified high ligation and segmental stripping of small saphenous vein varicosities with preoperative duplex marking is an effective treatment method for reducing postoperative complications and the recurrence of SSV incompetence. PMID:26665106
Generation of Fullspan Leading-Edge 3D Ice Shapes for Swept-Wing Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Camello, Stephanie C.; Lee, Sam; Lum, Christopher; Bragg, Michael B.
2016-01-01
The deleterious effect of ice accretion on aircraft is often assessed through dry-air flight and wind tunnel testing with artificial ice shapes. This paper describes a method to create fullspan swept-wing artificial ice shapes from partial span ice segments acquired in the NASA Glenn Icing Research Tunnel for aerodynamic wind-tunnel testing. Full-scale ice accretion segments were laser scanned from the Inboard, Midspan, and Outboard wing station models of the 65% scale Common Research Model (CRM65) aircraft configuration. These were interpolated and extrapolated using a weighted averaging method to generate fullspan ice shapes from the root to the tip of the CRM65 wing. The results showed that this interpolation method was able to preserve many of the highly three dimensional features typically found on swept-wing ice accretions. The interpolated fullspan ice shapes were then scaled to fit the leading edge of an 8.9% scale version of the CRM65 wing for aerodynamic wind-tunnel testing. Reduced fidelity versions of the fullspan ice shapes were also created where most of the local three-dimensional features were removed. The fullspan artificial ice shapes and the reduced fidelity versions were manufactured using stereolithography.
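The spanwise interpolation idea can be illustrated with simple linear blending between measured stations; the paper's weighted-averaging scheme, which is tuned to preserve three-dimensional ice features, and the subsequent scaling to the 8.9% model are not reproduced, and the assumption that all station profiles share a common parameterization is introduced for this sketch only.

```python
import numpy as np

def blend_cross_sections(stations, profiles, span_positions):
    """stations: sorted 1-D array of span locations with measured ice cross-sections.
    profiles: list of (N x 2) arrays, one per station, sampled at matching parameter values.
    span_positions: span locations where blended cross-sections are wanted."""
    profiles = np.asarray(profiles)                     # (n_stations, N, 2)
    out = []
    for y in span_positions:
        y = np.clip(y, stations[0], stations[-1])
        j = min(np.searchsorted(stations, y, side="right") - 1, len(stations) - 2)
        t = (y - stations[j]) / (stations[j + 1] - stations[j])
        out.append((1.0 - t) * profiles[j] + t * profiles[j + 1])  # linear spanwise blend
    return out
```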
Identification of everyday objects on the basis of Gaborized outline versions
Sassi, Michaël; Vancleef, Kathleen; Machilsen, Bart; Panis, Sven; Wagemans, Johan
2010-01-01
Using outlines derived from a widely used set of line drawings, we created stimuli geared towards the investigation of contour integration and texture segmentation using shapes of everyday objects. Each stimulus consisted of Gabor elements positioned and oriented curvilinearly along the outline of an object, embedded within a larger Gabor array of homogeneous density. We created six versions of the resulting Gaborized outline stimuli by varying the orientations of elements inside and outside the outline. Data from two experiments, in which participants attempted to identify the objects in the stimuli, provide norms for identifiability and name agreement, and show differences in identifiability between stimulus versions. While there was substantial variability between the individual objects in our stimulus set, further analyses suggest a number of stimulus properties which are generally predictive of identification performance. The stimuli and the accompanying normative data, both available on our website (http://www.gestaltrevision.be/sources/gaboroutlines), provide a useful tool to further investigate contour integration and texture segmentation in both normal and clinical populations, especially when top-down influences on these processes, such as the role of prior knowledge of familiar objects, are of main interest. PMID:23145218
NASA Astrophysics Data System (ADS)
Mayernik, M. S.; Daniels, M. D.; Maull, K. E.; Khan, H.; Krafft, D. B.; Gross, M. B.; Rowan, L. R.
2016-12-01
Geosciences research is often conducted using distributed networks of researchers and resources. To better enable the discovery of the research output from the scientists and resources used within these organizations, UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab (http://earthcube.org/group/earthcollab) project which seeks to leverage semantic technologies to manage and link scientific data. As part of this effort, we have been exploring how to leverage information distributed across multiple research organizations. EarthCollab is using the VIVO semantic software suite to lookup and display Semantic Web information across our project partners.Our presentation will include a demonstration of linking between VIVO instances, discussing how to create linkages between entities in different VIVO instances where both entities describe the same person or resource. This discussion will explore how we designate the equivalence of these entities using "same as" assertions between identifiers representing these entities including URIs and ORCID IDs and how we have extended the base VIVO architecture to support the lookup of which entities in separate VIVO instances may be equivalent and to then display information from external linked entities. We will also discuss how these extensions can support other linked data lookups and sources of information.This VIVO cross-linking mechanism helps bring information from multiple VIVO instances together and helps users in navigating information spread-out between multiple VIVO instances. Challenges and open questions for this approach relate to how to display the information obtained from an external VIVO instance, both in order to preserve the brands of the internal and external systems and to handle discrepancies between ontologies, content, and/or VIVO versions.
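A minimal sketch of the cross-instance "same as" assertion, written with rdflib; the instance hostnames and the ORCID iD below are placeholders for illustration, not EarthCollab identifiers.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

# Hypothetical identifiers: two VIVO instances describing the same researcher,
# plus a shared ORCID iD used as an external, instance-independent identifier.
local_person  = URIRef("https://vivo.site-a.example.org/individual/per0001")
remote_person = URIRef("https://vivo.site-b.example.org/individual/n12345")
orcid_id      = URIRef("https://orcid.org/0000-0000-0000-0000")

g = Graph()
g.add((local_person, OWL.sameAs, remote_person))   # cross-instance equivalence assertion
g.add((local_person, OWL.sameAs, orcid_id))        # anchor both records to the shared ORCID iD

print(g.serialize(format="turtle"))
```

With such triples in place, one instance can look up and display linked information from the other, subject to the ontology- and version-alignment questions raised above.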
MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers
NASA Astrophysics Data System (ADS)
Neumann, Philipp; Bian, Xin
2017-11-01
We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST. Program Files doi:http://dx.doi.org/10.17632/w7rgdrhb85.1 Licensing provisions: BSD 3-clause Programming language: C, C++ External routines/libraries: For compiling: SCons, MPI (optional) Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl. Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016 Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version. Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton. Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available). Reasons for the new version: We have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows to run molecular-continuum Couette flow simulations out-of-the-box. No external tools or simulation codes are required anymore. However, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests, Remark on MaMiCo V1.1. Summary of revisions: Open boundary forcing; Multi-instance MD sampling; support for transient molecular-continuum systems Restrictions: Currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de. Additional comments: Please see file license_mamico.txt for further details regarding distribution and advertising of this software.
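The multi-instance sampling idea, averaging a flow quantity over an ensemble of MD instances to obtain smooth values on short time scales, reduces conceptually to something like the following sketch (plain NumPy, not MaMiCo's C++ implementation).

```python
import numpy as np

def ensemble_average(instance_fields):
    """Average a sampled flow quantity (e.g. per-cell velocity) over an ensemble of
    independently evolving MD instances; the spread estimates the remaining thermal noise."""
    stacked = np.stack(instance_fields)                       # (n_instances, ...cells...)
    mean = stacked.mean(axis=0)
    stderr = stacked.std(axis=0, ddof=1) / np.sqrt(stacked.shape[0])
    return mean, stderr
```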
SRB Processing Facilities Media Event
2016-03-01
Members of the news media view the high bay inside the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida. Kerry Chreist, with Jacobs Engineering on the Test and Operations Support Contract, talks with a reporter about the booster segments for NASA’s Space Launch System (SLS) rocket. In the far corner, in the vertical position, is one of two pathfinders, or test versions, of solid rocket booster segments for the SLS rocket. The Ground Systems Development and Operations Program and Jacobs are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
SRB Processing Facilities Media Event
2016-03-01
Members of the news media watch as two cranes are used to lift one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System (SLS) rocket into the vertical position inside the Rotation, Processing and Surge Facility at NASA’s Kennedy Space Center in Florida. The pathfinder booster segment will be moved to the other end of the RPSF and secured on a test stand. The Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will prepare the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
EOS MLS Level 2 Data Processing Software Version 3
NASA Technical Reports Server (NTRS)
Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.
2011-01-01
This software accepts the EOS MLS calibrated microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
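As a rough illustration of the master/worker layout described above (the actual Level 2 software is driven by its Configuration File and runs across a cluster, not on Python's multiprocessing), the sketch below farms independent data chunks out to worker processes and collects per-chunk results; process_chunk and the toy radiance lists are placeholders.

from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for the per-chunk retrieval: here just a summary statistic."""
    chunk_id, radiances = chunk
    retrieved = sum(radiances) / len(radiances)     # placeholder "retrieval"
    return chunk_id, retrieved

if __name__ == "__main__":
    # The master splits the orbit into chunks and farms them out to workers.
    chunks = [(i, [float(i + j) for j in range(5)]) for i in range(8)]
    with Pool(processes=4) as pool:
        results = dict(pool.map(process_chunk, chunks))
    print(results)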
Recurrent network dynamics reconciles visual motion segmentation and integration.
Medathati, N V Kartheek; Rankin, James; Meso, Andrew I; Kornprobst, Pierre; Masson, Guillaume S
2017-09-12
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation, or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime, the network can switch from motion integration to segmentation, computing either a single pattern motion or superposing multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
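A generic rate model on a ring, not the authors' fitted model, can illustrate the role of the inhibition regime: with weak global inhibition two input-driven bumps coexist (superposition/transparency-like), while strong inhibition lets a slightly stronger input dominate (winner-take-all-like). All parameters below are illustrative assumptions.

import numpy as np

def ring_sim(g_inh, n=64, steps=400, dt=0.05, tau=1.0, seed=1):
    """Rate model on a ring: local excitation plus tunable global inhibition."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # local excitatory kernel (wrapped Gaussian on the ring), row-normalized
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    w_exc = 0.9 * np.exp(-d**2 / (2 * 0.3**2))
    w_exc /= w_exc.sum(axis=1, keepdims=True)
    # two motion-direction inputs at +/-60 degrees, one marginally stronger
    stim = 1.0 * np.exp(-np.angle(np.exp(1j * (theta - np.pi / 3)))**2 / 0.2) \
         + 0.95 * np.exp(-np.angle(np.exp(1j * (theta + np.pi / 3)))**2 / 0.2)
    r = rng.uniform(0, 0.01, n)
    for _ in range(steps):
        drive = stim + w_exc @ r - g_inh * r.mean()   # inhibition acts globally
        r += dt / tau * (-r + np.maximum(drive, 0.0))
    return theta, r

for g in (0.5, 8.0):
    theta, r = ring_sim(g)
    peaks = theta[r > 0.8 * r.max()]
    print(f"g_inh={g}: active directions (deg) ~ {np.unique(np.round(np.degrees(peaks)))}")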
Perceiving non-native speech: Word segmentation
NASA Astrophysics Data System (ADS)
Mondini, Michèle; Miller, Joanne L.
2004-05-01
One important source of information listeners use to segment speech into discrete words is allophonic variation at word junctures. Previous research has shown that non-native speakers impose their native-language phonetic norms on their second language; as a consequence, non-native speech may (in some cases) exhibit altered patterns of allophonic variation at word junctures. We investigated the perceptual consequences of this for word segmentation by presenting native-English listeners with English word pairs produced either by six native-English speakers or six highly fluent, native-French speakers of English. The target word pairs had contrastive word juncture involving voiceless stop consonants (e.g., why pink/wipe ink; gray ties/great eyes; we cash/weak ash). The task was to identify randomized instances of each individual target word pair (as well as control pairs) by selecting one of four possible choices (e.g., why pink, wipe ink, why ink, wipe pink). Overall, listeners were more accurate in identifying target word pairs produced by the native-English speakers than by the non-native English speakers. These findings suggest that one contribution to the processing cost associated with listening to non-native speech may be the presence of altered allophonic information important for word segmentation. [Work supported by NIH/NIDCD.]
Overcoming the Effects of Variation in Infant Speech Segmentation: Influences of Word Familiarity
Singh, Leher; Nestor, Sarah S.; Bortfeld, Heather
2010-01-01
Previous studies have shown that 7.5-month-olds can track and encode words in fluent speech, but they fail to equate instances of a word that contrast in talker gender, vocal affect, and fundamental frequency. By 10.5 months, they succeed at generalizing across such variability, marking a clear transition period during which infants’ word recognition skills become qualitatively more mature. Here we explore the role of word familiarity in this critical transition and, in particular, whether words that occur frequently in a child’s listening environment (i.e., “Mommy” and “Daddy”) are more easily recognized when they differ in surface characteristics than those that infants have not previously encountered (termed nonwords). Results demonstrate that words are segmented from continuous speech in a more linguistically mature fashion than nonwords at 7.5 months, but at 10.5 months, both words and nonwords are segmented in a relatively mature fashion. These findings suggest that early word recognition is facilitated in cases where infants have had significant exposure to items, but at later stages, infants are able to segment items regardless of their presumed familiarity. PMID:21088702
NASA Astrophysics Data System (ADS)
Perfit, M. R.; Walters, R. L.
2014-12-01
High spatial density geochemical data sets from the N-EPR and S-JdFR are used to re-evaluate the across-axis geochemical variations in major and trace elements at mid-ocean ridges (MORs). At two axial melt lens (AML) segments, north and south, at the 9-10°N EPR, N-MORB MgO varies across-axis from the most primitive above the AML to more evolved away from the axis. This trend is distinct at the northern (magmatically more robust) segment with an axial MgO range of 8-9 wt% and off-axis (>2km) range of 6.5-8 wt%. This decrease is also reflected in E-MORB MgO variation. There is more variability at the southern segment, but off-axis progression to more evolved MgO is still evident. Interestingly, the Cleft segment, JdFR, displays similar geochemical behavior to the EPR with an axial MgO range of 7-8.5 wt% and off-axis (>2km) range of 6-7.5 wt%. EPR geochemical studies over the past 30 years have described models of upper crustal accumulation ranging from eruptions limited to the axis, to temporal variation in the composition of magma in the AML, to multiple eruption sites across the ridge crest and flanks (<5km). Eruptions limited to the axis, with topographically controlled flow off-axis, cannot reproduce the observed off-axis change to more evolved N-MORB. Time-dependence could explain one instance of evolved lavas off-axis, but similar geochemical behavior is observed at two separate AML segments. Multiple instances of consistent compositional variability at multiple AML segments, and at different ridges, point to a common process of crustal accretion at MORs. In light of recent geophysical discoveries of off-axis AMLs (OAMLs) at the EPR and JdFR, we propose that the trend toward more evolved compositions for the majority of N-MORB lavas with distance from the axis is controlled by thermal distribution in the underlying crystal mush zone (CMZ). Higher magma flux beneath the axis facilitates higher temperatures and high porosity melt pathways, reducing crustal residence times, and erupting more primitive lava compositions. OAMLs at the edges of the CMZ, where it is cooler, feed more evolved off-axis eruptions. Lower magma flux at the edges increases crustal residence time and the extent to which magmas crystallize. OAMLs outside of the CMZ host magmas that may have escaped any central mixing and erupt a greater range of compositions.
GPU-based relative fuzzy connectedness image segmentation.
Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W
2013-01-01
Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, respectively, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
GPU-based relative fuzzy connectedness image segmentation
Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.
2013-01-01
Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094
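The core of relative fuzzy connectedness can be sketched serially (this is not the P-ORFC CUDA implementation): the connectedness of a pixel to a seed is the strength of the best path whose weakest affinity link is as strong as possible, computed with a Dijkstra-like max-min propagation, and RFC labels each pixel by the seed with the greater connectedness. The intensity-based affinity function below is a deliberately simple stand-in.

import heapq
import numpy as np

def affinity(a, b):
    """Simple intensity-based affinity in [0, 1] between neighbouring pixels."""
    return 1.0 / (1.0 + abs(float(a) - float(b)))

def fuzzy_connectedness(img, seed):
    """Max-min connectedness from one seed (Dijkstra-like propagation)."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_c, (y, x) = heapq.heappop(heap)
        c = -neg_c
        if c < conn[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cand = min(c, affinity(img[y, x], img[ny, nx]))
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

# Two homogeneous regions with a noisy boundary between them.
rng = np.random.default_rng(0)
img = np.hstack([np.full((6, 4), 10.0), np.full((6, 4), 50.0)])
img += rng.normal(scale=1.0, size=img.shape)
conn_obj = fuzzy_connectedness(img, (3, 1))   # seed in the left region
conn_bkg = fuzzy_connectedness(img, (3, 6))   # seed in the right region
rfc_object = conn_obj > conn_bkg              # relative FC labelling
print(rfc_object.astype(int))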
Grunert, Klaus G; Perrea, Toula; Zhou, Yanfeng; Huang, Guang; Sørensen, Bjarne T; Krystallis, Athanasios
2011-04-01
Research related to food-related behaviour in China is still scarce, one reason being the fact that food consumption patterns in East Asia do not appear to be easily analyzed by models originating in Western cultures. The objective of the present work is to examine the ability of the food related lifestyle (FRL) instrument to reveal food consumption patterns in a Chinese context. Data were collected from 479 respondents in 6 major Chinese cities using a Chinese version of the FRL instrument. Analysis of reliability and dimensionality of the scales resulted in a revised version of the instrument, in which a number of dimensions of the original instrument had to be omitted. This revised instrument was tested for statistical robustness and used as a basis for the derivation of consumer segments. Construct validity of the instrument was then investigated by profiling the segments in terms of consumer values, attitudes and purchase behaviour, using frequency of consumption of pork products as an example. Three consumer segments were identified: concerned, uninvolved and traditional. This pattern replicates partly those identified in Western cultures. Moreover, all three segments showed consistent value-attitude-behaviour profiles. The results also suggest which dimensions may be missing in the instrument in a more comprehensive instrument adapted to Chinese conditions, most notably a broader treatment of eating out activities. Copyright © 2010 Elsevier Ltd. All rights reserved.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
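A compact sketch of the segmentation machinery described above, under the assumption that the measure function is the intersection of the item sets in a segment and that the segment difference is the summed symmetric difference between each time point and the segment's item set; the dynamic program then finds the k-segment split with minimal total difference. The paper's actual measure functions and complexity improvements are not reproduced here.

def segment_set(points, i, j):
    """Measure function: items common to all time points in points[i:j]."""
    s = set(points[i])
    for t in range(i + 1, j):
        s &= points[t]
    return s

def segment_difference(points, i, j):
    """Sum of symmetric-difference sizes between each point and the segment set."""
    seg = segment_set(points, i, j)
    return sum(len(points[t] ^ seg) for t in range(i, j))

def optimal_segmentation(points, k):
    """Dynamic program: split into k contiguous segments of minimal total difference."""
    n = len(points)
    diff = [[segment_difference(points, i, j) for j in range(n + 1)] for i in range(n)]
    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, min(k, j) + 1):
            for i in range(m - 1, j):
                c = cost[i][m - 1] + diff[i][j]
                if c < cost[j][m]:
                    cost[j][m], back[j][m] = c, i
    cuts, j, m = [], n, k
    while m > 0:
        i = back[j][m]
        cuts.append((i, j))
        j, m = i, m - 1
    return cost[n][k], list(reversed(cuts))

points = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"x"}, {"x", "y"}, {"x", "y"}]
print(optimal_segmentation(points, 2))   # splits between the "ab" and "xy" regimes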
Primary versus secondary achalasia: New signs on barium esophagogram
Gupta, Pankaj; Debi, Uma; Sinha, Saroj Kant; Prasad, Kaushal Kishor
2015-01-01
Aim: To investigate new signs on barium swallow that can differentiate primary from secondary achalasia. Materials and Methods: Records of 30 patients with primary achalasia and 17 patients with secondary achalasia were reviewed. Clinical, endoscopic, and manometric data was recorded. Barium esophagograms were evaluated for peristalsis and morphology of distal esophageal segment (length, symmetry, nodularity, shouldering, filling defects, and “tram-track sign”). Results: Mean age at presentation was 39 years in primary achalasia and 49 years in secondary achalasia. The mean duration of symptoms was 3.5 years in primary achalasia and 3 months in secondary achalasia. False-negative endoscopic results were noted in the first instance in five patients. In the secondary achalasia group, five patients had distal esophageal segment morphology indistinguishable from that of primary achalasia. None of the patients with primary achalasia and 35% patients with secondary achalasia had a length of the distal segment approaching combined height of two vertebral bodies. None of the patients with secondary achalasia and 34% patients with primary achalasia had maximum caliber of esophagus approaching combined height of two vertebral bodies. Tertiary contractions were noted in 90% patients with primary achalasia and 24% patients with secondary achalasia. Tram-track sign was found in 55% patients with primary achalasia. Filling defects in the distal esophageal segment were noted in 94% patients with secondary achalasia. Conclusion: Length of distal esophageal segment, tertiary contractions, tram-track sign, and filling defects in distal esophageal segment are useful esophagographic features distinguishing primary from secondary achalasia. PMID:26288525
Aitken, Douglas S.
1997-01-01
This Open-File report is a digital topographic map database. It contains a digital version of the 1970 U.S. Geological Survey topographic map of the San Francisco Bay Region (3 sheets), at a scale of 1:125,000. These ARC/INFO coverages are in vector format. The vectorization process has distorted characters representing letters and numbers, as well as some road and other symbols, making them difficult to read in some instances. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The content and character of the database and methods of obtaining it are described herein.
Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).
Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad
2018-04-01
A tumor could be found in any area of the brain and could be of any size, shape, and contrast. There may exist multiple tumors of different types in a human brain at the same time. Accurate tumor area segmentation is considered a primary step for the treatment of brain tumors. Deep learning is a set of promising techniques that could provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on BRATS 2015 benchmark data thus show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
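A minimal fully-convolutional sketch (not the architecture of the cited paper) showing how the four MRI modalities enter as input channels and per-voxel class scores come out; the layer sizes, class count, and synthetic slice are assumptions for illustration.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal sketch: 4 MRI modalities in, per-pixel class scores out."""
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# One synthetic "slice": batch of 2, four modalities, 64x64 pixels.
x = torch.randn(2, 4, 64, 64)
y = torch.randint(0, 5, (2, 64, 64))           # fake voxel-wise labels
model = TinySegNet()
logits = model(x)                               # (2, 5, 64, 64)
loss = nn.CrossEntropyLoss()(logits, y)
loss.backward()
print(logits.shape, float(loss))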
Kuzmina, Margarita; Manykin, Eduard; Surina, Irina
2004-01-01
An oscillatory network with columnar architecture, located on a 3D spatial lattice, was recently designed by the authors as an oscillatory model of the brain's visual cortex. Each network oscillator is a relaxational neural oscillator whose internal dynamics are tuned by visual image characteristics - local brightness and elementary bar orientation. It can demonstrate either an active state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections between oscillators depend on oscillator activity levels and on the orientations of cortical receptive fields. Network performance consists of a transition into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limit version of the source model. With the addition of network coupling-strength control, the reduced 2D network provides synchronization-based image segmentation. New results on the segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, inherent in the model.
Aalaei, Shima; Rajabi Naraki, Zahra; Nematollahi, Fatemeh; Beyabanaki, Elaheh; Shahrokhi Rad, Afsaneh
2017-01-01
Background. Screw-retained restorations are favored in some clinical situations such as limited inter-occlusal spaces. This study was designed to compare stresses developed in the peri-implant bone in two different types of screw-retained restorations (segmented vs. non-segmented abutment) using a finite element model. Methods. An implant, 4.1 mm in diameter and 10 mm in length, was placed in the first molar site of a mandibular model with 1 mm of cortical bone on the buccal and lingual sides. Segmented and non-segmented screw abutments with their crowns were placed on the simulated implant in each model. After loading (100 N, axial and 45° non-axial), von Mises stress was recorded using ANSYS software, version 12.0.1. Results. The maximum stresses in the non-segmented abutment screw were less than those of segmented abutment (87 vs. 100, and 375 vs. 430 MPa under axial and non-axial loading, respectively). The maximum stresses in the peri-implant bone for the model with segmented abutment were less than those of non-segmented ones (21 vs. 24 MPa, and 31 vs. 126 MPa under vertical and angular loading, respectively). In addition, the micro-strain of peri-implant bone for the segmented abutment restoration was less than that of non-segmented abutment. Conclusion. Under axial and non-axial loadings, non-segmented abutment showed less stress concentration in the screw, while there was less stress and strain in the peri-implant bone in the segmented abutment. PMID:29184629
Food Recognition: A New Dataset, Experiments, and Results.
Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo
2017-05-01
We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies, also using several visual descriptors. We achieve about 79% food and tray recognition accuracy using convolutional-neural-networks-based features. Both the dataset and the benchmark framework are available to the research community.
2012-03-08
members past and present, including Dr. David Krantz, Dr. Mark Ettenhofer, Dr. Cara Olsen and Dr. Neil Grunberg. Additionally, I want to express my...version of the HPQ, referred to as "the absenteeism and presenteeism questions of the Health and Work Performance Questionnaire," (Kessler, et al...were carefully chosen for their reliability in other studies. For instance, the absenteeism and presenteeism questions of the Health and Work
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words; many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and there can be multiple instances of sub-word overlap. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlap between words and sub-words. We shall also investigate two approaches to pre-processing: estimating sub-word baselines, and determining parameters that yield appropriate slope correction and slant removal. We shall investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure to be performed on sub-words that determines the best rotation angle needed to realign baselines. We shall demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that could benefit from analysis of printed text.
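The regression-based baseline estimate can be sketched as follows, assuming a binarised sub-word image with nonzero ink pixels; fitting y on x with least squares gives the skew angle to undo. A real system would weight the high-density part of the sub-word and exclude diacritics before fitting.

import numpy as np

def estimate_baseline(binary):
    """Fit a straight baseline through the ink pixels of one sub-word.

    binary: 2-D array, nonzero = ink.  Returns (slope, intercept, angle_deg).
    Sketch of the regression idea only.
    """
    ys, xs = np.nonzero(binary)
    slope, intercept = np.polyfit(xs, ys, deg=1)
    return slope, intercept, np.degrees(np.arctan(slope))

# Synthetic slanted "stroke": a line of ink pixels with roughly 11 degrees of skew.
img = np.zeros((40, 100), dtype=np.uint8)
for x in range(10, 90):
    img[int(20 + 0.2 * (x - 10)), x] = 1
slope, intercept, angle = estimate_baseline(img)
print(f"estimated skew ~ {angle:.1f} degrees")   # rotate by -angle to realign the baseline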
SRB Processing Facilities Media Event
2016-03-01
At the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, members of the news media photograph the process as cranes are used to lift one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket. The Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
SRB Processing Facilities Media Event
2016-03-01
At the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, members of the news media watch as cranes are used to lift one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket. The Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Modified SSCP method using sequential electrophoresis of multiple nucleic acid segments
Gatti, Richard A.
2002-10-01
The present invention is directed to a method of screening large, complex, polyexonic eukaryotic genes such as the ATM gene for mutations and polymorphisms by an improved version of single strand conformation polymorphism (SSCP) electrophoresis that allows electrophoresis of two or three amplified segments in a single lane. The present invention also is directed to new mutations and polymorphisms in the ATM gene that are useful in performing more accurate screening of human DNA samples for mutations and in distinguishing mutations from polymorphisms, thereby improving the efficiency of automated screening methods.
SRB Processing Facilities Media Event
2016-03-01
Members of the news media view the high bay inside the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida. Kerry Chreist, with Jacobs Engineering on the Test and Operations Support Contract, explains the various test stands and how they will be used to prepare booster segments for NASA’s Space Launch System (SLS) rocket. In the far corner, in the vertical position, is one of two pathfinders, or test versions, of solid rocket booster segments for the SLS rocket. The Ground Systems Development and Operations Program and Jacobs are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Mathematical morphology for automated analysis of remotely sensed objects in radar images
NASA Technical Reports Server (NTRS)
Daida, Jason M.; Vesecky, John F.
1991-01-01
A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis has resulted in a low (2.6 percent) misclassification error rate for a one-look simulation. Other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion has resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images: in this case, a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation seem well suited for unsupervised geophysical analysis.
Clinical validation of a non-heteronormative version of the Social Interaction Anxiety Scale (SIAS).
Lindner, Philip; Martell, Christopher; Bergström, Jan; Andersson, Gerhard; Carlbring, Per
2013-12-19
Despite welcomed changes in societal attitudes and practices towards sexual minorities, instances of heteronormativity can still be found within healthcare and research. The Social Interaction Anxiety Scale (SIAS) is a valid and reliable self-rating scale of social anxiety, which includes one item (number 14) with an explicit heteronormative assumption about the respondent's sexual orientation. This heteronormative phrasing may confuse, insult or alienate sexual minority respondents. A clinically validated version of the SIAS featuring a non-heteronormative phrasing of item 14 is thus needed. 129 participants with diagnosed social anxiety disorder, enrolled in an Internet-based intervention trial, were randomly assigned to responding to the SIAS featuring either the original or a novel non-heteronormative phrasing of item 14, and then answered the other item version. Within-subject, correlation between item versions was calculated and the two scores were statistically compared. The two items' correlations with the other SIAS items and other psychiatric rating scales were also statistically compared. Item versions were highly correlated and scores did not differ statistically. The two items' correlations with other measures did not differ statistically either. The SIAS can be revised with a non-heteronormative formulation of item 14 with psychometric equivalence on item and scale level. Implications for other psychiatric instruments with heteronormative phrasings are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oppel, Fred, III; Rigdon, J. Brian
2014-09-08
A collection of general Umbra modules that are reused by other Umbra libraries. These capabilities include line segments, file utilities, color utilities, string utilities (for std::string), list utilities (for std::vector), bounding box intersections, range limiters, simple filters, cubic root solvers, and a few other utilities.
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
NASA Astrophysics Data System (ADS)
Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen
2011-03-01
RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for decrypting the mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens, since the number of image data sets can often be in the hundreds of thousands. Reliable automated tools are thus required to analyse the fluorescence microscopy image data sets, which usually contain two or more reaction channels. The image analysis tool presented here is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issues of the image processing task are an automatic cell segmentation, which has to be robust and accurate for all phenotypes, and a subsequent phenotype classification. The cell segmentation is done in two steps: the cell nuclei are segmented first, and a classifier-enhanced region growing based on the cell nuclei then segments the cells. The classification of the cells is realized by a support vector machine that is trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing for varying staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been applied successfully to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.
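A toy end-to-end sketch of the two-step pipeline, with plain dilation standing in for the classifier-enhanced region growing and hand-picked training values standing in for the manually labelled cells; the channel images, thresholds, and feature set are synthetic assumptions.

import numpy as np
from scipy import ndimage as ndi
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic nucleus channel: two bright blobs on a dark background.
nuclei_ch = rng.normal(0.1, 0.02, (64, 64))
for cy, cx in ((20, 20), (44, 40)):
    yy, xx = np.ogrid[:64, :64]
    nuclei_ch[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] += 0.8

# Step 1: segment nuclei by a global threshold and connected-component labelling.
labels, n_cells = ndi.label(nuclei_ch > 0.5)

# Step 2: grow a crude "cell" region around each nucleus (dilation stands in
# for the classifier-enhanced region growing used by the actual tool).
cell_mask = ndi.binary_dilation(labels > 0, iterations=5)
cells, _ = ndi.label(cell_mask)

# Step 3: per-cell features from a second (marker) channel, then SVM phenotyping.
marker_ch = rng.normal(0.2, 0.05, (64, 64))
marker_ch[cells == 1] += 0.5                        # pretend cell 1 shows the phenotype
feats = np.array([[marker_ch[cells == i].mean()] for i in range(1, cells.max() + 1)])
train_X = np.array([[0.2], [0.25], [0.7], [0.75]])  # "manually labelled" training cells
train_y = np.array([0, 0, 1, 1])
clf = SVC(kernel="linear").fit(train_X, train_y)
print("detected nuclei:", n_cells, "phenotypes:", clf.predict(feats))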
Consensus properties and their large-scale applications for the gene duplication problem.
Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver
2016-06-01
Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets.
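The "strict consensus" referred to above is simply the set of clusters present in every input gene tree; a small sketch (not the authors' scalable algorithm) using nested tuples as trees:

def clusters(tree):
    """Collect the leaf set (cluster) below every internal node of a nested-tuple tree."""
    out = set()
    def leaves(node):
        if isinstance(node, tuple):
            s = frozenset().union(*(leaves(c) for c in node))
            out.add(s)
            return s
        return frozenset([node])
    leaves(tree)
    return out

def strict_consensus(trees):
    """Clusters common to every input tree (the 'strict consensus' of the text)."""
    common = clusters(trees[0])
    for t in trees[1:]:
        common &= clusters(t)
    return common

gene_trees = [
    ((("a", "b"), "c"), ("d", "e")),
    ((("a", "b"), "d"), ("c", "e")),
]
print(sorted(map(sorted, strict_consensus(gene_trees))))   # {a,b} and the full leaf set

The weaker Pareto property discussed above asks that each such common cluster appear in at least one optimal species tree, not in all of them.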
NASA Technical Reports Server (NTRS)
Owre, Sam; Shankar, Natarajan; Butler, Ricky W. (Technical Monitor)
2001-01-01
The purpose of this task was to provide a mechanism for theory interpretations in the Prototype Verification System (PVS) so that it is possible to demonstrate the consistency of a theory by exhibiting an interpretation that validates the axioms. The mechanization makes it possible to show that one collection of theories is correctly interpreted by another collection of theories under a user-specified interpretation for the uninterpreted types and constants. A theory instance is generated and imported, while the axiom instances are generated as proof obligations to ensure that the interpretation is valid. Interpretations can be used to show that an implementation is a correct refinement of a specification, that an axiomatically defined specification is consistent, or that an axiomatically defined specification captures its intended models. In addition, the theory parameter mechanism has been extended with a notion of theory as parameter so that a theory instance can be given as an actual parameter to an imported theory. Theory interpretations can thus be used to refine an abstract specification or to demonstrate the consistency of an axiomatic theory. In this report we describe the mechanism in detail. This extension is a part of PVS version 3.0, which will be publicly released in mid-2001.
2014-06-19
decision theory (Berger, 1985), and quantum probability theory (Busemeyer, Pothos, Franco, & Trueblood, 2011). Similarly, explanations in the ... strengths of associations) are consciously inaccessible and constitute the implicit knowledge of the model (Gonzalez & Lebiere, 2005; Lebiere, Wallach ... in memory (Lebiere et al., 2013). It is important to note that this wholly implicit process is not consciously available to the model. The second level
DCU@TRECMed 2012: Using Ad-Hoc Baselines for Domain-Specific Retrieval
2012-11-01
description to extend the query, for example: Patients with complicated GERD who receive endoscopy will be extended with Gastroesophageal reflux disease ... Diseases and Related Health Problems, version 9) for the patient’s admission or discharge status [1, 5]; treating negation (e.g. negative test results or...codes were mapped to a description of the code, usually a short phrase/sentence. For instance, the ICD9 code 253.5 corresponds to the disease Diabetes
The Computational Science Environment (CSE)
2009-08-01
supported CSE platforms. Developers can also build against different versions of a particular package (e.g., Python-2.4 vs. Python-2.5) via a... 8.2.1 TK Testing Error and Workaround: It has been found that TK tends to produce more testing errors when using KDE, and in some instances, the test ... suite freezes when reaching the TK select test. These issues have not been seen when using Gnome. 8.2.2 VTK Testing Error and Workaround: VTK test
Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian
2013-07-09
In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference-effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.
Learning to rank atlases for multiple-atlas segmentation.
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Shen, Dinggang
2014-10-01
Recently, multiple-atlas segmentation (MAS) has achieved a great success in the medical imaging area. The key assumption is that multiple atlases have greater chances of correctly labeling a target image than a single atlas. However, the problem of atlas selection still remains unexplored. Traditionally, image similarity is used to select a set of atlases. Unfortunately, this heuristic criterion is not necessarily related to the final segmentation performance. To solve this seemingly simple but critical problem, we propose a learning-based atlas selection method to pick up the best atlases that would lead to a more accurate segmentation. Our main idea is to learn the relationship between the pairwise appearance of observed instances (i.e., a pair of atlas and target images) and their final labeling performance (e.g., using the Dice ratio). In this way, we select the best atlases based on their expected labeling accuracy. Our atlas selection method is general enough to be integrated with any existing MAS method. We show the advantages of our atlas selection method in an extensive experimental evaluation in the ADNI, SATA, IXI, and LONI LPBA40 datasets. As shown in the experiments, our method can boost the performance of three widely used MAS methods, outperforming other learning-based and image-similarity-based atlas selection methods.
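A minimal sketch of the learning-to-rank idea, assuming that each (atlas, target) pair is described by a few appearance features and that the Dice ratio obtained with that atlas is known for the training pairs; a regressor then scores candidate atlases for a new target, and the top-scoring ones are selected. The feature set, learner, and synthetic data below are placeholders, not those of the paper.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Training data: appearance features for each (atlas, target) pair and the
# Dice ratio actually obtained when that atlas was used to label that target.
n_pairs = 200
X_train = rng.uniform(0, 1, (n_pairs, 3))
dice_train = 0.5 + 0.3 * X_train[:, 0] + 0.1 * X_train[:, 1] + rng.normal(0, 0.02, n_pairs)

ranker = Ridge(alpha=1.0).fit(X_train, dice_train)

# New target image: score every candidate atlas and keep the expected best ones.
X_candidates = rng.uniform(0, 1, (15, 3))
predicted_dice = ranker.predict(X_candidates)
top_k = np.argsort(predicted_dice)[::-1][:5]
print("selected atlases:", top_k, "expected Dice:", np.round(predicted_dice[top_k], 3))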
Posterior segment involvement in cat-scratch disease: A case series.
Tolou, C; Mahieu, L; Martin-Blondel, G; Ollé, P; Matonti, F; Hamid, S; Benouaich, X; Debard, A; Cassagne, M; Soler, V
2015-12-01
Cat-scratch disease (CSD) is a systemic infectious disease. The most well-known posterior segment presentation is neuroretinitis with a macular star. In this study, we present a case series emphasising the heterogeneity of the disease and the various posterior segment manifestations. A retrospective case series of consecutive patients presenting with posterior segment CSD, over a 5-year period (2010 to 2015), at two ophthalmological centres in Midi-Pyrénées. Twelve patients (17 eyes) were included, of whom 11 (92%) presented with rapidly decreasing visual acuity, with 6 of these (50%) extremely abrupt. CSD was bilateral in 5 (42% of all patients). Posterior manifestations were: 12 instances of optic nerve edema (100%), 8 of focal chorioretinitis (67%) and only 6 of the classic macular edema with macular star (25% at first examination, but 50% later). Other ophthalmological complications developed in three patients; one developed acute anterior ischemic optic neuropathy, one a retrohyaloid hemorrhage and one a branch retinal artery occlusion, all secondary to occlusive focal vasculitis adjacent to focal chorioretinitis. Classical neuroretinitis with macular star is not the only clinical presentation of CSD. Practitioners should screen for Bartonella henselae in all patients with papillitis or focal chorioretinitis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Heritability and reliability of automatically segmented human hippocampal formation subregions
Whelan, Christopher D.; Hibar, Derrek P.; van Velzen, Laura S.; Zannas, Anthony S.; Carrillo-Roa, Tania; McMahon, Katie; Prasad, Gautam; Kelly, Sinéad; Faskowitz, Joshua; deZubiracay, Greig; Iglesias, Juan E.; van Erp, Theo G.M.; Frodl, Thomas; Martin, Nicholas G.; Wright, Margaret J.; Jahanshad, Neda; Schmaal, Lianne; Sämann, Philipp G.; Thompson, Paul M.
2016-01-01
The human hippocampal formation can be divided into a set of cytoarchitecturally and functionally distinct subregions, involved in different aspects of memory formation. Neuroanatomical disruptions within these subregions are associated with several debilitating brain disorders including Alzheimer’s disease, major depression, schizophrenia, and bipolar disorder. Multi-center brain imaging consortia, such as the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) consortium, are interested in studying disease effects on these subregions, and in the genetic factors that affect them. For large-scale studies, automated extraction and subsequent genomic association studies of these hippocampal subregion measures may provide additional insight. Here, we evaluated the test–retest reliability and transplatform reliability (1.5 T versus 3 T) of the subregion segmentation module in the FreeSurfer software package using three independent cohorts of healthy adults, one young (Queensland Twins Imaging Study, N = 39), another elderly (Alzheimer’s Disease Neuroimaging Initiative, ADNI-2, N = 163) and another mixed cohort of healthy and depressed participants (Max Planck Institute, MPIP, N = 598). We also investigated agreement between the most recent version of this algorithm (v6.0) and an older version (v5.3), again using the ADNI-2 and MPIP cohorts in addition to a sample from the Netherlands Study for Depression and Anxiety (NESDA) (N = 221). Finally, we estimated the heritability (h2) of the segmented subregion volumes using the full sample of young, healthy QTIM twins (N = 728). Test–retest reliability was high for all twelve subregions in the 3 T ADNI-2 sample (intraclass correlation coefficient (ICC) = 0.70–0.97) and moderate-to-high in the 4 T QTIM sample (ICC = 0.5–0.89). Transplatform reliability was strong for eleven of the twelve subregions (ICC = 0.66–0.96); however, the hippocampal fissure was not consistently reconstructed across 1.5 T and 3 T field strengths (ICC = 0.47–0.57). Between-version agreement was moderate for the hippocampal tail, subiculum and presubiculum (ICC = 0.78–0.84; Dice Similarity Coefficient (DSC) = 0.55–0.70), and poor for all other subregions (ICC = 0.34–0.81; DSC = 0.28–0.51). All hippocampal subregion volumes were highly heritable (h2 = 0.67–0.91). Our findings indicate that eleven of the twelve human hippocampal subregions segmented using FreeSurfer version 6.0 may serve as reliable and informative quantitative phenotypes for future multi-site imaging genetics initiatives such as those of the ENIGMA consortium. PMID:26747746
The Librarians' Dilemma: Should We Purchase the E-Book? The P-Book? Both? Neither?
NASA Astrophysics Data System (ADS)
Holmquist, J.
2012-08-01
Publishers of books in astronomy and astrophysics vary greatly in how they market the electronic versions of the print. In most cases, the electronic version for a single user costs the same as the print, and it costs even more for multiple simultaneous users. Some publishers encourage libraries to subscribe to an entire year's output by subject; others make single titles available via the publisher's website, or a vendor's platform such as ebrary. In the latter instance, readers are often surprised to discover that although they can read the entire text online, they can print or download only limited portions. Can we afford to purchase both print and online versions if patrons are only using one? What is the library's obligation to future users? These and other questions will be addressed.
Yanovich, Polina; Isenhower, Robert W.; Sage, Jacob; Torres, Elizabeth B.
2013-01-01
Background Often in Parkinson’s disease (PD) motor-related problems overshadow latent non-motor deficits as it is difficult to dissociate one from the other with commonly used observational inventories. Here we ask if the variability patterns of hand speed and acceleration would be revealing of deficits in spatial-orientation related decisions as patients performed a familiar reach-to-grasp task. To this end we use spatial-orientation priming which normally facilitates motor-program selection and asked whether in PD spatial-orientation priming helps or hinders performance. Methods To dissociate spatial-orientation- and motor-related deficits participants performed two versions of the task. The biomechanical version (DEFAULT) required the same postural- and hand-paths as the orientation-priming version (primed-UP). Any differences in the patients here could not be due to motor issues as the tasks were biomechanically identical. The other priming version (primed-DOWN) however required additional spatial and postural processing. We assessed in all three cases both the forward segment deliberately aimed towards the spatial-target and the retracting segment, spontaneously bringing the hand to rest without an instructed goal. Results and Conclusions We found that forward and retracting segments belonged in two different statistical classes according to the fluctuations of speed and acceleration maxima. Further inspection revealed conservation of the forward (voluntary) control of speed but in PD a discontinuity of this control emerged during the uninstructed retractions which was absent in NC. Two PD groups self-emerged: one group in which priming always affected the retractions and the other in which only the more challenging primed-DOWN condition was affected. These PD-groups self-formed according to the speed variability patterns, which systematically changed along a gradient that depended on the priming, thus dissociating motor from spatial-orientation issues. Priming did not facilitate the motor task in PD but it did reveal a breakdown in the spatial-orientation decision that was independent of the motor-postural path. PMID:23843963
A Model for Semantic Equivalence Discovery for Harmonizing Master Data
NASA Astrophysics Data System (ADS)
Piprani, Baba
IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining equivalency of multiple data instances against the given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record. The results of a real-life application define how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution—including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model, and a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute-based model as implemented in an RDBMS.
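A minimal sketch of the probe-and-scorecard idea, assuming a handful of weighted attribute probes and an acceptance threshold; the probes, weights, and records below are invented for illustration and do not reflect the paper's ORM models.

def probe_name(a, b):
    return 1.0 if a["name"].strip().lower() == b["name"].strip().lower() else 0.0

def probe_birth(a, b):
    return 1.0 if a["dob"] == b["dob"] else 0.0

def probe_email(a, b):
    return 1.0 if a["email"].lower() == b["email"].lower() else 0.0

# Scorecard: each probe contributes a weighted score; the total decides equivalence.
PROBES = [(probe_name, 0.4), (probe_birth, 0.4), (probe_email, 0.2)]
THRESHOLD = 0.7

def equivalent(a, b):
    score = sum(w * probe(a, b) for probe, w in PROBES)
    return score >= THRESHOLD, round(score, 2)

rec1 = {"name": "Jane Doe ", "dob": "1980-02-01", "email": "JDOE@example.org"}
rec2 = {"name": "jane doe", "dob": "1980-02-01", "email": "jane.doe@example.org"}
print(equivalent(rec1, rec2))   # (True, 0.8): judged the same master record despite the e-mail mismatch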
2017-01-01
Drosophila segmentation is a well-established paradigm for developmental pattern formation. However, the later stages of segment patterning, regulated by the “pair-rule” genes, are still not well understood at the system level. Building on established genetic interactions, I construct a logical model of the Drosophila pair-rule system that takes into account the demonstrated stage-specific architecture of the pair-rule gene network. Simulation of this model can accurately recapitulate the observed spatiotemporal expression of the pair-rule genes, but only when the system is provided with dynamic “gap” inputs. This result suggests that dynamic shifts of pair-rule stripes are essential for segment patterning in the trunk and provides a functional role for observed posterior-to-anterior gap domain shifts that occur during cellularisation. The model also suggests revised patterning mechanisms for the parasegment boundaries and explains the aetiology of the even-skipped null mutant phenotype. Strikingly, a slightly modified version of the model is able to pattern segments in either simultaneous or sequential modes, depending only on initial conditions. This suggests that fundamentally similar mechanisms may underlie segmentation in short-germ and long-germ arthropods. PMID:28953896
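As a generic illustration of simulating a logical model with dynamic gap inputs (this is not the article's pair-rule network), the toy Boolean circuit below is updated synchronously in a row of cells while its gap-like input domain shifts anteriorly at each step, so the downstream expression domain shifts with it.

import numpy as np

def step(state, gap_input):
    """One synchronous update of a toy two-gene circuit in every cell.

    Gene X is switched on by the gap-like input; gene Y is on wherever X is off.
    """
    x, y = state
    new_x = gap_input.copy()
    new_y = ~x
    return np.array([new_x, new_y])

n_cells, n_steps = 20, 5
state = np.zeros((2, n_cells), dtype=bool)
for t in range(n_steps):
    # dynamic gap input: an "on" domain that shifts one cell to the anterior each step
    gap = np.zeros(n_cells, dtype=bool)
    gap[8 - t:14 - t] = True
    state = step(state, gap)
    print("t=%d  X:" % t, "".join("#" if v else "." for v in state[0]))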
SRB Processing Facilities Media Event
2016-03-01
Members of the news media watch as a crane is used to move one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket to a test stand in the Rotation, Processing and Surge Facility at NASA’s Kennedy Space Center in Florida. Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will prepare the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Labor Market Structure and Salary Determination among Professional Basketball Players.
ERIC Educational Resources Information Center
Wallace, Michael
1988-01-01
The author investigates the labor market structure and determinants of salaries for professional basketball players. An expanded version of the resource perspective is used. A three-tiered model of labor market segmentation is revealed for professional basketball players, but other variables also are important in salary determination. (Author/CH)
Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used for image segmentation, following either supervised or unsupervised approaches. Supervised segmentation methods achieve high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. Unsupervised segmentation methods, on the other hand, use no prior knowledge and lead to a low level of performance. Semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and which increase segmentation accuracy. We then use these improved classifiers, together with a graph-based semi-supervised classifier, as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers.
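The ensemble idea can be sketched with stock scikit-learn components: two self-training classifiers plus a graph-based label-spreading member combined by majority vote, standing in for the paper's expectation filtering maximization and MCo_Training algorithms; the blob data imitates voxel features with only a few labelled samples.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import LabelSpreading, SelfTrainingClassifier
from sklearn.svm import SVC

# Toy "voxel features": three tissue-like clusters, with only a few labelled samples.
X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=1.2, random_state=0)
y = np.full_like(y_true, -1)                      # -1 marks unlabelled voxels
rng = np.random.default_rng(0)
labelled = rng.choice(len(y), size=15, replace=False)
y[labelled] = y_true[labelled]

# Semi-supervised members, combined by majority vote.
members = [
    SelfTrainingClassifier(SVC(probability=True)).fit(X, y),
    SelfTrainingClassifier(KNeighborsClassifier(n_neighbors=5)).fit(X, y),
    LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y),
]
votes = np.stack([m.predict(X) for m in members])
ensemble = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print(f"ensemble agreement with ground truth: {(ensemble == y_true).mean():.2f}")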
Elleithy, Khaled; Elleithy, Abdelrahman
2018-01-01
An eye exam can be as efficacious as a physical one in determining health concerns. Retina screening can be the very first clue for detecting a variety of hidden health issues, including pre-diabetes and diabetes. Throughout clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retina fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects the diagnosis accuracy, which in turn affects the subsequent clinical treatment steps. This paper proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by each subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, our proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The proposed system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. The proposed system is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures. PMID:29888146
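To make the thresholding-plus-morphology idea concrete, here is a minimal, hedged sketch (not the authors' hybrid fuzzy algorithm): vessel-like structures are enhanced with a morphological black-hat and a local threshold stands in for fuzzy thresholding. The sample image, structuring-element sizes, and threshold settings are illustrative assumptions.

```python
from skimage import data, exposure, filters, morphology

fundus = data.retina()                           # sample RGB fundus image from scikit-image
green = fundus[..., 1] / 255.0                   # vessels have the best contrast in the green channel
green = exposure.equalize_adapthist(green)       # CLAHE-style local contrast enhancement

# Dark, elongated vessels become bright after a black-hat transform.
enhanced = morphology.black_tophat(green, morphology.disk(8))

# A local (adaptive) threshold stands in for the paper's fuzzy thresholding step.
mask = enhanced > filters.threshold_local(enhanced, block_size=41)
mask = morphology.remove_small_objects(mask, min_size=100)   # drop speckle
mask = morphology.binary_closing(mask, morphology.disk(1))   # bridge small gaps

print("fraction of pixels labeled as vessel:", round(float(mask.mean()), 4))
```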
Simplifying the Reinsch algorithm for the Baker-Campbell-Hausdorff series
NASA Astrophysics Data System (ADS)
Van-Brunt, Alexander; Visser, Matt
2016-02-01
The Goldberg version of the Baker-Campbell-Hausdorff series computes the quantity Z(X, Y) = ln(e^X e^Y) = Σ_w g(w) w(X, Y), where X and Y are not necessarily commuting, in terms of "words" w constructed from the {X, Y} "alphabet." The so-called Goldberg coefficients g(w) are the central topic of this article. This Baker-Campbell-Hausdorff series is a general purpose tool of very wide applicability in mathematical physics, quantum physics, and many other fields. The Reinsch algorithm for the truncated series permits one to calculate the Goldberg coefficients up to some fixed word length |w| by using nilpotent (|w| + 1) × (|w| + 1) matrices. We shall show how to further simplify the Reinsch algorithm, making its implementation (in principle) utterly straightforward using "off the shelf" symbolic manipulation software. Specific computations provide examples which help to provide a deeper understanding of the Goldberg coefficients and their properties. For instance, we shall establish some strict bounds (and some equalities) on the number of non-zero Goldberg coefficients. Unfortunately, we shall see that the number of nonzero Goldberg coefficients often grows very rapidly (in fact exponentially) with the word length |w|. Furthermore, the simplified Reinsch algorithm readily generalizes to many closely related but still quite distinct problems—we shall also present closely related results for the symmetric product S(X, Y) = ln(e^{X/2} e^Y e^{X/2}) = Σ_w g_S(w) w(X, Y). Variations on such themes are straightforward. For instance, one can just as easily consider the "loop" product L(X, Y) = ln(e^X e^Y e^{-X} e^{-Y}) = Σ_w g_L(w) w(X, Y). This "loop" type of series is of interest, for instance, when considering either differential geometric parallel transport around a closed curve, non-Abelian versions of Stokes' theorem, or even Wigner rotation/Thomas precession in special relativity. Several other closely related series are also briefly investigated.
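As a hedged numerical illustration of the series being expanded (this is not the Reinsch algorithm itself, and the matrix sizes and scaling are arbitrary choices), one can compare ln(e^X e^Y) computed with SciPy against the first few BCH terms X + Y + [X,Y]/2 + ([X,[X,Y]] + [Y,[Y,X]])/12:

```python
import numpy as np
from scipy.linalg import expm, logm

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

rng = np.random.default_rng(1)
X = 0.05 * rng.standard_normal((4, 4))   # small norms so the series converges quickly
Y = 0.05 * rng.standard_normal((4, 4))

Z_exact = logm(expm(X) @ expm(Y))
Z_bch3 = X + Y + 0.5 * comm(X, Y) + (comm(X, comm(X, Y)) + comm(Y, comm(Y, X))) / 12.0

print("norm of the third-order truncation error:", np.linalg.norm(Z_exact - Z_bch3))
```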
3D variational brain tumor segmentation on a clustered feature set
NASA Astrophysics Data System (ADS)
Popuri, Karteek; Cobzas, Dana; Jagersand, Martin; Shah, Sirish L.; Murtha, Albert
2009-02-01
Tumor segmentation from MRI data is a particularly challenging and time consuming task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. Our work addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multi-dimensional feature set. Further, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this paper is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to the previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned inside and outside region voxel probabilities in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance, during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters in the ventricles to be in the tumor and hence better disambiguate the tumor from brain tissue. We show the performance of our method on real MRI scans. The experimental dataset includes MRI scans, from patients with difficult instances, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Our method shows good results on these test cases.
Integrating shape into an interactive segmentation framework
NASA Astrophysics Data System (ADS)
Kamalakannan, S.; Bryant, B.; Sari-Sarraf, H.; Long, R.; Antani, S.; Thoma, G.
2013-02-01
This paper presents a novel interactive annotation toolbox which extends a well-known user-steered segmentation framework, namely Intelligent Scissors (IS). IS, posed as a shortest path problem, is essentially driven by lower-level, image-based features. All the higher level knowledge about the problem domain is obtained from the user through mouse clicks. The proposed work integrates one higher-level feature, namely shape up to a rigid transform, into the IS framework, thus reducing the burden on the user and the subjectivity involved in the annotation procedure, especially during instances of occlusions, broken edges, noise and spurious boundaries. The above-mentioned scenarios are commonplace in medical image annotation applications and, hence, such a tool will be of immense help to the medical community. As a first step, an offline training procedure is performed in which a mean shape and the corresponding shape variance is computed by registering training shapes up to a rigid transform in a level-set framework. The user starts the interactive segmentation procedure by providing a training segment, which is a part of the target boundary. A partial shape matching scheme based on a scale-invariant curvature signature is employed in order to extract shape correspondences and subsequently predict the shape of the unsegmented target boundary. A "zone of confidence" is generated for the predicted boundary to accommodate shape variations. The method is evaluated on segmentation of digital chest x-ray images for lung annotation, which is a crucial step in developing algorithms for screening tuberculosis.
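For intuition, a minimal sketch of the shortest-path core of Intelligent Scissors (without the shape prior proposed in the paper) is shown below; the sample image, the gradient-based cost function, and the two "click" coordinates are all illustrative assumptions.

```python
from skimage import data, filters, graph
from skimage.color import rgb2gray

image = rgb2gray(data.astronaut())
edges = filters.sobel(image)              # gradient magnitude
cost = 1.0 / (edges + 1e-3)               # strong edges become cheap to traverse

start, end = (100, 150), (300, 400)       # two hypothetical user mouse clicks (row, col)
path, total_cost = graph.route_through_array(cost, start, end, fully_connected=True)

print("livewire-style boundary segment: %d pixels, cost %.1f" % (len(path), total_cost))
```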
NASA Astrophysics Data System (ADS)
Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong
2017-06-01
Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for retinal diseases diagnosis and treatment. As one kind of retinal disease (e.g., diabetic retinopathy) may contain multiple lesions (e.g., edema, exudates, and microaneurysms) and eye patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, one single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal macular and macular with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving the average accuracy of 88.72% for the cases with multiple lesions, which better assists macular disease diagnosis and treatment.
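A hedged sketch of steps (3)-(4), the multilabel detectors, is given below using generic scikit-learn components rather than the authors' MIML-LR code; the synthetic feature vectors stand in for the per-ROI descriptors, and the three label columns stand in for epiretinal membrane, edema, and drusen.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Stand-in for ROI descriptors (rows) and three co-occurring lesion labels (columns).
X, Y = make_multilabel_classification(n_samples=400, n_features=20,
                                      n_classes=3, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

# One detector per lesion type; an ROI/image may receive several labels at once.
detectors = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
Y_hat = detectors.predict(X_te)

exact_match = (Y_hat == Y_te).all(axis=1).mean()
print("exact-match accuracy over all three labels:", round(float(exact_match), 3))
```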
NASA Astrophysics Data System (ADS)
Kwon, N.; Gentle, J.; Pierce, S. A.
2015-12-01
Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a valid problem in the 21st century scientific process, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase for Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful for both practical applications as a teaching tool and case study for groundwater management, as well as informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded Ontosoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers. Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance for the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and developer guide available via an open repository, assuring the source for the application is accessible with version control and potential for new branch developments. Finally, metadata about the software has been completed within the OntoSoft portal to provide descriptive curation, make GWDSS searchable, and complete documentation of the scientific software lifecycle.
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
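A deliberately stripped-down sketch of the pixel-level feedback idea follows (this is not SuBSENSE, which also relies on spatiotemporal binary features and color); the synthetic frames, thresholds, learning rates, and the label-flip noise proxy are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 64, 64, 200
frames = rng.normal(0.5, 0.02, size=(T, H, W))    # synthetic static scene with sensor noise
frames[100:, 20:40, 20:40] += 0.3                 # a foreground object appears at frame 100

bg = frames[0].copy()                             # per-pixel background model
thresh = np.full((H, W), 0.05)                    # per-pixel decision threshold
prev_fg = np.zeros((H, W), dtype=bool)

for t in range(1, T):
    fg = np.abs(frames[t] - bg) > thresh          # pixel-level change detection
    flips = fg != prev_fg                         # label flips as a crude local-noise proxy
    # Feedback: raise the threshold where the segmentation is unstable, relax it elsewhere.
    thresh = np.clip(thresh + np.where(flips, 0.002, -0.001), 0.02, 0.3)
    bg = np.where(fg, bg, 0.95 * bg + 0.05 * frames[t])   # update background pixels only
    prev_fg = fg

print("foreground pixels detected in the last frame:", int(fg.sum()))
```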
GPU-based relative fuzzy connectedness image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.
2013-01-15
Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
Region segmentation and contextual cuing in visual search.
Conci, Markus; von Mühlenen, Adrian
2009-10-01
Contextual information provides an important source for behavioral orienting. For instance, in the contextual-cuing paradigm, repetitions of the spatial layout of elements in a search display can guide attention to the target location. The present study explored how this contextual-cuing effect is influenced by the grouping of search elements. In Experiment 1, four nontarget items could be arranged collinearly to form an imaginary square. The presence of such a square eliminated the contextual-cuing effect, despite the fact that the square's location still had a predictive value for the target location. Three follow-up experiments demonstrated that other types of grouping abolished contextual cuing in a similar way and that the mere presence of a task-irrelevant singleton had only a diminishing effect (by half) on contextual cuing. These findings suggest that a segmented, salient region can interfere with contextual cuing, reducing its predictive impact on search.
NASA Technical Reports Server (NTRS)
Erickson, Robert J.; Howe, John, Jr.; Kulp, Galen W.; VanKeuren, Steven P.
2008-01-01
The International Space Station (ISS) United States Orbital Segment (USOS) Oxygen Generation System (OGS) was originally intended to be installed in ISS Node 3. The OGS rack delivery was accelerated, and it was launched to ISS in July of 2006 and installed in the US Laboratory Module. Various modification kits were installed to provide its interfaces, and the OGS was first activated in July of 2007 for 15 hours. In October of 2007 it was again activated for 76 hours with varied production rates and day/night cycling. Operational time in each instance was limited by the quantity of feedwater in a Payload Water Reservoir (PWR) bag. Feedwater will be provided by PWR bag until the USOS Water Recovery System (WRS) is delivered to ISS in fall of 2008. This paper will discuss operating experience and characteristics of the OGS, as well as operational issues and their resolution.
Morphodynamics of submarine channel inception revealed by new experimental approach
de Leeuw, Jan; Eggenhuisen, Joris T.; Cartigny, Matthieu J. B.
2016-01-01
Submarine channels are ubiquitous on the seafloor and their inception and evolution is a result of dynamic interaction between turbidity currents and the evolving seafloor. However, the morphodynamic links between channel inception and flow dynamics have not yet been monitored in experiments and only in one instance on the modern seafloor. Previous experimental flows did not show channel inception, because flow conditions were not appropriately scaled to sustain suspended sediment transport. Here we introduce and apply new scaling constraints for similarity between natural and experimental turbidity currents. The scaled currents initiate a leveed channel from an initially featureless slope. Channelization commences with deposition of levees in some slope segments and erosion of a conduit in other segments. Channel relief and flow confinement increase progressively during subsequent flows. This morphodynamic evolution determines the architecture of submarine channel deposits in the stratigraphic record and efficiency of sediment bypass to the basin floor. PMID:26996440
On Multiple Zagreb Indices of TiO2 Nanotubes.
Malik, Mehar Ali; Imran, Muhammad
2015-01-01
The First and Second Zagreb indices were first introduced by I. Gutman and N. Trinajstic in 1972. It is reported that these indices are useful in the study of anti-inflammatory activities of certain chemical instances, and elsewhere. Recently, the first and second multiple Zagreb indices of a graph were introduced by Ghorbani and Azimi in 2012. In this paper, we calculate the Zagreb indices and the multiplicative versions of the Zagreb indices of an infinite class of Titania nanotubes TiO(2)[m,n].
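For reference, the indices involved are M1(G) = Σ_v deg(v)^2 and M2(G) = Σ_{uv∈E} deg(u)·deg(v), with the multiplicative counterparts obtained by replacing the sums by products. A hedged sketch computing them for an arbitrary graph with networkx is given below; constructing the TiO2[m,n] nanotube graph itself is a separate step and is not shown.

```python
import math
import networkx as nx

def zagreb_indices(G):
    """Return (M1, M2, PM1, PM2): the first/second Zagreb indices and their
    multiplicative versions for a simple graph G."""
    deg = dict(G.degree())
    M1 = sum(d * d for d in deg.values())
    M2 = sum(deg[u] * deg[v] for u, v in G.edges())
    PM1 = math.prod(d * d for d in deg.values())
    PM2 = math.prod(deg[u] * deg[v] for u, v in G.edges())
    return M1, M2, PM1, PM2

# Sanity check on a 3-regular graph (10 vertices, 15 edges): M1 = 90, M2 = 135.
print(zagreb_indices(nx.petersen_graph()))
```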
Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam
2014-01-01
This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO), distinguished by a hybrid search-space exploration strategy proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature.
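A hedged sketch of the plain IWO loop that exIWO builds on (not the authors' hybrid variant) is shown below, minimizing the sphere function; the population sizes, seed counts, and dispersal schedule are arbitrary illustrative choices.

```python
import numpy as np

def iwo(f, dim=5, pop_init=10, pop_max=30, seeds=(1, 5), iters=200,
        sigma_init=1.0, sigma_final=0.01, n_mod=3, rng_seed=0):
    """Basic Invasive Weed Optimization for minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(rng_seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_init, dim))
    for it in range(iters):
        fit = np.array([f(x) for x in pop])
        # Dispersal radius shrinks nonlinearly over the iterations.
        sigma = sigma_final + (sigma_init - sigma_final) * ((iters - it) / iters) ** n_mod
        worst, best = fit.max(), fit.min()
        offspring = []
        for x, fx in zip(pop, fit):
            # Fitter weeds spread more seeds (linear scaling between seeds[0] and seeds[1]).
            ratio = 1.0 if worst == best else (worst - fx) / (worst - best)
            n_seeds = int(round(seeds[0] + ratio * (seeds[1] - seeds[0])))
            offspring.extend(x + sigma * rng.standard_normal(dim) for _ in range(n_seeds))
        pop = np.vstack([pop, np.array(offspring)])
        pop = pop[np.argsort([f(x) for x in pop])[:pop_max]]   # competitive exclusion
    return pop[0], f(pop[0])

best_x, best_f = iwo(lambda x: float(np.sum(x * x)))
print("best sphere-function value found:", best_f)
```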
NASA Astrophysics Data System (ADS)
Ayral, Thomas; Lee, Tsung-Han; Kotliar, Gabriel
2017-12-01
We present a unified perspective on dynamical mean-field theory (DMFT), density-matrix embedding theory (DMET), and rotationally invariant slave bosons (RISB). We show that DMET can be regarded as a simplification of the RISB method where the quasiparticle weight is set to unity. This relation makes it easy to transpose extensions of a given method to another: For instance, a temperature-dependent version of RISB can be used to derive a temperature-dependent free-energy formula for DMET.
ANNS An X Window Based Version of the AFIT Neural Network Simulator
1993-06-01
programmer or user can view the dynamic behavior of an algorithm and its changes of learning state while the neural network paradigms or algorithms...an object as "something you can do things to. An object has state, behavior, and identity; the structure and behavior of similar objects are defined in...their common class. The terms instance and object are interchangeable" [5:516]. The behavior of an object is "characterized by the actions that it
TAPRegExt: a VOResource Schema Extension for Describing TAP Services Version 1.0
NASA Astrophysics Data System (ADS)
Demleitner, Markus; Dowler, Patrick; Plante, Ray; Rixon, Guy; Taylor, Mark; Demleitner, Markus
2012-08-01
This document describes an XML encoding standard for metadata about services implementing the table access protocol TAP [TAP], referred to as TAPRegExt. Instance documents are part of the service's registry record or can be obtained from the service itself. They deliver information to both humans and software on the languages, output formats, and upload methods supported by the service, as well as data models implemented by the exposed tables, optional language features, and certain limits enforced by the service.
Lim, Ik Soo; Leek, E Charles
2012-07-01
Previous empirical studies have shown that information along visual contours is concentrated in regions of high curvature magnitude and that, for closed contours, segments of negative curvature (i.e., concave segments) carry greater perceptual relevance than corresponding regions of positive curvature (i.e., convex segments). Lately, Feldman and Singh (2005, Psychological Review, 112, 243-252) proposed a mathematical derivation to yield information content as a function of curvature along a contour. Here, we highlight several fundamental errors in their derivation and in its associated implementation, which are problematic in both mathematical and psychological senses. Instead, we propose an alternative mathematical formulation for the information measure of contour curvature that addresses these issues. Additionally, unlike in previous work, we extend this approach to 3-dimensional (3D) shape by providing a formal measure of information content for surface curvature and outline a modified version of the minima rule relating to part segmentation using curvature in 3D shape. Copyright 2012 APA, all rights reserved.
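Any such information measure is evaluated on the signed curvature along the contour; a hedged helper for estimating that quantity on a discrete closed contour is sketched below (this is generic finite-difference curvature, not the measure derived in either paper).

```python
import numpy as np

def closed_contour_curvature(pts):
    """Signed curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) for a closed
    polyline given as an (N, 2) array, using periodic central differences."""
    x, y = pts[:, 0], pts[:, 1]
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2.0 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2.0 * y + np.roll(y, 1)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

# A counter-clockwise circle of radius 2 should give curvature ~ +0.5 everywhere;
# concave (negative-curvature) segments of a closed shape come out negative.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack([2.0 * np.cos(t), 2.0 * np.sin(t)])
print(np.round(closed_contour_curvature(circle)[:5], 3))
```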
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The model is designed to enable decision makers to compare the economics of geothermal projects with the economics of alternative energy systems at an early stage in the decision process. The geothermal engineering and economic feasibility computer model (GEEF) is written in FORTRAN IV language and can be run on a mainframe or a mini-computer system. An abbreviated version of the model is being developed for usage in conjunction with a programmable desk calculator. The GEEF model has two main segments, namely (i) the engineering design/cost segment and (ii) the economic analysis segment. In the engineering segment, the model determines the numbers of production and injection wells, heat exchanger design, operating parameters for the system, requirement of supplementary system (to augment the working fluid temperature if the resource temperature is not sufficiently high), and the fluid flow rates. The model can handle single stage systems as well as two stage cascaded systems in which the second stage may involve a space heating application after a process heat application in the first stage.
The use of self checks and voting in software error detection - An empirical study
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.
1990-01-01
The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
Dynamics of uniaxially oriented elastomers using dielectric spectroscopy
NASA Astrophysics Data System (ADS)
Lee, Hyungki; Fragiadakis, Daniel; Martin, Darren; Runt, James
2009-03-01
We summarize our initial dielectric spectroscopy investigation of the dynamics of oriented segmented polyurethanes and crosslinked polyisoprene elastomers. A specially designed uniaxial stretching rig is used to control the draw ratio, and the electric field is applied normal to the draw direction. For the segmented PUs, we observe a dramatic reduction in relaxation strength of the soft phase segmental process with increasing extension ratio, accompanied by a modest decrease in relaxation frequency. Crosslinking of the polyisoprene was accomplished with dicumyl peroxide and the dynamics of uncrosslinked and crosslinked versions are investigated in the undrawn state and at different extension ratios. Complimentary analysis of the crosslinked PI is conducted with wide angle X- ray diffraction to examine possible strain-induced crystallization, DSC, and swelling experiments. Quantitative analysis of relaxation strengths and shapes as a function of draw ratio will be discussed.
Reconceptualizing Social Work Behaviors from a Human Rights Perspective
ERIC Educational Resources Information Center
Steen, Julie A.
2018-01-01
Although the human rights philosophy has relevance for many segments of the social work curriculum, the latest version of accreditation standards only includes a few behaviors specific to human rights. This deficit can be remedied by incorporating innovations found in the social work literature, which provides a wealth of material for…
MILE Curriculum [and Nine CD-ROM Lessons].
ERIC Educational Resources Information Center
Reiman, John
This curriculum on money management skills for deaf adolescent and young adult students is presented on nine video CD-ROMs as well as in a print version. The curriculum was developed following a survey of the needs of school and rehabilitation programs. It was also piloted and subsequently revised. Each teaching segment is presented in sign…
Automated tissue segmentation of MR brain images in the presence of white matter lesions.
Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier
2017-01-01
Over the last few years, the increasing interest in brain tissue volume measurements in clinical settings has led to the development of a wide number of automated tissue segmentation methods. However, white matter lesions are known to reduce the performance of automated tissue segmentation methods, which requires manual annotation of the lesions and refilling them before segmentation, a tedious and time-consuming task. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity and probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of Multiple Sclerosis (MS) patient images. On both databases, we validate the performance of our method against other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was at the time of submission the best ranked unsupervised intensity model method of the challenge (7th position) and clearly outperformed the other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between the images segmented with our method and the same images where manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieved very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.
Risk Assessment Update: Russian Segment
NASA Technical Reports Server (NTRS)
Christiansen, Eric; Lear, Dana; Hyde, James; Bjorkman, Michael; Hoffman, Kevin
2012-01-01
BUMPER-II version 1.95j source code was provided to RSC-E and Khrunichev at the January 2012 MMOD TIM in Moscow. MEMCxP and ORDEM 3.0 environments are implemented as external data files. NASA provided a sample ORDEM 3.0 ".key" & ".daf" environment file set for demonstration and benchmarking of the BUMPER-II v1.95j installation at the Jan-12 TIM. ORDEM 3.0 has been completed and is currently in beta testing. NASA will provide a preliminary set of ORDEM 3.0 ".key" & ".daf" environment files for the years 2012 through 2028. Bumper output files produced using the new ORDEM 3.0 data files are intended for internal use only, not for requirements verification. Output files will contain these words: ORDEM FILE DESCRIPTION = PRELIMINARY VERSION: not for production. The projectile density term in many BUMPER-II ballistic limit equations will need to be updated. Cube demo scripts and output files delivered at the Jan-12 TIM have been updated for the new ORDEM 3.0 data files. Risk assessment results based on ORDEM 3.0 and MEM will be presented for the Russian Segment (RS) of ISS.
Fotopoulos, Christos; Krystallis, Athanasios; Vassallo, Marco; Pagiaslis, Anastasios
2009-02-01
Recognising the need for a more statistically robust instrument to investigate general food selection determinants, this research validates and confirms the Food Choice Questionnaire's (FCQ) factorial design, develops ad hoc a more robust FCQ version, and tests its ability to discriminate between consumer segments in terms of the importance they assign to the FCQ motivational factors. The original FCQ appears to represent a comprehensive and reliable research instrument. However, the empirical data do not support the robustness of its 9-factorial design. On the other hand, segmentation results at the subpopulation level based on the enhanced FCQ version bring about an optimistic message for the FCQ's ability to predict food selection behaviour. The paper concludes that some of the basic components of the original FCQ can be used as a basis for a new general food motivation typology. The development of such a new instrument, with fewer, higher-abstraction FCQ-based dimensions and fewer items per dimension, is a right step forward; yet such a step should be theory-driven, while rigorous statistical testing across and within populations would be necessary.
NASA Astrophysics Data System (ADS)
Behlim, Sadaf Iqbal; Syed, Tahir Qasim; Malik, Muhammad Yameen; Vigneron, Vincent
2016-11-01
Grouping image tokens is an intermediate step needed to arrive at a meaningful image representation and summarization. Usually, perceptual cues, for instance gestalt properties, inform token grouping. However, they do not take into account structural continuities that could be derived from other tokens belonging to similar structures irrespective of their location. We propose an image representation that encodes structural constraints emerging from local binary patterns (LBP), which provides a long-distance measure of similarity but in a structurally connected way. Our representation provides a grouping of pixels or larger image tokens that is free of numeric similarity measures and could therefore be extended to nonmetric spaces. The representation lends itself nicely to ubiquitous image processing applications such as connected component labeling and segmentation. We test our proposed representation on the perceptual grouping or segmentation task on the popular Berkeley segmentation dataset (BSD500), achieving an average F-measure of 0.559 with respect to human-segmented images. Our algorithm achieves a high average recall of 0.787 and is therefore well-suited to other applications such as object retrieval and category-independent object recognition. The proposed merging heuristic based on levels of singular tree component has shown promising results on the BSD500 dataset and currently ranks 12th among all benchmarked algorithms, but contrary to the others, it requires no data-driven training or specialized preprocessing.
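The LBP building block referred to above can be computed directly with scikit-image; the following hedged sketch (not the authors' structural-grouping representation) assigns each pixel a uniform LBP code and then connected-component labels the pixels sharing one illustrative code.

```python
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.measure import label

gray = rgb2gray(data.astronaut())
# 8 neighbours at radius 1, "uniform" mapping -> codes 0..9 describing local structure.
codes = local_binary_pattern(gray, P=8, R=1, method="uniform")

# Group pixels sharing one structural code (an arbitrary choice for illustration),
# then split that group into spatially connected tokens.
same_structure = codes == 4
tokens = label(same_structure)
print("connected tokens carrying this local pattern:", int(tokens.max()))
```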
Bayesian inference of stress release models applied to some Italian seismogenic zones
NASA Astrophysics Data System (ADS)
Rotondi, R.; Varini, E.
2007-04-01
In this paper, we evaluate the seismic hazard of a region in southern Italy by analysing stress release models from the Bayesian viewpoint; the data are drawn from the most recent version of the parametric catalogue of Italian earthquakes. For estimation we use only the events up to 1992; we then forecast the date of the next event through a stochastic simulation method and compare the result with the shocks that actually occurred in the span 1993-2002. The original version of the stress release model, proposed by Vere-Jones in 1978, transposes Reid's elastic rebound theory into the framework of stochastic point processes. Since the nineties, enriched versions of this model have appeared in the literature, applied to historical catalogues from China, Iran, and Japan; they envisage the identification of independent or interacting tectonic subunits constituting the region under exam. It follows that the stress release models, designed for regional analyses, are evolving towards studies on fault segments, realizing some degree of convergence with those models that start from an individual fault and, considering the interactions with nearby segments, are driven to studies on a regional scale. The optimal performance of the models we consider depends on a set of choices, among which: the seismogenic region and possible subzones, the threshold magnitude, and the length of the time period. In this paper, we focus our attention on the influence of the subdivision of the region under exam into tectonic units; in the light of recent studies on the fault segmentation model of Italy, we propose a partition of Sannio-Matese-Ofanto-Irpinia, one of the most seismically active regions in southern Italy. The results show that the performance of the stress release models improves in terms of both fitting and forecasting when the region is split up into parts including new information about potential seismogenic sources.
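For intuition about the model class being fitted, here is a hedged simulation sketch of a basic stress release process (illustrative parameter values, not those estimated for the Italian sub-zones): stress builds linearly at rate rho, each event releases a random amount, the conditional intensity is lambda(t) = exp(alpha + beta * X(t)), and event times are drawn by thinning.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, rho = -2.0, 1.5, 0.1      # hypothetical model parameters
T_end, X0 = 300.0, 0.0                 # simulated years and initial stress level

def intensity(t, released):
    """Conditional intensity of the stress release model at time t."""
    return np.exp(alpha + beta * (X0 + rho * t - released))

t, released, events = 0.0, 0.0, []
while t < T_end:
    horizon = min(t + 25.0, T_end)
    lam_bar = intensity(horizon, released)          # bound: intensity only grows between events
    wait = rng.exponential(1.0 / lam_bar)
    if t + wait > horizon:
        t = horizon                                 # no event accepted in this window
        continue
    t += wait
    if rng.random() < intensity(t, released) / lam_bar:
        events.append(t)
        released += rng.lognormal(mean=0.0, sigma=0.5)   # stress drop of the event

print("simulated events:", len(events), "first occurrence times:", np.round(events[:5], 1))
```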
Collaborative mining of graph patterns from multiple sources
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Colonna-Romanoa, John
2016-05-01
Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in intelligence community, because the text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use-cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and availability of accurate data segmentation. In this paper we present a model to learn the graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations when the data is kept locally and cannot be moved to a centralized location. Our experiments show that proposed collaborative learning achieves learning quality better than aggregated centralized graph learning, and has learning time comparable to traditional distributed learning in which a knowledge of data segmentation is needed.
Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes
NASA Astrophysics Data System (ADS)
Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.
2016-03-01
Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) have also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively.
Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis
NASA Astrophysics Data System (ADS)
Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing
1999-05-01
Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we segment the acute pyelonephritis area from kidney SPECT images. A two-step algorithm is proposed. First, the original images are translated into binary versions by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work gives important diagnostic information to physicians and improves the quality of medical care for children with acute pyelonephritis.
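A hedged two-step sketch mirroring this pipeline (automatic thresholding, then convex deficiencies of the binary kidney region) is shown below; a synthetic blob with a carved-out defect stands in for a SPECT slice, and all sizes and intensities are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import convex_hull_image

# Synthetic "kidney" uptake image with a notch (photopenic defect) on one side.
yy, xx = np.mgrid[0:128, 0:128]
uptake = np.exp(-(((yy - 64) / 30.0) ** 2 + ((xx - 64) / 20.0) ** 2))
uptake[50:78, 76:96] *= 0.1                              # simulated uptake defect
image = uptake + 0.02 * np.random.default_rng(0).standard_normal(uptake.shape)

binary = image > threshold_otsu(image)                   # step 1: automatic thresholding
deficiency = convex_hull_image(binary) & ~binary         # step 2: convex deficiencies
print("candidate defect regions found:", int(label(deficiency).max()))
```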
SRB Processing Facilities Media Event
2016-03-01
Members of the news media view the high bay inside the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida. Inside the RPSF, engineers and technicians with Jacobs Engineering on the Test and Operations Support Contract, explain the various test stands. In the far corner is one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket. The Ground Systems Development and Operations Program and Jacobs are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Griffith, Jennifer M; Fichter, Marlie; Fowler, Floyd J; Lewis, Carmen; Pignone, Michael P
2008-01-01
Background An important question in the development of decision aids about colon cancer (CRC) screening is whether to include an explicit discussion of the option of not being screened. We examined the effect of including or not including an explicit discussion of the option of deciding not to be screened in a CRC screening decision aid on subjective measures of decision aid content; interest in screening; and knowledge. Methods Adults ages 50–85 were assigned to view one of two versions of the decision aid. The two versions differed only in the inclusion of video segments of two men, one of whom decided against being screened. Participants completed questionnaires before and after viewing the decision aid to compare subjective measures of content, screening interest and intent, and knowledge between groups. Likert response categories (5-point) were used for subjective measures of content (eg. clarity, balance in favor/against screening, and overall rating), and screening interest. Knowledge was measured with a three item index and individual questions. Higher scores indicated favorable responses for subjective measures, greater interest, and better knowledge. For the subjective balance, lower numbers were associated with the impression of the decision aid favoring CRC screening. Results 57 viewed the "with" version which included the two segments and 49 viewed the "without" version. After viewing, participants found the "without" version to have better subjective clarity about benefits of screening ("with" 3.4, "without" 4.1, p < 0.01), and to have greater clarity about downsides of screening ("with" 3.2, "without" 3.6, p = 0.03). The "with" version was considered to be less strongly balanced in favor of screening. ("with" 1.8, "without" 1.6, p = 0.05); but the "without" version received a better overall rating ("with" 3.5, "without" 3.8, p = 0.03). Groups did not differ in screening interest after viewing a decision aid or knowledge. Conclusion A decision aid with the explicit discussion of the option of deciding not to be screened appears to increase the impression that the program was not as strongly in favor of screening, but decreases the impression of clarity and resulted in a lower overall rating. We did not observe clinically important or statistically significant differences in interest in screening or knowledge. PMID:18321377
A Reference Implementation of the OGC CSW EO Standard for the ESA HMA-T project
NASA Astrophysics Data System (ADS)
Bigagli, Lorenzo; Boldrini, Enrico; Papeschi, Fabrizio; Vitale, Fabrizio
2010-05-01
This work was developed in the context of the ESA Heterogeneous Missions Accessibility (HMA) project, whose main objective is to involve the stakeholders, namely National space agencies, satellite or mission owners and operators, in a harmonization and standardization process of their ground segment services and related interfaces. Among the HMA objectives were the specification, conformance testing, and experimentation of two Extension Packages (EPs) of the ebRIM Application Profile (AP) of the OGC Catalog Service for the Web (CSW) specification: the Earth Observation Products (EO) EP (OGC 06-131) and the Cataloguing of ISO Metadata (CIM) EP (OGC 07-038). Our contributions have included the development and deployment of Reference Implementations (RIs) for both the above specifications, and their integration with the ESA Service Support Environment (SSE). The RIs are based on the GI-cat framework, an implementation of a distributed catalog service, able to query disparate Earth and Space Science data sources (e.g. OGC Web Services, Unidata THREDDS) and to expose several standard interfaces for data discovery (e.g. OGC CSW ISO AP). Following our initial planning, the GI-cat framework has been extended in order to expose the CSW.ebRIM-CIM and CSW.ebRIM-EO interfaces, and to distribute queries to CSW.ebRIM-CIM and CSW.ebRIM-EO data sources. We expected that a mapping strategy would suffice for accommodating CIM, but this proved to be impractical during implementation. Hence, a model extension strategy was eventually implemented for both the CIM and EO EPs, and the GI-cat federal model was enhanced in order to support the underlying ebRIM AP. This work has provided us with new insights into the different data models for geospatial data, and the technologies for their implementation. The extension is used by suitable CIM and EO profilers (front-end mediator components) and accessors (back-end mediator components), that relate ISO 19115 concepts to EO and CIM ones. Moreover, a mapping to the GI-cat federal model was developed for each EP (quite limited for EO; complete for CIM), in order to enable the discovery of resources through any of the GI-cat profilers. The query manager was also improved. GI-cat-EO and -CIM installation packages were made available for distribution, and two RI instances were deployed on the Amazon EC2 facility (plus an ad-hoc instance returning incorrect control data). Integration activities of the EO RI with the ESA SSE Portal for Earth Observation Products were also successfully carried out. During our work, we have contributed feedback and comments to the CIM and EO EP specification working groups. Our contributions resulted in version 0.2.5 of the EO EP, recently approved as an OGC standard, and were useful to consolidate version 0.1.11 of the CIM EP (still being developed).
Hybrid Active/Passive Jet Engine Noise Suppression System
NASA Technical Reports Server (NTRS)
Parente, C. A.; Arcas, N.; Walker, B. E.; Hersh, A. S.; Rice, E. J.
1999-01-01
A novel adaptive segmented liner concept has been developed that employs active control elements to modify the in-duct sound field to enhance the tone-suppressing performance of passive liner elements. This could potentially allow engine designs that inherently produce more tone noise but less broadband noise, or could allow passive liner designs to more optimally address high frequency broadband noise. A proof-of-concept validation program was undertaken, consisting of the development of an adaptive segmented liner that would maximize attenuation of two radial modes in a circular or annular duct. The liner consisted of a leading active segment with dual annuli of axially spaced active Helmholtz resonators, followed by an optimized passive liner and then an array of sensing microphones. Three successively complex versions of the adaptive liner were constructed and their performances tested relative to the performance of optimized uniform passive and segmented passive liners. The salient results of the tests were: The adaptive segmented liner performed well in a high flow speed model fan inlet environment, was successfully scaled to a high sound frequency and successfully attenuated three radial modes using sensor and active resonator arrays that were designed for a two mode, lower frequency environment.
NASA Technical Reports Server (NTRS)
Montgomery, Edward E., IV; Smith, W. Scott (Technical Monitor)
2002-01-01
This paper explores the history and results of the last two years' efforts to transition inductive edge sensor technology from Technology Readiness Level 2 to Technology Readiness Level 6. Both technical and programmatic challenges were overcome in the design, fabrication, test, and installation of over a thousand sensors making up the Segment Alignment Maintenance System (SAMS) for the 91-segment, 9.2-meter Hobby-Eberly Telescope (HET). The integration of these sensors with the control system will be discussed, along with the serendipitous leverage they provided for both initialization alignment and operational maintenance. The experience yielded important insights into the fundamental motion mechanics of large segmented mirrors, the relative importance of the various sources of misalignment errors, and the efficient conduct of a program to mature the technology to the higher levels. Unanticipated factors required the team to develop new implementation strategies for the edge sensor information, which enabled major simplifications of the segmented mirror controller design. The resulting increase in the science efficiency of HET will be shown. Finally, the paper describes the ongoing effort to complete the maturation of inductive edge sensors by delivering space-qualified versions for future IR (infrared) space telescopes.
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
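A hedged sketch of the "cluster the data in successive chunks" idea follows (not the authors' exact incremental algorithms): standard fuzzy c-means is warm-started on each chunk and the chunk centers are folded back into running centers weighted by membership mass. The cluster count, fuzzifier, and data are illustrative assumptions.

```python
import numpy as np

def fcm(X, centers, m=2.0, n_iter=20):
    """A few standard fuzzy c-means iterations; returns updated centers and memberships."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        centers = (u.T ** m @ X) / np.sum(u.T ** m, axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(mu, 0.3, size=(4000, 2))
                       for mu in ((0, 0), (3, 3), (0, 3))])   # stand-in for voxel features
rng.shuffle(data)

centers, mass = data[:3].copy(), np.zeros(3)      # crude initial centers, no accumulated mass
for chunk in np.array_split(data, 12):            # single incremental pass over the chunks
    new_centers, u = fcm(chunk, centers.copy())
    chunk_mass = (u ** 2).sum(axis=0)             # fuzzy membership mass of this chunk (m = 2)
    centers = ((centers * mass[:, None] + new_centers * chunk_mass[:, None])
               / (mass + chunk_mass)[:, None])
    mass += chunk_mass

print("final cluster centers:\n", np.round(centers, 2))
```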
Impact of Noise Reduction Algorithm in Cochlear Implant Processing on Music Enjoyment.
Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Spitzer, Jaclyn B; Lalwani, Anil K
2016-06-01
Noise reduction algorithm (NRA) in speech processing strategy has positive impact on speech perception among cochlear implant (CI) listeners. We sought to evaluate the effect of NRA on music enjoyment. Prospective analysis of music enjoyment. Academic medical center. Normal-hearing (NH) adults (N = 16) and CI listeners (N = 9). Subjective rating of music excerpts. NH and CI listeners evaluated country music piece on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version and 20 modified, less complex versions created by including subsets of musical instruments from the original song. NH participants listened to the segments through CI simulation and CI listeners listened to the segments with their usual speech processing strategy, with and without NRA. Decreasing the number of instruments was significantly associated with increase in the pleasantness and naturalness in both NH and CI subjects (p < 0.05). However, there was no difference in music enjoyment with or without NRA for either NH listeners with CI simulation or CI listeners across all three modalities of pleasantness, musicality, and naturalness (p > 0.05): this was true for the original and the modified music segments with one to three instruments (p > 0.05). NRA does not affect music enjoyment in CI listener or NH individual with CI simulation. This suggests that strategies to enhance speech processing will not necessarily have a positive impact on music enjoyment. However, reducing the complexity of music shows promise in enhancing music enjoyment and should be further explored.
Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob
2010-02-01
Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
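To illustrate the raw-data repair idea on a toy example, a much-simplified, hedged sketch follows: the metal is segmented by a simple threshold in the image (whereas the paper segments it directly in the sinogram with an MRF), its trace is forward-projected, and plain linear interpolation stands in for the more sophisticated schemes compared in the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import iradon, radon

phantom = shepp_logan_phantom()
phantom[180:190, 195:205] = 2.0                         # insert a high-density "metal" object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=theta)                      # simulated raw data

# Locate the metal trace in the sinogram by forward-projecting the metal mask.
metal_trace = radon((phantom > 1.5).astype(float), theta=theta) > 0.5

repaired = sino.copy()
rows = np.arange(sino.shape[0])
for j in range(sino.shape[1]):                          # repair each projection view
    bad = metal_trace[:, j]
    if bad.any():
        repaired[bad, j] = np.interp(rows[bad], rows[~bad], sino[~bad, j])

corrected = iradon(repaired, theta=theta)               # reconstruct the artifact-suppressed image
print("sinogram bins replaced by interpolation:", int(metal_trace.sum()))
```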
Stenglein, Mark D.; Jacobson, Elliott R.; Chang, Li-Wen; Sanders, Chris; Hawkins, Michelle G.; Guzman, David S-M.; Drazenovich, Tracy; Dunker, Freeland; Kamaka, Elizabeth K.; Fisher, Debbie; Reavill, Drury R.; Meola, Linda F.; Levens, Gregory; DeRisi, Joseph L.
2015-01-01
Arenaviruses are one of the largest families of human hemorrhagic fever viruses and are known to infect both mammals and snakes. Arenaviruses package a large (L) and small (S) genome segment in their virions. For segmented RNA viruses like these, novel genotypes can be generated through mutation, recombination, and reassortment. Although it is believed that an ancient recombination event led to the emergence of a new lineage of mammalian arenaviruses, neither recombination nor reassortment has been definitively documented in natural arenavirus infections. Here, we used metagenomic sequencing to survey the viral diversity present in captive arenavirus-infected snakes. From 48 infected animals, we determined the complete or near complete sequence of 210 genome segments that grouped into 23 L and 11 S genotypes. The majority of snakes were multiply infected, with up to 4 distinct S and 11 distinct L segment genotypes in individual animals. This S/L imbalance was typical: in all cases intrahost L segment genotypes outnumbered S genotypes, and a particular S segment genotype dominated in individual animals and at a population level. We corroborated sequencing results by qRT-PCR and virus isolation, and isolates replicated as ensembles in culture. Numerous instances of recombination and reassortment were detected, including recombinant segments with unusual organizations featuring 2 intergenic regions and superfluous content, which were capable of stable replication and transmission despite their atypical structures. Overall, this represents intrahost diversity of an extent and form that goes well beyond what has been observed for arenaviruses or for viruses in general. This diversity can be plausibly attributed to the captive intermingling of sub-clinically infected wild-caught snakes. Thus, beyond providing a unique opportunity to study arenavirus evolution and adaptation, these findings allow the investigation of unintended anthropogenic impacts on viral ecology, diversity, and disease potential. PMID:25993603
Russian Earth Science Research Program on ISS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armand, N. A.; Tishchenko, Yu. G.
1999-01-22
A version of the Russian Earth Science Research Program for the Russian segment of the ISS is proposed. Priority tasks are selected that can be solved with space remote sensing methods and tools and that are worthwhile to realize. For solving these tasks, specialized device sets (submodules) matched to the specifics of each task are being developed; they would be carried to the ISS as specialized modules. Earth remote sensing research and ecological monitoring (high data rates and large volumes of spaceborne information, comparatively stringent requirements on processing time, etc.) place rather high requirements on the ground segment for receiving, processing, storing, and distributing space information in the interest of investigating the Earth's natural resources. Creation of the ground segment has required the development of an interdepartmental data receiving and processing center. The main directions of work within the framework of the ISS program are determined.
Aging and perceived event structure as a function of modality
Magliano, Joseph; Kopp, Kristopher; McNerney, M. Windy; Radvansky, Gabriel A.; Zacks, Jeffrey M.
2012-01-01
The majority of research on situation model processing in older adults has focused on narrative texts. Much of this research has shown that many important aspects of constructing a situation model for a text are preserved and may even improve with age. However, narratives need not be text-based, and little is known as to whether these findings generalize to visually-based narratives. The present study assessed the impact of story modality on event segmentation, which is a basic component of event comprehension. Older and younger adults viewed picture stories or read text versions of them and segmented them into events. There was comparable alignment between the segmentation judgments and a theoretically guided analysis of shifts in situational features across modalities for both populations. These results suggest that situation models provide older adults with a stable basis for event comprehension across different modalities of experience. PMID:22182344
Therapeutic self-disclosure in integrative psychotherapy: When is this a clinical error?
Ziv-Beiman, Sharon; Shahar, Golan
2016-09-01
Ascending to prominence in virtually all forms of psychotherapy, therapist self-disclosure (TSD) has recently been identified as a primarily integrative intervention (Ziv-Beiman, 2013). In the present article, we discuss various instances in which using TSD in integrative psychotherapy might constitute a clinical error. First, we briefly review extant theory and empirical research on TSD, followed by our preferred version of integrative psychotherapy (i.e., a version of Wachtel's Cyclical Psychodynamics [Wachtel, 1977, 1997, 2014]), which we title cognitive existential psychodynamics. Next, we provide and discuss three examples in which implementing TSD constitutes a clinical error. In essence, we submit that using TSD constitutes an error when patients, constrained by their representational structures (object relations), experience the subjectivity of the other as impinging, which propels them to "react" instead of "emerge." (PsycINFO Database Record (c) 2016 APA, all rights reserved).
GPU accelerated implementation of NCI calculations using promolecular density.
Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric
2017-05-30
The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive for describing ligand-protein binding. A custom implementation for NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
Gaussian Mean Field Lattice Gas
NASA Astrophysics Data System (ADS)
Scoppola, Benedetto; Troiani, Alessio
2018-03-01
We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In the discrete optimization literature this problem is known as unconstrained binary quadratic programming, and it is NP-hard. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. We then present a heuristic algorithm, based on a probabilistic cellular automaton, which seems to be able to find configurations with energy very close to the minimum, even for quite large instances.
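The heuristic itself is not spelled out in the abstract; the following is a hedged sketch of a probabilistic cellular automaton for unconstrained binary quadratic programming, in which all bits are updated in parallel with a damped heat-bath rule. The function name and all parameter values (beta, q, n_steps) are illustrative assumptions, not the authors' algorithm.

    import numpy as np

    def pca_qubo(Q, n_steps=2000, beta=2.0, q=0.5, seed=None):
        """Heuristic minimisation of E(x) = x^T Q x over x in {0,1}^n using a
        probabilistic cellular automaton: every bit is flipped in parallel with a
        heat-bath probability damped by an inertia factor q."""
        rng = np.random.default_rng(seed)
        n = Q.shape[0]
        d = np.diag(Q)
        x = rng.integers(0, 2, size=n)
        best_x, best_e = x.copy(), float(x @ Q @ x)
        for _ in range(n_steps):
            # energy change of flipping each bit, with all other bits held fixed
            delta = (1 - 2 * x) * (Q @ x + Q.T @ x - 2 * d * x + d)
            p_flip = q / (1.0 + np.exp(np.clip(beta * delta, -50, 50)))
            x = np.where(rng.random(n) < p_flip, 1 - x, x)
            e = float(x @ Q @ x)
            if e < best_e:
                best_x, best_e = x.copy(), e
        return best_x, best_e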
Ayral, Thomas; Lee, Tsung-Han; Kotliar, Gabriel
2017-12-26
In this paper, we present a unified perspective on dynamical mean-field theory (DMFT), density-matrix embedding theory (DMET), and rotationally invariant slave bosons (RISB). We show that DMET can be regarded as a simplification of the RISB method where the quasiparticle weight is set to unity. Finally, this relation makes it easy to transpose extensions of a given method to another: For instance, a temperature-dependent version of RISB can be used to derive a temperature-dependent free-energy formula for DMET.
An Examination of New Paradigms for Spline Approximations.
Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A
2006-01-01
Lavery splines are examined in the univariate and bivariate cases. In both instances, relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, however, the univariate case is investigated in greater detail so as to further the understanding of the bivariate case.
INFORMS Section on Location Analysis Dissertation Award Submission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waddell, Lucas
This research effort can be summarized by two main thrusts, each of which has a chapter of the dissertation dedicated to it. First, I pose a novel polyhedral approach for identifying polynomially solvable instances of the QAP based on an application of the reformulation-linearization technique (RLT), a general procedure for constructing mixed 0-1 linear reformulations of 0-1 programs. The feasible region of the continuous relaxation of the level-1 RLT form is a polytope having a highly specialized structure. Every binary solution to the QAP is associated with an extreme point of this polytope, and the objective function value is preserved at each such point. However, there exist extreme points that do not correspond to binary solutions. The key insight is a previously unnoticed and unexpected relationship between the polyhedral structure of the continuous relaxation of the level-1 RLT representation and various classes of readily solvable instances. Specifically, we show that a variety of apparently unrelated solvable cases of the QAP can all be categorized in the following sense: each such case has an objective function which ensures that an optimal solution to the continuous relaxation of the level-1 RLT form occurs at a binary extreme point. Interestingly, there exist instances that are solvable by the level-1 RLT form which do not satisfy the conditions of these cases, so that the level-1 form theoretically identifies a richer family of solvable instances. Second, I focus on instances of the QAP known in the literature as linearizable. An instance of the QAP is defined to be linearizable if and only if the problem can be equivalently written as a linear assignment problem that preserves the objective function value at all feasible solutions. I provide an entirely new polyhedral-based perspective on the concept of linearizable by showing that an instance of the QAP is linearizable if and only if a relaxed version of the continuous relaxation of the level-1 RLT form is bounded. I also show that the level-1 RLT form can identify a richer family of solvable instances than those deemed linearizable by demonstrating that the continuous relaxation of the level-1 RLT form can have an optimal binary solution for instances that are not linearizable. As a byproduct, I use this theoretical framework to explicitly characterize, in closed form, the dimensions of the level-1 RLT form and various other problem relaxations.
B-Mode ultrasound pose recovery via surgical fiducial segmentation and tracking
NASA Astrophysics Data System (ADS)
Asoni, Alessandro; Ketcha, Michael; Kuo, Nathanael; Chen, Lei; Boctor, Emad; Coon, Devin; Prince, Jerry L.
2015-03-01
Ultrasound Doppler imaging may be used to detect blood clots, a common problem after surgery. However, this requires consistent probe positioning over multiple time instances and therefore significant sonographic expertise. Analysis of ultrasound B-mode images of a fiducial implanted at the surgical site offers a landmark to guide a user to the same location repeatedly. We demonstrate that such an implanted fiducial may be successfully detected and tracked to calculate pose and guide a clinician consistently to the site of surgery, potentially reducing the ultrasound experience required for point-of-care monitoring.
Semi-automatic geographic atrophy segmentation for SD-OCT images.
Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L
2013-01-01
Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus auto-fluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.
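The paper's exact pipeline is not reproduced here; as an illustration of the "geometric active contour on a restricted projection image" idea, the following is a minimal sketch assuming scikit-image is available. The preprocessing, parameter values, and the choice of a morphological geodesic active contour are assumptions, not the authors' code.

    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def segment_ga_projection(projection, seed_mask, iterations=200):
        """Rough sketch of GA delineation on a sub-RPE projection image.

        projection : 2-D en-face projection restricted to the sub-RPE volume,
                     where GA appears as a bright region
        seed_mask  : boolean initial level set (e.g., a user-drawn blob)
        """
        img = gaussian(projection.astype(float), sigma=2)
        # edge-stopping image: values close to zero near strong gradients
        gimage = inverse_gaussian_gradient(img)
        ga = morphological_geodesic_active_contour(gimage, iterations,
                                                   init_level_set=seed_mask.astype(np.int8),
                                                   smoothing=2, balloon=1,
                                                   threshold='auto')
        return ga.astype(bool)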
Exploration of sequence space as the basis of viral RNA genome segmentation.
Moreno, Elena; Ojosnegros, Samuel; García-Arriaza, Juan; Escarmís, Cristina; Domingo, Esteban; Perales, Celia
2014-05-06
The mechanisms of viral RNA genome segmentation are unknown. On extensive passage of foot-and-mouth disease virus in baby hamster kidney-21 cells, the virus accumulated multiple point mutations and underwent a transition akin to genome segmentation. The standard single RNA genome molecule was replaced by genomes harboring internal in-frame deletions affecting the L- or capsid-coding region. These genomes were infectious and killed cells by complementation. Here we show that the point mutations in the nonstructural protein-coding region (P2, P3) that accumulated in the standard genome before segmentation increased the fitness of the segmented version relative to the standard genome. The fitness increase was documented by intracellular expression of virus-coded proteins and infectious progeny production by RNAs with the internal deletions placed in the sequence context of the parental and evolved genomes. The complementation activity involved several viral proteins, one of them being the leader proteinase L. Thus, a history of genetic drift with accumulation of point mutations was needed to allow a major variation in the structure of a viral genome. In this way, exploration of sequence space by a viral genome (in this case an unsegmented RNA) can reach a point of the space in which a totally different genome structure (in this case, a segmented RNA) is favored over the form that performed the exploration.
Iglesias, Juan Eugenio; Augustinack, Jean C; Nguyen, Khoa; Player, Christopher M; Player, Allison; Wright, Michelle; Roy, Nicole; Frosch, Matthew P; McKee, Ann C; Wald, Lawrence L; Fischl, Bruce; Van Leemput, Koen
2015-07-15
Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise delineations were made possible by the extraordinary resolution of the scans. In addition to the subregions, manual annotations for neighboring structures (e.g., amygdala, cortex) were obtained from a separate dataset of in vivo, T1-weighted MRI scans of the whole brain (1mm resolution). The manual labels from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using an algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available datasets with different types of MRI contrast. The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer's disease subjects and elderly controls with 88% accuracy in standard resolution (1mm) T1 data, significantly outperforming the atlas in FreeSurfer version 5.3 (86% accuracy) and classification based on whole hippocampal volume (82% accuracy). Copyright © 2015. Published by Elsevier Inc.
Placental fetal stem segmentation in a sequence of histology images
NASA Astrophysics Data System (ADS)
Athavale, Prashant; Vese, Luminita A.
2012-02-01
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many obstacles to achieving this goal, among them the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies of manual tracing, and very complicated texture patterns of various tissue types without clear characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges, where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation. The segmentation thus obtained can then be used as an initial guess to obtain the segmentation in the rest of the images in the sequence. This constitutes an important step in the extraction and understanding of the fetal stem vasculature.
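The 'dynamic' active contours variant described above is specific to the paper; as a baseline illustration of the active-contours-without-edges step, here is a minimal sketch using the standard Chan-Vese implementation in scikit-image (an assumption; parameters are illustrative), initialized with the segmentation propagated from the previous slide.

    from skimage import img_as_float
    from skimage.segmentation import chan_vese

    def trace_fetal_stem(image, init_mask):
        """Baseline segmentation of a registered, greyscale H&E patch with the
        standard Chan-Vese model; init_mask is the propagated segmentation from
        the previous slide, used as the initial level set."""
        img = img_as_float(image)
        phi0 = init_mask.astype(float) - 0.5          # positive inside the stem
        return chan_vese(img, mu=0.25, lambda1=1.0, lambda2=1.0,
                         tol=1e-3, dt=0.5, init_level_set=phi0)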
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in the literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study of image reconstruction on chords of a two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
ERIC Educational Resources Information Center
Jover, Julio Lillo; Moreira, Humberto
2005-01-01
Four experiments evaluated AMLA temporal version accuracy to measure relative luminosity in people with and without color blindness and, consequently, to provide the essential information to avoid poor figure-background combinations in any possible "specific screen-specific observer" pair. Experiment 1 showed that two very different apparatus, a…
Ewert, Siobhan; Plettig, Philip; Li, Ningfei; Chakravarty, M Mallar; Collins, D Louis; Herrington, Todd M; Kühn, Andrea A; Horn, Andreas
2018-04-15
Three-dimensional atlases of subcortical brain structures are valuable tools to reference anatomy in neuroscience and neurology. For instance, they can be used to study the position and shape of the three most common deep brain stimulation (DBS) targets, the subthalamic nucleus (STN), internal part of the pallidum (GPi) and ventral intermediate nucleus of the thalamus (VIM), in spatial relationship to DBS electrodes. Here, we present a composite atlas based on manual segmentations of a multimodal high resolution brain template, histology and structural connectivity. In a first step, four key structures were defined on the template itself using a combination of multispectral image analysis and manual segmentation. Second, these structures were used as anchor points to coregister a detailed histological atlas into standard space. Results show that this approach significantly improved coregistration accuracy over previously published methods. Finally, a sub-segmentation of STN and GPi into functional zones was achieved based on structural connectivity. The result is a composite atlas that defines key nuclei on the template itself, fills the gaps between them using histology and further subdivides them using structural connectivity. We show that the atlas can be used to segment DBS targets in single subjects, yielding more accurate results compared to previously published atlases. The atlas will be made publicly available and constitutes a resource to study DBS electrode localizations in combination with modern neuroimaging methods. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stützer, Kristin; Haase, Robert; Exner, Florian
2016-09-15
Purpose: To rate both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques, and to investigate the relative performance and the correlation of the different evaluation techniques in order to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.
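A minimal sketch of the three kinds of DIR quality measures mentioned above (landmark distances, contour overlap, and inconsistency of forward/backward vector fields), assuming numpy arrays; the function and argument names are illustrative and this is not the code used in the study.

    import numpy as np

    def dir_evaluation(landmarks_ref, landmarks_warped, mask_ref, mask_warped,
                       dvf_forward, dvf_backward_warped):
        """Three simple DIR quality measures (illustrative only).

        landmarks_*         : (N, 3) arrays of corresponding points in mm
        mask_*              : boolean lung masks on the same grid
        dvf_forward         : (..., 3) forward displacement field in mm
        dvf_backward_warped : backward field resampled to the reference grid
        """
        landmark_error = np.linalg.norm(landmarks_ref - landmarks_warped, axis=1).mean()
        dice = 2.0 * np.logical_and(mask_ref, mask_warped).sum() / (
            mask_ref.sum() + mask_warped.sum())
        # forward and backward displacements should cancel for a consistent DIR
        inconsistency = np.linalg.norm(dvf_forward + dvf_backward_warped, axis=-1)
        return landmark_error, dice, inconsistency[mask_ref].mean()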
DiCanio, Christian; Nam, Hosung; Whalen, Douglas H.; Timothy Bunnell, H.; Amith, Jonathan D.; García, Rey Castillo
2013-01-01
While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones. PMID:23967953
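A minimal sketch of the agreement measure quoted above (the fraction of boundaries placed within 30 ms of the manual ones), assuming numpy and already paired boundary times; this illustrates the metric and is not the authors' evaluation script.

    import numpy as np

    def boundary_agreement(auto_ms, manual_ms, tol_ms=30.0):
        """Fraction of automatically placed segment boundaries that fall within
        tol_ms of the corresponding manual boundaries (arrays in milliseconds)."""
        auto_ms = np.asarray(auto_ms, dtype=float)
        manual_ms = np.asarray(manual_ms, dtype=float)
        return float(np.mean(np.abs(auto_ms - manual_ms) <= tol_ms))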
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed by adding a correction step on the edge points. The improved algorithm can greatly enhance the calculation accuracy. The improved method proceeds iteratively. In the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction. In the second iteration, the voxels with errors caused by asynchronism are updated again. Three different CT sequences of hepatic vasculature with different sizes were used in the experiments with three different seeds. An NVIDIA Tesla C2075 is used to evaluate our improved method over these three data sets. Experimental results show that the improved algorithm can achieve a faster segmentation compared to the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that it corrects the edge point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and has fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In future work, we will focus on automatic acquisition methods and automatic processing.
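CUDA-kFOE itself is a GPU code; as a plain-CPU illustration of the underlying fuzzy connectedness computation (the best path from a seed, where a path is only as strong as its weakest affinity), here is a minimal 2-D sketch in Python. The Gaussian intensity affinity and the sigma value are assumptions.

    import heapq
    import numpy as np

    def fuzzy_connectedness_2d(image, seed, sigma=20.0):
        """Minimal CPU sketch of fuzzy object extraction on a 2-D image
        (not the CUDA-kFOE code). seed is a (row, col) tuple."""
        h, w = image.shape
        conn = np.zeros((h, w), dtype=float)
        conn[seed] = 1.0
        heap = [(-1.0, seed)]
        img = image.astype(float)
        while heap:
            strength, (r, c) = heapq.heappop(heap)
            strength = -strength
            if strength < conn[r, c]:
                continue                      # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    # affinity based on intensity similarity of neighbours
                    affinity = np.exp(-((img[r, c] - img[rr, cc]) ** 2) / (2 * sigma ** 2))
                    cand = min(strength, affinity)
                    if cand > conn[rr, cc]:
                        conn[rr, cc] = cand
                        heapq.heappush(heap, (-cand, (rr, cc)))
        return conn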
The elastic ratio: introducing curvature into ratio-based image segmentation.
Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel
2011-09-01
We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.
Module 4: Text Versions | State, Local, and Tribal Governments | NREL
… own or finance a system. We'll help you understand the different financing types available to local … often specific to a particular segment of the market with different amounts of incentives, different system size caps, and different total funds or aggregate capacity. The customer can identify if solar PV …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matamala, Roser
This is the AmeriFlux version of the carbon flux data for the site US-IB1 Fermi National Accelerator Laboratory - Batavia (Agricultural site). Site Description - Two eddy correlation systems are installed at Fermi National Accelerator Laboratory: one on a restored prairie (established October 2004) and one on a corn/soybean rotation agricultural field (established in July 2005). The prairie site had been farmed for more than 100 years, but was converted to prairie in 1989. The agricultural site has likely been farmed for more than 100 years, but the first documented instance of agricultural activity dates back to a picture taken in 1952.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Schiek, Richard
2017-09-01
This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to a hypothetical nuclear device with the same output energy spectrum as the Fat Man device. We use a time-domain method based on transmission line theory that accounts for time-varying air conductivities. We implemented this method in a code we call ATLOG - Analytic Transmission Line Over Ground. Results are compared to the previously developed frequency-domain version of ATLOG and, in some instances, to the circuit simulator Xyce.
Themistocleous, Charalambos
2016-12-01
Although tonal alignment constitutes a quintessential property of pitch accents, its exact characteristics remain unclear. This study, by exploring the timing of the Cypriot Greek L*+H prenuclear pitch accent, examines the predictions of three hypotheses about tonal alignment: the invariance hypothesis, the segmental anchoring hypothesis, and the segmental anchorage hypothesis. The study reports on two experiments: the first of which manipulates the syllable patterns of the stressed syllable, and the second of which modifies the distance of the L*+H from the following pitch accent. The findings on the alignment of the low tone (L) are illustrative of the segmental anchoring hypothesis predictions: the L persistently aligns inside the onset consonant, a few milliseconds before the stressed vowel. However, the findings on the alignment of the high tone (H) are both intriguing and unexpected: the alignment of the H depends on the number of unstressed syllables that follow the prenuclear pitch accent. The 'wandering' of the H over multiple syllables is extremely rare among languages, and casts doubt on the invariance hypothesis and the segmental anchoring hypothesis, as well as indicating the need for a modified version of the segmental anchorage hypothesis. To address the alignment of the H, we suggest that it aligns within a segmental anchorage-the area that follows the prenuclear pitch accent-in such a way as to protect the paradigmatic contrast between the L*+H prenuclear pitch accent and the L+H* nuclear pitch accent.
Manufacture of a 1.7m prototype of the GMT primary mirror segments
NASA Astrophysics Data System (ADS)
Martin, H. M.; Burge, J. H.; Miller, S. M.; Smith, B. K.; Zehnder, R.; Zhao, C.
2006-06-01
We have nearly completed the manufacture of a 1.7 m off-axis mirror as part of the technology development for the Giant Magellan Telescope. The mirror is an off-axis section of a 5.3 m f/0.73 parent paraboloid, making it roughly a 1:5 model of the outer 8.4 m GMT segment. The 1.7 m mirror will be the primary mirror of the New Solar Telescope at Big Bear Solar Observatory. It has a 2.7 mm peak-to-valley departure from the best-fit sphere, presenting a serious challenge in terms of both polishing and measurement. The mirror was polished with a stressed lap, which bends actively to match the local curvature at each point on the mirror surface, and works for asymmetric mirrors as well as symmetric aspheres. It was measured using a hybrid reflective-diffractive null corrector to compensate for the mirror's asphericity. Both techniques will be applied in scaled-up versions to the GMT segments.
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide-field fluorescence microscopy data.
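The 3D GPU solver is not shown in the abstract; to illustrate the Potts (piecewise constant Mumford-Shah) model itself, here is a minimal exact 1-D solver by dynamic programming, assuming numpy. The O(n^2) loop is for clarity only and is unrelated to the paper's splitting scheme.

    import numpy as np

    def potts_1d(y, gamma):
        """Exact 1-D Potts segmentation by dynamic programming: minimises the
        sum of squared errors plus gamma times the number of jumps.
        Returns the piecewise-constant reconstruction of the signal y."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        csum = np.concatenate(([0.0], np.cumsum(y)))
        csum2 = np.concatenate(([0.0], np.cumsum(y * y)))

        def seg_cost(l, r):          # squared deviation of y[l:r] from its mean
            s, s2, m = csum[r] - csum[l], csum2[r] - csum2[l], r - l
            return s2 - s * s / m

        best = np.full(n + 1, np.inf)
        best[0] = -gamma             # so the first segment is not charged a jump
        jump = np.zeros(n + 1, dtype=int)
        for r in range(1, n + 1):
            for l in range(r):
                c = best[l] + gamma + seg_cost(l, r)
                if c < best[r]:
                    best[r], jump[r] = c, l
        # backtrack and fill each segment with its mean
        out, r = np.empty(n), n
        while r > 0:
            l = jump[r]
            out[l:r] = y[l:r].mean()
            r = l
        return out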
Logo recognition in video by line profile classification
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Hanjalic, Alan
2003-12-01
We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.
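A hedged sketch of the line-profile idea, assuming scikit-image: detect the strongest lines supported by the segmented logo foreground with the Hough transform, then sample grey-value profiles of the image along them. Border handling and the number of lines are simplifications, not the authors' settings.

    import numpy as np
    from skimage.measure import profile_line
    from skimage.transform import hough_line, hough_line_peaks

    def logo_line_profiles(gray, foreground, n_lines=5):
        """Find the strongest straight lines in the segmented logo foreground and
        sample the grey-value profile of the image along each of them."""
        cols = gray.shape[1]
        h, theta, d = hough_line(foreground.astype(np.uint8))
        profiles = []
        for _, angle, dist in zip(*hough_line_peaks(h, theta, d, num_peaks=n_lines)):
            if abs(np.sin(angle)) < 1e-6:            # (near-)vertical line: x = dist
                col = int(np.clip(round(dist), 0, cols - 1))
                profiles.append(gray[:, col].astype(float))
            else:                                    # intersect left/right image borders
                y0 = (dist - 0 * np.cos(angle)) / np.sin(angle)
                y1 = (dist - (cols - 1) * np.cos(angle)) / np.sin(angle)
                profiles.append(profile_line(gray, (y0, 0), (y1, cols - 1)))
        return profiles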
Shilov, V N; Borkovskaja, Y B; Dukhin, A S
2004-09-15
Existing theories of electroacoustic phenomena in concentrated colloids neglect the possibility of double layer overlap and are valid mostly for the "thin double layer," when the double layer thickness is much less than the particle size. In this paper we present a new electroacoustic theory which removes this restriction. This makes the new theory applicable to characterizing a variety of aqueous nanocolloids and of nonaqueous dispersions. There are two versions of the theory leading to analytical solutions. The first version corresponds to strongly overlapped diffuse layers (the so-called quasi-homogeneous model). It yields a simple analytical formula for the colloid vibration current (CVI), which is valid for arbitrary ultrasound frequency but only for a restricted kappa alpha range. This version of the theory, like the Smoluchowski theory for microelectrophoresis, is independent of particle shape and polydispersity. This makes it very attractive for practical use, with the hope that it might be as useful as the classical Smoluchowski theory. In order to determine the kappa alpha range of validity of the quasi-homogeneous model, we develop a second version that limits the ultrasound frequency but applies no restriction on kappa alpha. The ultrasound frequency should substantially exceed the Maxwell-Wagner relaxation frequency. This limitation makes the active, conductivity-related current negligible compared to the passive dielectric displacement current. It is possible to derive an expression for the CVI in the concentrated dispersion as formulae involving definite integrals whose integrands depend on the equilibrium potential distribution. This second version allowed us to estimate the range of applicability of the first, quasi-homogeneous version. It turns out that the quasi-homogeneous model works for kappa alpha values up to almost 1. For instance, at a volume fraction of 30%, the highest kappa alpha limit of the quasi-homogeneous model is 0.65. Therefore, this version of the electroacoustic theory is valid for almost all nonaqueous dispersions and a wide variety of nanocolloids, especially with sizes under 100 nm.
Adsorption of hairy particles with mobile ligands: Molecular dynamics and density functional study
NASA Astrophysics Data System (ADS)
Borówko, M.; Sokołowski, S.; Staszewski, T.; Pizio, O.
2018-01-01
We study models of hairy nanoparticles in contact with a hard wall. Each particle is built of a spherical core with a number of ligands attached to it, and each ligand is composed of several spherical, tangentially jointed segments. The number of segments is the same for all ligands. Particular models differ in the numbers of ligands and of segments per ligand, but the total number of segments is constant. Moreover, our model assumes that the ligands are tethered to the core in such a manner that they can "slide" over the core surface. Using molecular dynamics simulations we investigate the differences in the structure of a system close to the wall. In order to characterize the distribution of the ligands around the core, we have calculated the end-to-end distances of the ligands and the lengths and orientation of the mass dipoles. Additionally, we also employed a density functional approach to obtain the density profiles. We have found that if the number of ligands is not too high, the proposed version of the theory is capable of predicting the structure of the system with reasonable accuracy.
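A small post-processing sketch of the two descriptors mentioned above (ligand end-to-end distance and the particle's mass dipole), assuming numpy and a per-particle array of ligand segment coordinates; the exact dipole definition used in the paper may differ.

    import numpy as np

    def ligand_descriptors(core_center, ligand_coords, segment_mass=1.0):
        """End-to-end distance of each ligand and the particle's mass dipole
        (vector from the core centre to the centre of mass of all ligand
        segments), for one snapshot of an MD trajectory.

        ligand_coords : (n_ligands, n_segments, 3) segment positions
        """
        end_to_end = np.linalg.norm(ligand_coords[:, -1] - ligand_coords[:, 0], axis=1)
        ligand_com = ligand_coords.reshape(-1, 3).mean(axis=0)
        mass_dipole = segment_mass * (ligand_com - np.asarray(core_center))
        return end_to_end, mass_dipole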
Improvement in Recursive Hierarchical Segmentation of Data
NASA Technical Reports Server (NTRS)
Tilton, James C.
2006-01-01
A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides for a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent-region and spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.
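A sketch of how such a priority parameter can enter a merge criterion, assuming a normalized weight in [0, 1]; this illustrates the behaviour described above and is not the actual RHSEG code.

    def merge_criterion(dissimilarity, spatially_adjacent, spclust_weight):
        """Effective merge cost under a priority weight in [0, 1]:
        at 1.0 adjacent and non-adjacent merges compete on equal terms,
        at 0.0 non-adjacent (spectral clustering) merges are disabled."""
        if spatially_adjacent:
            return dissimilarity
        if spclust_weight <= 0.0:
            return float('inf')          # no spectral clustering at all
        return dissimilarity / spclust_weight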
Automatic generation of pictorial transcripts of video programs
NASA Astrophysics Data System (ADS)
Shahraray, Behzad; Gibbon, David C.
1995-03-01
An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
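A minimal sketch of content-based sampling for key-frame selection, assuming numpy and greyscale frames: a new scene is declared when the histogram distance to the last key frame exceeds a threshold. The histogram distance and the threshold value are illustrative choices, not the system's actual detector.

    import numpy as np

    def select_key_frames(frames, threshold=0.35):
        """Keep the first frame of each detected scene as its key frame.

        frames : iterable of 2-D uint8 arrays
        """
        key_indices, ref_hist = [], None
        for i, frame in enumerate(frames):
            hist = np.bincount(frame.ravel(), minlength=256).astype(float)
            hist /= hist.sum()
            # total variation distance between normalized grey-level histograms
            if ref_hist is None or 0.5 * np.abs(hist - ref_hist).sum() > threshold:
                key_indices.append(i)
                ref_hist = hist
        return key_indices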
Hedonic analysis of the price of UHT-treated milk in Italy.
Bimbo, Francesco; Bonanno, Alessandro; Liu, Xuan; Viscecchia, Rosaria
2016-02-01
The Italian market for UHT milk has been growing thanks to both consumers' interest in products with an extended shelf life and to the lower prices of these products compared with refrigerated, pasteurized milk. However, because the lower prices of UHT milk can hinder producers' margins, manufacturers have introduced new versions of UHT milk products such as lactose-free options, vitamin-enriched products, and milk for infants, with the goal of differentiating their products, escaping the price competition, and gaining higher margins. In this paper, we estimated the contribution of different attributes to UHT milk prices in Italy by using a database of Italian UHT milk sales and a hedonic price model. In our analysis, we considered 2 UHT milk market segments: products for infants and those for the general population. We found premiums varied with the milk's attributes as well as between the segments analyzed: n-3 fatty acids, organic, and added calcium were the most valuable product features in the general population segment, whereas in the infant segment fiber, glass packaging, and the targeting of newborns delivered the highest premiums. Finally, we present recommendations for UHT milk manufacturers. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
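A minimal sketch of a hedonic price regression, assuming numpy and a 0/1 attribute design matrix; a semi-log specification is used here for illustration, which may differ from the model estimated in the paper.

    import numpy as np

    def hedonic_premiums(prices, attributes):
        """Regress log unit price on binary product attributes; each coefficient
        approximates the percentage premium attached to that attribute.

        prices     : (N,) array of unit prices
        attributes : (N, K) 0/1 design matrix (e.g., organic, n-3 enriched, ...)
        """
        X = np.column_stack([np.ones(len(prices)), attributes])
        coef, *_ = np.linalg.lstsq(X, np.log(prices), rcond=None)
        return coef[0], coef[1:]          # intercept, attribute premiums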
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions based on Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale. Large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper in (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments induces strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparison of simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
Huh, Joonsuk; Yung, Man-Hong
2017-08-07
Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlations due to the Duschinsky rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to various versions of Boson Sampling that share a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.
Schuster, Isabell; Krahé, Barbara; Toplu-Demirtaş, Ezgi
2016-01-01
In Turkey, there is a shortage of studies on the prevalence of sexual aggression among young adults. The present study examined sexual aggression victimization and perpetration since the age of 15 in a convenience sample of N = 1,376 college students (886 women) from four public universities in Ankara, Turkey. Prevalence rates for different coercive strategies, victim-perpetrator constellations, and sexual acts were measured with a Turkish version of the Sexual Aggression and Victimization Scale (SAV-S). Overall, 77.6% of women and 65.5% of men reported at least one instance of sexual aggression victimization, and 28.9% of men and 14.2% of women reported at least one instance of sexual aggression perpetration. Prevalence rates of sexual aggression victimization and perpetration were highest for current or former partners, followed by acquaintances/friends and strangers. Alcohol was involved in a substantial proportion of the reported incidents. The findings are the first to provide systematic evidence on sexual aggression perpetration and victimization among college students in Turkey, including both women and men. PMID:27485372
The feasibility of a modified shoe for multi-segment foot motion analysis: a preliminary study.
Halstead, J; Keenan, A M; Chapman, G J; Redmond, A C
2016-01-01
The majority of multi-segment kinematic foot studies have been limited to barefoot conditions, because shod conditions have the potential to confound surface-mounted markers. The aim of this study was to investigate whether a shoe modified with a webbed upper can accommodate multi-segment foot marker sets without compromising kinematic measurements under barefoot and shod conditions. Thirty participants (15 controls and 15 participants with midfoot pain) underwent gait analysis in two conditions, barefoot and shod (wearing a shoe), in random order. The shod condition employed a modified shoe (rubber plimsoll) with a webbed upper, allowing skin-mounted reflective markers to be visualised through slits in the webbed material. Three-dimensional foot kinematics were captured using the Oxford multi-segment foot model whilst participants walked at a self-selected speed. The foot pain group showed greater hindfoot eversion and less hindfoot dorsiflexion than controls in the barefoot condition, and these differences were maintained when measured in the shod condition. Differences between the foot pain and control participants were also observed for walking speed in the barefoot and in the shod conditions. No significant differences between foot pain and control groups were demonstrated at the forefoot in either condition. Subtle differences between pain and control groups, which were found during barefoot walking, are retained when wearing the modified shoe. The novel properties of the modified shoe offer a potential solution for the use of passive infrared-based motion analysis in shod applications, for instance to investigate the kinematic effect of foot orthoses.
Multi-class segmentation of neuronal electron microscopy images using deep learning
NASA Astrophysics Data System (ADS)
Khobragade, Nivedita; Agarwal, Chirag
2018-03-01
Study of connectivity of neural circuits is an essential step towards a better understanding of functioning of the nervous system. With the recent improvement in imaging techniques, high-resolution and high-volume images are being generated requiring automated segmentation techniques. We present a pixel-wise classification method based on Bayesian SegNet architecture. We carried out multi-class segmentation on serial section Transmission Electron Microscopy (ssTEM) images of Drosophila third instar larva ventral nerve cord, labeling the four classes of neuron membranes, neuron intracellular space, mitochondria and glia / extracellular space. Bayesian SegNet was trained using 256 ssTEM images of 256 x 256 pixels and tested on 64 different ssTEM images of the same size, from the same serial stack. Due to high class imbalance, we used a class-balanced version of Bayesian SegNet by re-weighting each class based on their relative frequency. We achieved an overall accuracy of 93% and a mean class accuracy of 88% for pixel-wise segmentation using this encoder-decoder approach. On evaluating the segmentation results using similarity metrics like SSIM and Dice Coefficient, we obtained scores of 0.994 and 0.886 respectively. Additionally, we used the network trained using the 256 ssTEM images of Drosophila third instar larva for multi-class labeling of ISBI 2012 challenge ssTEM dataset.
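A small sketch of class re-weighting by relative label frequency, assuming numpy and integer label volumes; median frequency balancing is shown here as one common recipe, and the exact weighting used in the paper may differ.

    import numpy as np

    def median_frequency_weights(label_volume, n_classes):
        """Class-balancing weights from relative label frequencies:
        weight_c = median(freq) / freq_c, with absent classes weighted 0."""
        counts = np.bincount(label_volume.ravel(), minlength=n_classes).astype(float)
        freq = counts / counts.sum()
        freq[freq == 0] = np.nan                      # ignore absent classes
        weights = np.nanmedian(freq) / freq
        return np.nan_to_num(weights, nan=0.0)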
Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models
Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua
2017-01-01
In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW method, resulting in Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy, and segment the organs slice by slice via multi-object OAAM method. For the segmentation part, a 3D shape constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also tested on the MICCAI 2007 grand challenge for liver segmentation training dataset. The results show the following: (a) An overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3%, false positive volume fraction (FPVF) < 0.2% can be achieved. (b) The initialization performance can be improved by combining AAM and LW. (c) The multi-object strategy greatly facilitates the initialization. (d) Compared to the traditional 3D AAM method, the pseudo 3D OAAM method achieves comparable performance while running 12 times faster. (e) The performance of proposed method is comparable to the state of the art liver segmentation algorithm. The executable version of 3D shape constrained GC with user interface can be downloaded from website http://xinjianchen.wordpress.com/research/. PMID:22311862
VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images.
Chen, Hao; Dou, Qi; Yu, Lequan; Qin, Jing; Heng, Pheng-Ann
2018-04-15
Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved the first place in the challenge out of 37 competitors including several state-of-the-art brain segmentation methods. Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical. Copyright © 2017 Elsevier Inc. All rights reserved.
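The residual-learning idea at the core of VoxResNet can be sketched as a small 3D residual block: two 3x3x3 convolutions with batch normalization and an identity skip connection, so the block only has to model the difference between its input and output. The channel count and layer ordering below are illustrative, not the exact published configuration; PyTorch is assumed.

```python
import torch
import torch.nn as nn

class VoxResBlock(nn.Module):
    """Voxelwise residual block: pre-activation BN-ReLU-Conv3d twice, plus a skip."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual learning: the convolutions only fit the correction to x,
        # which eases optimization of very deep volumetric networks.
        return x + self.body(x)

# Example: a mini-batch of 3D MR patches (batch, channels, depth, height, width).
x = torch.randn(2, 64, 24, 24, 24)
print(VoxResBlock(64)(x).shape)
```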
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA board with a feature connector for connecting it to a super video windows framegrabber board, for which a 16-bit slot must be available. In addition, a VGA monitor (50-70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
Metaheuristics for the dynamic stochastic dial-a-ride problem with expected return transports.
Schilde, M; Doerner, K F; Hartl, R F
2011-12-01
The problem of transporting patients or elderly people has been widely studied in the literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15%. Moreover, improvements of up to 41% can be achieved for some test instances.
NASA Astrophysics Data System (ADS)
2013-01-01
Due to a production error, the article 'Corrigendum: Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard' by Abhinav K Jha, Matthew A Kupinski, Jeffrey J Rodriguez, Renu M Stephen and Alison T Stopeck was duplicated and the article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters was omitted in the print version of Physics in Medicine & Biology, volume 58, issue 1. The online versions of both articles are not affected. The article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters will be included in the print version of this issue (Physics in Medicine & Biology, volume 58, issue 2). We apologise unreservedly for this error. Jon Ruffle, Publisher
Adaptive partially hidden Markov models with application to bilevel image coding.
Forchhammer, S; Rasmussen, T S
1999-01-01
Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
Socio-Culturally Oriented Plan Discovery Environment (SCOPE)
2005-05-01
U.S. intelligence methods (Dr. George Friedman (2003) Saddam Hussein and the Dollar War. THE STRATFOR WEEKLY, 18 December) ... In the EAGLE setting, we are using a modified version of the fuzzy segmentation algorithm developed by Udupa and his associates to ... based (Fu, et al., 2003) and a cognitive model based (Eilbert, et al., 2002) algorithms, and a method for combining the results. (The method for
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
Kim, Ki-Tack; Lee, Sang-Hun; Suk, Kyung-Soo; Lee, Jung-Hee; Jeong, Bi-O
2010-06-01
The purpose of this study was to analyze the biomechanical effects of three different constrained types of an artificial disc on the implanted and adjacent segments in the lumbar spine using a finite element model (FEM). The created intact model was validated by comparing the flexion-extension response without pre-load with the corresponding results obtained from the published experimental studies. The validated intact lumbar model was tested after implantation of three artificial discs at L4-5. Each implanted model was subjected to a combination of 400 N follower load and 5 Nm of flexion/extension moments. ABAQUS version 6.5 (ABAQUS Inc., Providence, RI, USA) and FEMAP version 8.20 (Electronic Data Systems Corp., Plano, TX, USA) were used for meshing and analysis of geometry of the intact and implanted models. Under the flexion load, the intersegmental rotation angles of all the implanted models were similar to that of the intact model, but under the extension load, the values were greater than that of the intact model. The facet contact loads of three implanted models were greater than the loads observed with the intact model. Under the flexion load, three types of the implanted model at the L4-5 level showed the intersegmental rotation angle similar to the one measured with the intact model. Under the extension load, all of the artificial disc implanted models demonstrated an increased extension rotational angle at the operated level (L4-5), resulting in an increase under the facet contact load when compared with the adjacent segments. The increased facet load may lead to facet degeneration.
Buhimschi, Catalin S; Buhimschi, Irina A; Wehrum, Mark J; Molaskey-Jones, Sherry; Sfakianaki, Anna K; Pettker, Christian M; Thung, Stephen; Campbell, Katherine H; Dulay, Antonette T; Funai, Edmund F; Bahtiyar, Mert O
2011-10-01
To test the hypothesis that myometrial thickness predicts the success of external cephalic version. Abdominal ultrasonographic scans were performed in 114 consecutive pregnant women with breech singletons before an external cephalic version maneuver. Myometrial thickness was measured by a standardized protocol at three sites: the lower segment, midanterior wall, and the fundal uterine wall. Independent variables analyzed in conjunction with myometrial thickness were: maternal age, parity, body mass index, abdominal wall thickness, estimated fetal weight, amniotic fluid index, placental thickness and location, fetal spine position, breech type, and delivery outcomes such as final mode of delivery and birth weight. Successful version was associated with a thicker ultrasonographic fundal myometrium (unsuccessful: 6.7 [5.5-8.4] compared with successful: 7.4 [6.6-9.7] mm, P=.037). Multivariate regression analysis showed that increased fundal myometrial thickness, high amniotic fluid index, and nonfrank breech presentation were the strongest independent predictors of external cephalic version success (P<.001). A fundal myometrial thickness greater than 6.75 mm and an amniotic fluid index greater than 12 cm were each associated with successful external cephalic versions (fundal myometrial thickness: odds ratio [OR] 2.4, 95% confidence interval [CI] 1.1-5.2, P=.029; amniotic fluid index: OR 2.8, 95% CI 1.3-6.0, P=.008). Combining the two variables resulted in an absolute risk reduction for a failed version of 27.6% (95% CI 7.1-48.1) and a number needed to treat of four (95% CI 2.1-14.2). Fundal myometrial thickness and amniotic fluid index contribute to success of external cephalic version and their evaluation can be easily incorporated in algorithms before the procedure. III.
Automatic lumen segmentation in IVOCT images using binary morphological reconstruction
2013-01-01
Background Atherosclerosis causes millions of deaths annually, yielding billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality which displays high resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many different segmentation methods, available in the literature for other modalities, could be successfully applied to IVOCT images, improving accuracies and uses. Method An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates and enhances undesirable and important information, respectively. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu's threshold; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries, and rabbit iliac arteries; the outcomes were compared with the gold standards made by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions In conclusion, by segmenting a number of IVOCT images with various features, the proposed technique was shown to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. PMID:23937790
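A rough flavour of the thresholding-plus-morphology pipeline can be given with scikit-image: Otsu thresholding of the tissue signal followed by binary clean-up, with the lumen recovered as the dark region enclosed by the tissue ring. This is a simplified stand-in for the paper's wavelet, adapted Otsu, and morphological reconstruction chain; the threshold variant, structuring element size, and minimum object size are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk, remove_small_objects

def lumen_mask(ivoct_slice, min_size=500):
    """Rough lumen extraction from a 2D IVOCT cross-section (float array)."""
    t = threshold_otsu(ivoct_slice)
    tissue = ivoct_slice > t                         # bright tissue pixels
    tissue = binary_closing(tissue, disk(3))         # bridge small gaps in the tissue ring
    tissue = remove_small_objects(tissue, min_size)  # drop speckle
    # The lumen is the dark area enclosed by the tissue ring.
    return binary_fill_holes(tissue) & ~tissue

# Usage with a hypothetical cross-section stored as a NumPy array:
# mask = lumen_mask(np.load("ivoct_slice.npy"))
```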
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
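The circular standard deviation used here as a disorder measure can be computed directly with SciPy. The sketch below treats segment orientations as axial data (theta and theta + 180 degrees are equivalent) by doubling the angles before computing circular statistics; length weighting is omitted, so this is only an illustration of the measure, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import circmean, circstd

def segment_disorder(orientations_deg):
    """Circular mean direction and circular standard deviation of fault-segment
    orientations (degrees). Angles are doubled, statistics taken on the circle,
    then halved, which is the standard treatment of axial data."""
    doubled = np.deg2rad(2.0 * np.asarray(orientations_deg, dtype=float))
    mean_dir = 0.5 * np.rad2deg(circmean(doubled)) % 180.0
    disorder = 0.5 * np.rad2deg(circstd(doubled))   # larger value = more disordered trace
    return mean_dir, disorder

# Example: a well-aligned (mature) trace vs. a more irregular one.
print(segment_disorder([44, 46, 45, 47, 43]))
print(segment_disorder([30, 60, 50, 80, 10]))
```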
Kushibar, Kaisar; Valverde, Sergi; González-Villà, Sandra; Bernal, Jose; Cabezas, Mariano; Oliver, Arnau; Lladó, Xavier
2018-06-15
Sub-cortical brain structure segmentation in Magnetic Resonance Images (MRI) has attracted the interest of the research community for a long time as morphological changes in these structures are related to different neurodegenerative disorders. However, manual segmentation of these structures can be tedious and prone to variability, highlighting the need for robust automated segmentation methods. In this paper, we present a novel convolutional neural network based approach for accurate segmentation of the sub-cortical brain structures that combines both convolutional and prior spatial features for improving the segmentation accuracy. In order to increase the accuracy of the automated segmentation, we propose to train the network using a restricted sample selection to force the network to learn the most difficult parts of the structures. We evaluate the accuracy of the proposed method on the public MICCAI 2012 challenge and IBSR 18 datasets, comparing it with different traditional and deep learning state-of-the-art methods. On the MICCAI 2012 dataset, our method shows an excellent performance comparable to the best participant strategy on the challenge, while performing significantly better than state-of-the-art techniques such as FreeSurfer and FIRST. On the IBSR 18 dataset, our method also exhibits a significant increase in the performance with respect to not only FreeSurfer and FIRST, but also comparable or better results than other recent deep learning approaches. Moreover, our experiments show that both the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method. In order to encourage the reproducibility and the use of the proposed method, a public version of our approach is available to download for the neuroimaging community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.
2003-12-01
We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes of magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
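The magnitude-frequency statistics mentioned at the end can be computed from a simulated catalog with a few lines of NumPy; the sketch below builds cumulative counts N(>=M) and a maximum-likelihood b-value (Aki's estimator, without a binning correction). This is a generic post-processing illustration on a synthetic catalog, not part of the Virtual California code.

```python
import numpy as np

def magnitude_frequency(magnitudes, m_min=6.0, dm=0.1):
    """Cumulative magnitude-frequency curve and maximum-likelihood b-value."""
    mags = np.asarray(magnitudes, dtype=float)
    bins = np.arange(m_min, mags.max() + dm, dm)
    counts = np.array([(mags >= m).sum() for m in bins])   # N(>= M) per bin
    b_value = np.log10(np.e) / (mags.mean() - m_min)       # Aki's estimator
    return bins, counts, b_value

# Synthetic catalog drawn from an exponential magnitude law (b close to 1).
rng = np.random.default_rng(0)
catalog = 6.0 + rng.exponential(scale=1.0 / np.log(10), size=2000)
bins, counts, b = magnitude_frequency(catalog)
print(round(b, 2))
```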
The Karlin-McGregor formula for a variant of a discrete version of Walsh's spider
NASA Astrophysics Data System (ADS)
Grünbaum, F. Alberto
2009-10-01
We consider a variant of a discrete space version of Walsh's spider, see Walsh (1978 Temps Locaux, Asterisque vol 52-53 (Paris: Soc. Math. de France)) as well as Evans and Sowers (2003 Ann. Probab. 31 486-527 and its references). This process can be seen as an instance of a quasi-birth-and-death process, a class of random walks for which the classical theory of Karlin and McGregor can be nicely adapted as in Dette, Reuther, Studden and Zygmunt (2006 SIAM J. Matrix Anal. Appl. 29 117-42), Grünbaum (2007 Probability, Geometry and Integrable Systems ed Pinsky and Birnir vol 55 (Berkeley, CA: MSRI publication) pp. 241-60, see also arXiv math PR/0703375), Grünbaum (2007 Dagstuhl Seminar Proc. 07461 on Numerical Methods in Structured Markov Chains ed Bini), Grünbaum (2008 Proceedings of IWOTA) and Grünbaum and de la Iglesia (2008 SIAM J. Matrix Anal. Appl. 30 741-63). We give here a weight matrix that makes the corresponding matrix-valued orthogonal polynomials orthogonal to each other. We also determine the polynomials themselves and thus obtain all the ingredients to apply a matrix-valued version of the Karlin-McGregor formula. Dedicated to Jack Schwartz, who passed away on March 2, 2009.
An anti-neutrino detector to monitor nuclear reactor's power and fuel composition
NASA Astrophysics Data System (ADS)
Battaglieri, M.; DeVita, R.; Firpo, G.; Neuhold, P.; Osipenko, M.; Piombo, D.; Ricco, G.; Ripani, M.; Taiuti, M.
2010-05-01
In this contribution, we present the expected performance of a new detector to measure the absolute energy-integrated flux and the energy spectrum of anti-neutrinos emitted by a nuclear power plant. The number of detected anti-neutrinos is a direct measure of the power, while from the energy spectrum it is possible to infer the evolution in time of the core isotopic composition. The proposed method should be sensitive to a sudden change in the core burn-up as caused, for instance, by a fraudulent subtraction of plutonium. The detector, a 130×100×100 cm3 cube with 1 m3 active volume, made of plastic scintillator wrapped in thin Gd foils, is segmented into 50 independent optical channels read, side by side, by a pair of 3 in. photomultipliers. Anti-neutrinos interact with the hydrogen contained in the plastic scintillator via inverse β-decay (ν̄p → e⁺n). The high segmentation of the detector allows the background from other reactions to be reduced by detecting independent hits for the positron, the two photons emitted in the e+e- annihilation, and the neutron.
Cruise control for segmented flow.
Abolhasani, Milad; Singh, Mayank; Kumacheva, Eugenia; Günther, Axel
2012-11-21
Capitalizing on the benefits of microscale segmented flows, e.g., enhanced mixing and reduced sample dispersion, so far requires specialist training and accommodating a few experimental inconveniences. For instance, microscale gas-liquid flows in many current setups take at least 10 min to stabilize and iterative manual adjustments are needed to achieve or maintain desired mixing or residence times. Here, we report a cruise control strategy that overcomes these limitations and allows microscale gas-liquid (bubble) and liquid-liquid (droplet) flow conditions to be rapidly "adjusted" and maintained. Using this strategy we consistently establish bubble and droplet flows with dispersed phase (plug) velocities of 5-300 mm s⁻¹, plug lengths of 0.6-5 mm and continuous phase (slug) lengths of 0.5-3 mm. The mixing times (1-5 s), mass transfer times (33-250 ms) and residence times (3-300 s) can therefore be directly imposed by dynamically controlling the supply of the dispersed and the continuous liquids either from external pumps or from local pressurized reservoirs. In the latter case, no chip-external pumps, liquid-perfused tubes or valves are necessary while unwanted dead volumes are significantly reduced.
Recent developments in guided wave travel time tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zon, Tim van; Volker, Arno
The concept of predictive maintenance using permanent sensors that monitor the integrity of an installation is an interesting addition to the current method of periodic inspections. Guided wave tomography has been developed to create a map of the wall thickness using the travel times of guided waves. It can be used both for monitoring and for inspection of pipe-segments that are difficult to access, for instance at the location of pipe-supports. An important outcome of the tomography is the minimum remaining wall thickness, as this is critical in the scheduling of a replacement of the pipe-segment. In order to improve the sizing accuracy we have improved the tomography scheme. A number of major improvements have been realized, allowing the application envelope to be extended to pipes with a larger wall thickness and to larger distances between the transducer rings. Simulation results indicate that the sizing accuracy has improved and that it is now possible to have a spacing of 8 meters between the source-ring and the receiver-ring. Additionally, a reduction of the number of sensors required might be possible as well.
Mirvis, D M
1988-11-01
Patients with acute inferior myocardial infarction commonly have ST segment depression in the anterior precordial leads. This may reflect either reciprocal changes from the inferior ST elevation or primary ST depression from additional anterior subendocardial ischemia. From a biophysical perspective reciprocal changes should be uniformly anticipated from basic dipole theory. Detection will vary with the size, location, orientation, and electrical intensity of the lesion and with the ECG lead system deployed to register the anterior changes. Alternatively, acute occlusion of the right coronary artery may produce ischemia in the anterior left ventricular wall supplied by a stenotic anterior descending coronary artery. Anterior ischemia may result from the abnormal hemodynamics or the reduced collateral flow produced by acute right coronary artery occlusion. Thus both mechanisms are based on sound physiologic principles. A review of the clinical literature suggests that such patients represent a heterogeneous group. In some instances coexistent anterior ischemia is present, whereas in others the anterior ST depression is the passive reflection of inferior ST elevation augmented in many cases by a large infarct size or more extensive posterobasal or septal involvement.
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instant in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method, are presented.
Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel
2012-01-01
Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's region intercomparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.
2011-03-01
Landmark-based statistical object modeling techniques, such as the Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in Rn. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and proceeds by subdividing the shapes into connected boundary segments by the line determined by these points. A shape analysis method β is applied on each segment to determine a landmark on the segment. This point introduces more pairs of points, the lines defined by which are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby keeping correspondence among generated points automatically by the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines are left to be considered that indicate (as per method β) that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.
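The recursive boundary subdivision can be sketched for a single 2D shape with the distance-based choice of method β: on each open boundary segment, select the point farthest from the chord joining the segment's endpoints, then recurse. The snippet below is a single-shape illustration only; the synchronization of the recursion level across all training shapes, and method α for the two initial points, are assumed to be handled elsewhere.

```python
import numpy as np

def subdivide(boundary, i, j, min_dist=2.0, landmarks=None):
    """Distance-based recursive boundary subdivision on one shape.

    boundary: (N, 2) array of ordered contour points.
    i, j: indices of two initial corresponding points (from a method alpha).
    min_dist: stop when no interior point is farther than this from the chord.
    """
    if landmarks is None:
        landmarks = [i, j]
    idx = np.arange(i + 1, j)                       # interior points of this segment
    if idx.size == 0:
        return sorted(landmarks)
    p, q = boundary[i], boundary[j]
    chord = (q - p) / (np.linalg.norm(q - p) + 1e-12)
    rel = boundary[idx] - p
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])  # distance to the chord
    if dist.max() < min_dist:                       # method beta: no landmark on this segment
        return sorted(landmarks)
    k = int(idx[np.argmax(dist)])                   # new landmark on this segment
    landmarks.append(k)
    subdivide(boundary, i, k, min_dist, landmarks)
    subdivide(boundary, k, j, min_dist, landmarks)
    return sorted(landmarks)
```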
Structural Implications of Fluorescence Quenching in the Shaker K+ Channel
Cha, Albert; Bezanilla, Francisco
1998-01-01
When attached to specific sites near the S4 segment of the nonconducting (W434F) Shaker potassium channel, the fluorescent probe tetramethylrhodamine maleimide undergoes voltage-dependent changes in intensity that correlate with the movement of the voltage sensor (Mannuzzu, L.M., M.M. Moronne, and E.Y. Isacoff. 1996. Science. 271:213–216; Cha, A., and F. Bezanilla. 1997. Neuron. 19:1127–1140). The characteristics of this voltage-dependent fluorescence quenching are different in a conducting version of the channel with a different pore substitution (T449Y). Blocking the pore of the T449Y construct with either tetraethylammonium or agitoxin removes a fluorescence component that correlates with the voltage dependence but not the kinetics of ionic activation. This pore-mediated modulation of the fluorescence quenching near the S4 segment suggests that the fluorophore is affected by the state of the external pore. In addition, this modulation may reflect conformational changes associated with channel opening that are prevented by tetraethylammonium or agitoxin. Studies of pH titration, collisional quenchers, and anisotropy indicate that fluorophores attached to residues near the S4 segment are constrained by a nearby region of protein. The mechanism of fluorescence quenching near the S4 segment does not involve either reorientation of the fluorophore or a voltage-dependent excitation shift and is different from the quenching mechanism observed at a site near the S2 segment. Taken together, these results suggest that the extracellular portion of the S4 segment resides in an aqueous protein vestibule and is influenced by the state of the external pore. PMID:9758859
NASA Astrophysics Data System (ADS)
Peleshko, V. A.
2016-06-01
The deviator constitutive relation of the proposed theory of plasticity has a three-term form (the stress, stress rate, and strain rate vectors formed from the deviators are collinear) and, in the specialized (applied) version, in addition to the simple loading function, contains four dimensionless constants of the material determined from experiments along a two-link strain trajectory with an orthogonal break. The proposed simple mechanism is used to calculate the constants of the model for four metallic materials that differ significantly in composition and mechanical properties; the obtained constants do not deviate much from their average values (over the four materials). The latter are taken as universal constants in the engineering version of the model, which thus requires only one basic experiment, i.e., a simple loading test. If the material exhibits the strengthening property in cyclic circular deformation, then the model contains an additional constant determined from the experiment along a strain trajectory of this type. (In the engineering version of the model, the cyclic strengthening effect is not taken into account, which imposes a certain upper bound on the difference between the length of the strain trajectory arc and the modulus of the strain vector.) We present the results of model verification using the experimental data available in the literature on combined loading along two- and multi-link strain trajectories with various lengths of links and angles of breaks, with plane curvilinear segments of various constant and variable curvature, and with three-dimensional helical segments of various curvature and twist. (All in all, we use more than 80 strain programs; the materials are low- and medium-carbon steels, brass, and stainless steel.) These results prove that the model can be used to describe the process of arbitrary active (in the sense of nonnegative capacity of the shear) combined loading and final unloading of originally quasi-isotropic elastoplastic materials. In practical calculations, in the absence of experimental data about the properties of a material under combined loading, the use of the engineering version of the model is quite acceptable. The simple identification, wide verifiability, and the availability of a software implementation of the method for solving initial-boundary value problems permit treating the proposed theory as an applied theory.
The inverse problem of the calculus of variations for discrete systems
NASA Astrophysics Data System (ADS)
Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David
2018-05-01
We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.
De Winter, Joeri; Wagemans, Johan
2008-01-01
Attneave (1954 Psychological Review 61 183-193) demonstrated that a line drawing of a sleeping cat can still be identified when the smoothly curved contours are replaced by straight-line segments connecting the positive maxima and negative minima of contour curvature. Using the set of line drawings by Snodgrass and Vanderwart (1980 Journal of Experimental Psychology: Human Learning and Memory 6 174-215) we made outline versions (with known curvature values along the contour) that can still be identified and that can be used to test Attneave's demonstration more systematically and more thoroughly. In five experiments (with 444 subjects in total), we tested identifiability of straight-line versions of 184 stimuli with different selections of points to be connected (using 24 to 28 subjects per stimulus per condition). Straight-line versions connecting curvature extrema were easier to identify than those based on inflections (where curvature changes sign), and those connecting salient points (determined by 161 independent subjects) were easier than those connecting midpoints. However, identification varied considerably between objects: some were almost always identifiable and others almost never, regardless of the selection criterion, whereas identifiability depended on the specific shape attributes preserved in the straight-line version of the outline in other objects. Results are discussed in relation to Attneave's original hypotheses as well as in the light of more recent theories on shape perception and object identification.
1993-01-01
elements in the transaction set. A convention is usually developed before any computer EDI systems development work and serves as a design document when...
Passive Fingerprinting Of Computer Network Reconnaissance Tools
2009-09-01
v6 for version 6; MITM: Man-In-The-Middle Attack; MSS: Maximum Segment Size; NOP: No Operation Performed; NPS: Naval Postgraduate School; OS... specific, or man-in-the-middle (MITM) attacks. Depending on the attacker's position to access the targeted network, the attacker may be able to... identification numbers. Both are ordinarily supposed to be initialized as a random number to make it difficult for an attacker to perform an injection MITM
NASA Ames potential flow analysis (POTFAN) geometry program (POTGEM), version 1
NASA Technical Reports Server (NTRS)
Medan, R. T.; Bullock, R. B.
1976-01-01
A computer program known as POTGEM is reported which has been developed as an independent segment of a three-dimensional linearized, potential flow analysis system and which is used to generate a panel point description of arbitrary, three-dimensional bodies from convenient engineering descriptions consisting of equations and/or tables. Due to the independent, modular nature of the program, it may be used to generate corner points for other computer programs.
2015-07-02
defense acquisitions may depend less on the extent to which provisions of the bill make substantive changes to acquisitions... Because the House Armed Services Committee's focus on small business predates the current reform effort, and because small... business provisions also affect only a specific segment of the industrial base, not the overall acquisition system, such sections were excluded from
NASA Technical Reports Server (NTRS)
Ding, Feng; Fang, Fan; Hearty, Thomas J.; Theobald, Michael; Vollmer, Bruce; Lynnes, Christopher
2014-01-01
The Atmospheric Infrared Sounder (AIRS) mission is entering its 13th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing long-wave radiation, cloud properties, and trace gases. AIRS data have thus been widely used, among other things, for short-term climate research and as an observational component for model evaluation. One instance is the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which uses AIRS version 5 data in climate model evaluation. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the AIRS mission. The GES DISC, in collaboration with the AIRS Project, released data from the version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. The ongoing Earth System Grid for next generation climate model research project, a collaborative effort of GES DISC and NASA JPL, will bring in temperature and humidity profiles from AIRS version 6. The AIRS version 6 product adds a new "TqJoint" data group, which contains data for a common set of observations across water vapor and temperature at all atmospheric levels and is suitable for climate process studies. How different may the monthly temperature and humidity profiles in the "TqJoint" group be from the "Standard" group, where temperature and water vapor are not always valid at the same time? This study aims to answer the question by comprehensively comparing the temperature and humidity profiles from the "TqJoint" group and the "Standard" group. The comparison includes mean differences at different levels globally and over land and ocean. We are also working on examining the sampling differences between the "TqJoint" and "Standard" groups using MERRA data.
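The kind of comparison described here, level-by-level mean differences between the "TqJoint" and "Standard" profiles, reduces to a NaN-aware average over profiles. The sketch below uses synthetic arrays as stand-ins for the two groups; file formats, level counts, and the missing-data convention are assumptions, not the actual AIRS product layout.

```python
import numpy as np

def level_mean_difference(tqjoint_t, standard_t):
    """Mean temperature difference per pressure level between two groups.

    Inputs: arrays of shape (n_profiles, n_levels) with NaN marking levels
    that are not valid, mimicking the differing sampling of the two groups.
    """
    return np.nanmean(tqjoint_t, axis=0) - np.nanmean(standard_t, axis=0)

# Synthetic example on 24 AIRS-like pressure levels.
rng = np.random.default_rng(1)
tq = 250 + rng.normal(0, 2, size=(1000, 24))
st = 250 + rng.normal(0, 2, size=(1200, 24))
st[rng.random(st.shape) < 0.1] = np.nan      # extra, partly invalid samples in "Standard"
print(level_mean_difference(tq, st).round(2))
```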
Use of graph algorithms in the processing and analysis of images with focus on the biomedical data.
Zdimalova, M; Roznovjak, R; Weismann, P; El Falougy, H; Kubikova, E
2017-01-01
Image segmentation is a well-known problem in the field of image processing. A great number of methods based on different approaches to this issue have been created. One of these approaches utilizes the findings of graph theory. Our work focuses on segmentation using shortest paths in a graph. Specifically, we deal with methods of "Intelligent Scissors," which use Dijkstra's algorithm to find the shortest paths. We created new software in the Microsoft Visual Studio 2013 integrated development environment using Visual C++ in the language C++/CLI. We created a forms application with a graphical user interface for Windows, using the .NET platform (version 4.5). The program was used for handling and processing the original medical data. The major disadvantage of the "Intelligent Scissors" method is the long computation time of Dijkstra's algorithm. However, after the implementation of a more efficient priority queue, this problem could be alleviated. The main advantage of this method lies in its training, which enables adaptation to the particular kind of edge we need to segment. The user involvement has a significant influence on the process of segmentation, which greatly helps to achieve high-quality results (Fig. 7, Ref. 13).
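The core of the "Intelligent Scissors" idea, a minimum-cost path between two user-selected pixels computed with Dijkstra's algorithm on a cost image, can be sketched in a few lines using a binary-heap priority queue (the efficiency point raised above). This is a generic illustration in Python, not the authors' C++/CLI implementation; the 4-neighbourhood and the cost convention (low values on likely edges) are assumptions.

```python
import heapq
import numpy as np

def live_wire_path(cost, start, end):
    """Minimum-cost 4-connected path from start to end on a 2D cost image."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                            # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:                        # walk the predecessors back to start
        node = prev[node]
        path.append(node)
    return path[::-1]
```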
Extracting oil palm crown from WorldView-2 satellite image
NASA Astrophysics Data System (ADS)
Korom, A.; Phua, M.-H.; Hirata, Y.; Matsuura, T.
2014-02-01
Oil palm (OP) is the most important commercial crop in Malaysia. Estimating the crowns is important for biomass estimation from high resolution satellite (HRS) images. This study examined the extraction of individual OP crowns from a WorldView-2 image using a twofold algorithm, i.e., masking of non-OP pixels and detection of individual OP crowns based on the watershed segmentation of greyscale images. The study site was located in Beluran district, central Sabah, where mature OPs aged 15 to 25 years have been planted. We examined two compound vegetation indices, (NDVI+1)*DVI and NDII, for masking non-OP crown areas. Using kappa statistics, an optimal threshold value was set, with the highest accuracy at 90.6% for differentiating OP crown areas from non-OP areas. After the watershed segmentation of OP crown areas with additional post-procedures, about 77% of individual OP crowns were successfully detected in comparison to the manual delineation. Shape and location of each crown segment were then assessed based on a modified version of the goodness measures of Möller et al., which was 0.3, indicating an acceptable CSGM (combined segmentation goodness measure) agreement between the automated and manually delineated crowns (the perfect case is '1').
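The two-step crown extraction (vegetation-index masking followed by watershed on a greyscale surface) can be sketched with scikit-image. For simplicity the sketch thresholds plain NDVI and runs the watershed on a distance transform with local-maximum markers; the paper's compound indices, kappa-based threshold selection, and post-procedures are not reproduced, and the threshold and marker spacing below are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def crown_segments(nir, red, ndvi_thresh=0.4, min_distance=7):
    """Mask vegetated pixels by NDVI, then split the mask into crowns by watershed."""
    ndvi = (nir - red) / (nir + red + 1e-6)
    mask = ndvi > ndvi_thresh
    distance = ndi.distance_transform_edt(mask)
    # One marker per crown: local maxima of the distance map inside the mask.
    coords = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=mask)
    return labels      # integer label image, one id per detected crown
```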
Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B
2012-01-01
The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.
An enhanced digital line graph design
Guptill, Stephen C.
1990-01-01
In response to increasing information demands on its digital cartographic data, the U.S. Geological Survey has designed an enhanced version of the Digital Line Graph, termed Digital Line Graph - Enhanced (DLG-E). In the DLG-E model, the phenomena represented by geographic and cartographic data are termed entities. Entities represent individual phenomena in the real world. A feature is an abstraction of a set of entities, with the feature description encompassing only selected properties of the entities (typically the properties that have been portrayed cartographically on a map). Buildings, bridges, roads, streams, grasslands, and counties are examples of features. A feature instance, that is, one occurrence of a feature, is described in the digital environment by feature objects and spatial objects. A feature object identifies a feature instance and its nonlocational attributes. Nontopological relationships are associated with feature objects. The locational aspects of the feature instance are represented by spatial objects. Four spatial objects (points, nodes, chains, and polygons) and their topological relationships are defined. To link the locational and nonlocational aspects of the feature instance, a given feature object is associated with (or is composed of) a set of spatial objects. These objects, attributes, and relationships are the components of the DLG-E data model. To establish a domain of features for DLG-E, an approach using a set of classes, or views, of spatial entities was adopted. The five views that were developed are cover, division, ecosystem, geoposition, and morphology. The views are exclusive; each view is a self-contained analytical approach to the entire range of world features. Because each view is independent of the others, a single point on the surface of the Earth can be represented under multiple views. Under the five views, over 200 features were identified and defined. This set constitutes an initial domain of DLG-E features.
The importance of having an appropriate relational data segmentation in ATLAS
NASA Astrophysics Data System (ADS)
Dimitrov, G.
2015-12-01
In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at LHC where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning we add our own logic in PLSQL procedures and scheduler jobs to sustain data sliding windows in order to enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However the most challenging issue was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands list type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and queries execution plan stability are important factors when choosing an appropriate physical model for the application data management. The so-far accumulated knowledge and analysis on the new Oracle 12c version features that could be beneficial will be shared with the audience.
Isaksen, Jonas; Leber, Remo; Schmid, Ramun; Schmid, Hans-Jakob; Generali, Gianluca; Abächerli, Roger
2017-02-01
The first-order high-pass filter (AC coupling) has previously been shown to affect the ECG for higher cut-off frequencies. We seek to find a systematic deviation in computer measurements of the electrocardiogram when AC coupling with a 0.05 Hz first-order high-pass filter is used. The standard 12-lead electrocardiograms from 1248 patients and the automated measurements of their DC- and AC-coupled versions were used. We expect a large unipolar QRS-complex to produce a deviation in the opposite direction in the ST-segment. We found a strong correlation between the QRS integral and the offset throughout the ST-segment. The coefficient for J amplitude deviation was found to be -0.277 µV/(µV⋅s). Potentially dangerous alterations to the diagnostically important ST-segment were found. Medical professionals and software developers of electrocardiogram interpretation programs should be aware of such high-pass filter effects, since they could be misinterpreted as pathophysiology or some pathophysiology could be masked by these effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
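The reported effect, an ST offset opposite in sign to the QRS area after AC coupling, can be reproduced qualitatively on a toy signal with SciPy: apply a causal first-order 0.05 Hz high-pass to a large unipolar "QRS" bump and read the offset in a window after it. The sampling rate, beat shape, and window are assumptions made purely for illustration.

```python
import numpy as np
from scipy import signal

fs = 500                                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Toy beat: a large unipolar "QRS" bump; the true ST level is 0 microvolts.
ecg = 1500 * np.exp(-((t - 0.30) / 0.02) ** 2)    # amplitudes in microvolts

# First-order 0.05 Hz high-pass, applied causally like an AC-coupled amplifier.
b, a = signal.butter(1, 0.05, btype="highpass", fs=fs)
ac = signal.lfilter(b, a, ecg)

st_window = (t > 0.40) & (t < 0.48)
print("ST offset after AC coupling (uV):", round(float(ac[st_window].mean()), 1))
# The offset is opposite in sign to the QRS area, illustrating the
# QRS-integral-dependent ST deviation discussed in the abstract.
```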
Refinement of ground reference data with segmented image data
NASA Technical Reports Server (NTRS)
Robinson, Jon W.; Tilton, James C.
1991-01-01
One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low altitude aerial photographs and then digitize the cover types on a digitizing tablet and register them to 7.5 minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges for the GRD depends on the spacing of the points selected on the digitizing tablet. One of the consequences of this is that when overlaid on the image, errors and missed detail in the GRD become evident. An approach is discussed for correcting these errors and adding detail to the GRD through the use of a highly interactive, visually oriented process. This process involves the use of overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide a means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those segment edges that remain and that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, to within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
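Variant (iv) relies on ICP; a generic point-to-point ICP with the rigid transform estimated by SVD (Kabsch) is sketched below. It is only the standard algorithm, not the adapted plane-based version described in the paper; SciPy's KD-tree is assumed for nearest-neighbour matching and no outlier rejection is included.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Align source (e.g. TIR-derived) to target (e.g. RGB-derived) point cloud.

    source, target: (N, 3) and (M, 3) arrays. Returns rotation, translation,
    and the transformed source points."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # closest target point for each source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```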
Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.
Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart
2014-10-01
Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset.
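As a minimal sketch of the max-pooling intuition behind MIL at prediction time (not the MS-MIL training procedure itself), the snippet below scores every segment of a video bag with a linear model and lets the highest-scoring segment determine both the bag label and the localized pain segment; the feature dimensions, weights, and threshold are hypothetical.

```python
import numpy as np

def bag_prediction(segment_features, w, b, threshold=0.0):
    """Score each segment (instance) and max-pool: the bag is 'pain' if any segment is."""
    scores = segment_features @ w + b
    best = int(np.argmax(scores))
    return scores[best] > threshold, best, scores

# Toy bag: 5 segments of a video, each described by a 3-dimensional BoW-like feature vector.
segments = np.array([[0.1, 0.8, 0.2],
                     [0.7, 0.1, 0.9],
                     [0.2, 0.2, 0.1],
                     [0.9, 0.0, 0.8],
                     [0.3, 0.5, 0.4]])
w, b = np.array([1.0, -0.5, 0.7]), -0.6
is_pain, where, scores = bag_prediction(segments, w, b)
print(f"bag label: {is_pain}, most painful segment: {where}, scores: {np.round(scores, 2)}")
```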
Second quantization in bit-string physics
NASA Technical Reports Server (NTRS)
Noyes, H. Pierre
1993-01-01
Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one-particle Dirac equation is derived, as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.
Brain tumor image segmentation using kernel dictionary learning.
Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H
2015-08-01
Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary performance obtained is competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.
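For readers unfamiliar with dictionary learning, the following sketch shows the plain linear variant using scikit-learn on hypothetical voxel/patch features; it illustrates only the general DL step and does not implement the paper's kernelized, discriminative K-SVD formulation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Hypothetical feature matrix: one row per voxel/patch, e.g. stacked multi-modal MR intensities.
rng = np.random.default_rng(0)
X = rng.random((5000, 27))

dl = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=256, random_state=0)
codes = dl.fit(X).transform(X)   # sparse codes; these could feed a per-voxel tumor classifier

print("dictionary atoms:", dl.components_.shape)   # (64, 27)
print("sparse codes:", codes.shape)                # (5000, 64)
```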
NASA Technical Reports Server (NTRS)
Osgood, Cathy; Williams, Kevin; Gentry, Philip; Brownfield, Dana; Hallstrom, John; Stuit, Tim
2012-01-01
Orbit Software Suite is used to support a variety of NASA/DM (Dependable Multiprocessor) mission planning and analysis activities on the IPS (Intrusion Prevention System) platform. The suite of Orbit software tools (Orbit Design and Orbit Dynamics) resides on IPS/Linux workstations and is used to perform mission design and analysis tasks corresponding to trajectory/launch window, rendezvous, and proximity operations flight segments. A list of tools in Orbit Software Suite represents tool versions established during/after the Equipment Rehost-3 Project.
Rocket Motor Microphone Investigation
NASA Technical Reports Server (NTRS)
Pilkey, Debbie; Herrera, Eric; Gee, Kent L.; Giraud, Jerom H.; Young, Devin J.
2010-01-01
At ATK's facility in Utah, large full-scale solid rocket motors are tested. The largest is a five-segment version of the reusable solid rocket motor, which is for use on the Ares I launch vehicle. As a continuous improvement project, ATK and BYU investigated the use of microphones on these static tests, the vibration and temperature to which the instruments are subjected, and in particular the use of vent tubes and the effects these vents have at low frequencies.
Astronaut Heidemarie M. Stefanyshyn-Piper During STS-115 Training
NASA Technical Reports Server (NTRS)
2002-01-01
Attired in a training version of the Extravehicular Mobility Unit (EMU) space suit, STS-115 astronaut and mission specialist, Heidemarie M. Stefanyshyn-Piper, is about to begin a training session in the Neutral Buoyancy Laboratory (NBL) near Johnson Space Center in preparation for the STS-115 mission. Launched on September 9, 2006, the STS-115 mission continued assembly of the International Space Station (ISS) with the installation of the truss segments P3 and P4.
Astronaut Heidemarie M. Stefanyshyn-Piper During STS-115 Training
NASA Technical Reports Server (NTRS)
2002-01-01
Attired in a training version of the Extravehicular Mobility Unit (EMU) space suit, STS-115 astronaut and mission specialist, Heidemarie M. Stefanyshyn-Piper, is submerged into the waters of the Neutral Buoyancy Laboratory (NBL) near Johnson Space Center for training in preparation for the STS-115 mission. Launched on September 9, 2006, the STS-115 mission continued assembly of the International Space Station (ISS) with the installation of the truss segments P3 and P4.
Navigation/Prop Software Suite
NASA Technical Reports Server (NTRS)
Bruchmiller, Tomas; Tran, Sanh; Lee, Mathew; Bucker, Scott; Bupane, Catherine; Bennett, Charles; Cantu, Sergio; Kwong, Ping; Propst, Carolyn
2012-01-01
Navigation (Nav)/Prop software is used to support shuttle mission analysis, production, and some operations tasks. The Nav/Prop suite containing configuration items (CIs) resides on IPS/Linux workstations. It features lifecycle documents and data files used for shuttle navigation and propellant analysis for all flight segments. This suite also includes trajectory server, archive server, and RAT software residing on MCC/Linux workstations. Navigation/Prop represents tool versions established during or after IPS Equipment Rehost-3 or after the MCC Rehost.
Bar-Yosef, Omer; Rotman, Yaron; Nelken, Israel
2002-10-01
The responses of neurons to natural sounds and simplified natural sounds were recorded in the primary auditory cortex (AI) of halothane-anesthetized cats. Bird chirps were used as the base natural stimuli. They were first presented within the original acoustic context (at least 250 msec of sound before and after each chirp). The first simplification step consisted of extracting a short segment containing just the chirp from the longer segment. For the second step, the chirp was cleaned of its accompanying background noise. Finally, each chirp was replaced by an artificial version that had approximately the same frequency trajectory but constant amplitude. Neurons had a wide range of different response patterns to these stimuli, and many neurons had late response components in addition to, or instead of, their onset responses. In general, every simplification step had a substantial influence on the responses. Neither the extracted chirp nor the clean chirp evoked a response similar to that of the chirp presented within its acoustic context. The extracted chirp evoked different responses than its clean version. The artificial chirps evoked stronger responses with a shorter latency than the corresponding clean chirps because of envelope differences. These results illustrate the sensitivity of neurons in AI to small perturbations of their acoustic input. In particular, they pose a challenge to models based on linear summation of energy within a spectrotemporal receptive field.
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations
Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808
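As a simplified illustration of automatic USV detection (not A-MUD's actual algorithm, which runs as an STx script), the sketch below thresholds spectrogram energy in an assumed 30-110 kHz band of a hypothetical recording and merges consecutive above-threshold frames into candidate calls.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical input: a recording sampled fast enough (e.g. 250 kHz) to cover 30-110 kHz USVs.
fs, audio = wavfile.read("mouse_recording.wav")
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)

band = (f >= 30_000) & (f <= 110_000)                    # ultrasonic band of interest
band_energy = Sxx[band].sum(axis=0)
threshold = band_energy.mean() + 3 * band_energy.std()   # assumed detection threshold
active = (band_energy > threshold).astype(int)

# Merge consecutive above-threshold frames into candidate call segments (start, end) in seconds.
edges = np.diff(np.concatenate(([0], active, [0])))
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0] - 1
calls = [(t[s], t[e]) for s, e in zip(starts, ends)]
print(calls)
```

A real detector would additionally apply duration limits and noise rejection, which is where error rates such as those compared in the study come from.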
Kan, S L; Yang, B; Ning, G Z; Gao, S J; Sun, J C; Feng, S Q
2016-12-01
Objective: To compare the benefits and harms of cervical disc arthroplasty (CDA) with anterior cervical discectomy and fusion (ACDF) for symptomatic cervical disc disease at mid- to long-term follow-up. Methods: Electronic searches were made in PubMed, EMBASE, and the Cochrane Library for randomized controlled trials with at least 48 months of follow-up. Outcomes were reported as relative risk or standardized mean difference. Meta-analysis was carried out using RevMan version 5.3 and Stata version 12.0. Results: Seven trials were included, involving 2,302 participants. The results of this meta-analysis indicated that CDA brought about fewer secondary surgical procedures, lower Neck Disability Index (NDI) scores, lower neck and arm pain scores, greater SF-36 Physical Component Summary (PCS) and Mental Component Summary (MCS) scores, greater range of motion (ROM) at the operative level, and less superior adjacent-segment degeneration (P < 0.05) than ACDF. CDA was not statistically different from ACDF in inferior adjacent-segment degeneration, neurological success, and adverse events (P > 0.05). Conclusions: CDA can significantly reduce the rate of secondary surgical procedures compared with ACDF. Meanwhile, CDA is superior or equivalent to ACDF in other aspects. As some included studies were not double-blinded and some potential biases exist, more high-quality randomized controlled trials are required to reach more reliable conclusions.
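For illustration of the pooling step such a meta-analysis performs (with made-up numbers rather than the seven included trials), a minimal DerSimonian-Laird random-effects estimate of a pooled log relative risk could look like this:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effects (e.g., log relative risks)."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                                   # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)             # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = 1.0 / (variances + tau2)
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, theta_re - 1.96 * se, theta_re + 1.96 * se

# Hypothetical per-trial log relative risks and variances for secondary surgery, CDA vs ACDF.
log_rr, ci_lo, ci_hi = dersimonian_laird([-0.7, -0.5, -0.9], [0.04, 0.06, 0.05])
print(np.exp([log_rr, ci_lo, ci_hi]))   # pooled RR with 95% confidence interval
```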
WRIST: A WRist Image Segmentation Toolkit for carpal bone delineation from MRI.
Foster, Brent; Joshi, Anand A; Borgese, Marissa; Abdelhafez, Yasser; Boutin, Robert D; Chaudhari, Abhijit J
2018-01-01
Segmentation of the carpal bones from 3D imaging modalities, such as magnetic resonance imaging (MRI), is commonly performed for in vivo analysis of wrist morphology, kinematics, and biomechanics. This crucial task is typically carried out manually and is labor intensive, time consuming, subject to high inter- and intra-observer variability, and may result in topologically incorrect surfaces. We present a method, WRist Image Segmentation Toolkit (WRIST), for 3D semi-automated, rapid segmentation of the carpal bones of the wrist from MRI. In our method, the boundaries of the bones are iteratively found using known anatomical constraints as priors and a shape-detection level set. The parameters of the method were optimized using a training dataset of 48 manually segmented carpal bones and evaluated on 112 carpal bones, which included both healthy participants without known wrist conditions and participants with thumb basilar osteoarthritis (OA). Manual segmentation by two expert human observers was considered as a reference. On the healthy subject dataset we obtained a Dice overlap of 93.0 ± 3.8, a Jaccard index of 87.3 ± 6.2, and a Hausdorff distance of 2.7 ± 3.4 mm, while on the OA dataset we obtained a Dice overlap of 90.7 ± 8.6, a Jaccard index of 83.0 ± 10.6, and a Hausdorff distance of 4.0 ± 4.4 mm. The short computational time of 20.8 s per bone (or 5.1 s per bone in the parallelized version) and the high agreement with the expert observers give WRIST the potential to be utilized in musculoskeletal research. Copyright © 2017 Elsevier Ltd. All rights reserved.
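As a sketch of how the reported agreement metrics can be computed from an automated and a reference segmentation (assuming binary masks and surface point sets as inputs; this is not the authors' evaluation code), consider:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_jaccard(auto_mask, manual_mask):
    """Overlap metrics between an automated mask and a reference (manual) mask."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    dice = 2.0 * inter / (a.sum() + m.sum())
    jaccard = inter / np.logical_or(a, m).sum()
    return dice, jaccard

def hausdorff_mm(auto_points, manual_points, voxel_size_mm=1.0):
    """Symmetric Hausdorff distance between two surface point sets, scaled to millimetres."""
    d = max(directed_hausdorff(auto_points, manual_points)[0],
            directed_hausdorff(manual_points, auto_points)[0])
    return d * voxel_size_mm

# Toy 2D example standing in for a carpal bone slice (hypothetical masks).
auto = np.zeros((10, 10), bool); auto[2:7, 2:7] = True
ref = np.zeros((10, 10), bool); ref[3:8, 2:7] = True
print(dice_jaccard(auto, ref))
print(hausdorff_mm(np.argwhere(auto), np.argwhere(ref), voxel_size_mm=0.5))
```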
NASA Tech Briefs, September 2006
NASA Technical Reports Server (NTRS)
2006-01-01
Topics covered include: Improving Thermomechanical Properties of SiC/SiC Composites; Aerogel/Particle Composites for Thermoelectric Devices; Patches for Repairing Ceramics and Ceramic-Matrix Composites; Lower-Conductivity Ceramic Materials for Thermal-Barrier Coatings; An Alternative for Emergency Preemption of Traffic Lights; Vehicle Transponder for Preemption of Traffic Lights; Automated Announcements of Approaching Emergency Vehicles; Intersection Monitor for Traffic-Light-Preemption System; Full-Duplex Digital Communication on a Single Laser Beam; Stabilizing Microwave Frequency of a Photonic Oscillator; Microwave Oscillators Based on Nonlinear WGM Resonators; Pointing Reference Scheme for Free-Space Optical Communications Systems; High-Level Performance Modeling of SAR Systems; Spectral Analysis Tool 6.2 for Windows; Multi-Platform Avionics Simulator; Silicon-Based Optical Modulator with Ferroelectric Layer; Multiplexing Transducers Based on Tunnel-Diode Oscillators; Scheduling with Automated Resolution of Conflicts; Symbolic Constraint Maintenance Grid; Discerning Trends in Performance Across Multiple Events; Magnetic Field Solver; Computing for Aiming a Spaceborne Bistatic-Radar Transmitter; 4-Vinyl-1,3-Dioxolane-2-One as an Additive for Li-Ion Cells; Probabilistic Prediction of Lifetimes of Ceramic Parts; STRANAL-PMC Version 2.0; Micromechanics and Piezo Enhancements of HyperSizer; Single-Phase Rare-Earth Oxide/Aluminum Oxide Glasses; Tilt/Tip/Piston Manipulator with Base-Mounted Actuators; Measurement of Model Noise in a Hard-Wall Wind Tunnel; Loci-STREAM Version 0.9; The Synergistic Engineering Environment; Reconfigurable Software for Controlling Formation Flying; More About the Tetrahedral Unstructured Software System; Computing Flows Using Chimera and Unstructured Grids; Avoiding Obstructions in Aiming a High-Gain Antenna; Analyzing Aeroelastic Stability of a Tilt-Rotor Aircraft; Tracking Positions and Attitudes of Mars Rovers; Stochastic Evolutionary Algorithms for Planning Robot Paths; Compressible Flow Toolbox; Rapid Aeroelastic Analysis of Blade Flutter in Turbomachines; General Flow-Solver Code for Turbomachinery Applications; Code for Multiblock CFD and Heat-Transfer Computations; Rotating-Pump Design Code; Covering a Crucible with Metal Containing Channels; Repairing Fractured Bones by Use of Bioabsorbable Composites; Kalman Filter for Calibrating a Telescope Focal Plane; Electronic Absolute Cartesian Autocollimator; Fiber-Optic Gratings for Lidar Measurements of Water Vapor; Simulating Responses of Gravitational-Wave Instrumentation; SOFTC: A Software Correlator for VLBI; Progress in Computational Simulation of Earthquakes; Database of Properties of Meteors; Computing Spacecraft Solar-Cell Damage by Charged Particles; Thermal Model of a Current-Carrying Wire in a Vacuum; Program for Analyzing Flows in a Complex Network; Program Predicts Performance of Optical Parametric Oscillators; Processing TES Level-1B Data; Automated Camera Calibration; Tracking the Martian CO2 Polar Ice Caps in Infrared Images; Processing TES Level-2 Data; SmaggIce Version 1.8; Solving the Swath Segment Selection Problem; The Spatial Standard Observer; Less-Complex Method of Classifying MPSK; Improvement in Recursive Hierarchical Segmentation of Data; Using Heaps in Recursive Hierarchical Segmentation of Data; Tool for Statistical Analysis and Display of Landing Sites; Automated Assignment of Proposals to Reviewers; Array-Pattern-Match Compiler for Opportunistic Data Analysis; Pre-Processor for Compression of Multispectral Image Data; Compressing Image Data While Limiting the Effects of Data Losses; Flight Operations Analysis Tool; Improvement in Visual Target Tracking for a Mobile Robot; Software for Simulating Air Traffic; Automated Vectorization of Decision-Based Algorithms; Grayscale Optical Correlator Workbench; "One-Stop Shopping" for Ocean Remote-Sensing and Model Data; State Analysis Database Tool; Generating CAHV and CAHVOR Images with Shadows in ROAMS; Improving UDP/IP Transmission Without Increasing Congestion; FORTRAN Versions of Reformulated HFGMC Codes; Program for Editing Spacecraft Command Sequences; Flight-Tested Prototype of BEAM Software; Mission Scenario Development Workbench; Marsviewer; Tool for Analysis and Reduction of Scientific Data; ASPEN Version 3.0; Secure Display of Space-Exploration Images; Digital Front End for Wide-Band VLBI Science Receiver; Multifunctional Tanks for Spacecraft; Lightweight, Segmented, Mostly Silicon Telescope Mirror; Assistant for Analyzing Tropical-Rain-Mapping Radar Data; and Anion-Intercalating Cathodes for High-Energy-Density Cells.
Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version
Babu, M. Rajesh; Dian, S. Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan
2015-01-01
The world is moving towards a new realm of computing such as Internet of Things. The Internet of Things, however, envisions connecting almost all objects within the world to the Internet by recognizing them as smart objects. In doing so, the existing networks which include wired, wireless, and ad hoc networks should be utilized. Moreover, apart from other networks, the ad hoc network is full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks in which the black hole attacks and its versions do serious damage to the entire MANET infrastructure. The severity of this attack increases, when the compromised MANET nodes work in cooperation with each other to make a cooperative black hole attack. Therefore this paper proposes an alleviation procedure which consists of timely mandate procedure, hole detection algorithm, and sensitive guard procedure to detect the maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures QoS guarantee by assuring resource availability thus making the MANET appropriate for Internet of Things. PMID:26495430
NASA Technical Reports Server (NTRS)
Riley, Gary
1991-01-01
The C Language Integrated Production System (CLIPS) is a forward chaining rule based language developed by NASA. CLIPS was designed specifically to provide high portability, low cost, and easy integration with external systems. The current release of CLIPS, version 4.3, is being used by over 2500 users throughout the public and private community. The primary addition to the next release of CLIPS, version 5.0, will be the CLIPS Object Oriented Language (COOL). The major capabilities of COOL are: class definition with multiple inheritance and no restrictions on the number, types, or cardinality of slots; message passing which allows procedural code bundled with an object to be executed; and query functions which allow groups of instances to be examined and manipulated. In addition to COOL, numerous other enhancements were added to CLIPS including: generic functions (which allow different pieces of procedural code to be executed depending upon the types or classes of the arguments); integer and double precision data type support; multiple conflict resolution strategies; global variables; logical dependencies; type checking on facts; full ANSI compiler support; and incremental reset for rules.
NASA Astrophysics Data System (ADS)
Moernaut, Jasper; Daele, Maarten Van; Heirman, Katrien; Fontijn, Karen; Strasser, Michael; Pino, Mario; Urrutia, Roberto; De Batist, Marc
2014-03-01
Understanding the long-term earthquake recurrence pattern at subduction zones requires continuous paleoseismic records with excellent temporal and spatial resolution and stable threshold conditions. South central Chilean lakes are typically characterized by laminated sediments providing a quasi-annual resolution. Our sedimentary data show that lacustrine turbidite sequences accurately reflect the historical record of large interplate earthquakes (among others the 2010 and 1960 events). Furthermore, we found that a turbidite's spatial extent and thickness are a function of the local seismic intensity and can be used for reconstructing paleo-intensities. Consequently, our multilake turbidite record aids in pinpointing magnitudes, rupture locations, and extent of past subduction earthquakes in south central Chile. Comparison of the lacustrine turbidite records with historical reports, a paleotsunami/subsidence record, and a marine megaturbidite record demonstrates that the Valdivia Segment is characterized by a variable rupture mode over the last 900 years including (i) full ruptures (Mw ~9.5: 1960, 1575, 1319 ± 9, 1127 ± 44), (ii) ruptures covering half of the Valdivia Segment (Mw ~9: 1837), and (iii) partial ruptures of much smaller coseismic slip and extent (Mw ~7.5-8: 1737, 1466 ± 4). Also, distant or smaller local earthquakes can leave a specific sedimentary imprint which may resolve subtle differences in seismic intensity values. For instance, the 2010 event at the Maule Segment produced higher seismic intensities toward southeastern localities compared to previous megathrust ruptures of similar size and extent near Concepción.
Menezes, Pedro Monteiro; Cook, Timothy Wayne; Cavalini, Luciana Tricai
2016-01-01
To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. This study followed a multilevel, model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated, and the RDF-based semantics were recovered by the receiving side of the prototype from the shared XML schema. This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard with Semantic Web technologies.
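A minimal sketch of the validate-then-embed idea, assuming a hypothetical schema file phtls_abcde.xsd and data instance extract.xml, and an illustrative OBX segment layout rather than the study's actual message profile, might look like this with lxml:

```python
from lxml import etree

# Validate the XML data instance against the schema constraining the common reference model.
schema = etree.XMLSchema(etree.parse("phtls_abcde.xsd"))
doc = etree.parse("extract.xml")
schema.assertValid(doc)            # raises DocumentInvalid if the instance violates the schema

payload = etree.tostring(doc, encoding="unicode").replace("\r", "").replace("\n", "")

# Embed the validated XML in an OBX segment of a hand-built HL7v2 message (illustrative only).
msh = "MSH|^~\\&|TRAUMA_APP|FIELD|MIRTH|HOSPITAL|20160101120000||ORU^R01|0001|P|2.6"
obx = f"OBX|1|ED|PHTLS^ABCDE_EXTRACT||{payload}||||||F"
hl7v2_message = "\r".join([msh, obx])
print(hl7v2_message)
```

On the receiving side, a middleware channel would parse the OBX payload back into XML and re-validate it against the same shared schema.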
Comprehensive cluster analysis with Transitivity Clustering.
Wittkop, Tobias; Emig, Dorothea; Truss, Anke; Albrecht, Mario; Böcker, Sebastian; Baumbach, Jan
2011-03-01
Transitivity Clustering is a method for partitioning biological data into groups of similar objects, such as genes. It provides integrated access to various functions addressing each step of a typical cluster analysis. To facilitate this, Transitivity Clustering is accessible online and offers three user-friendly interfaces: a powerful stand-alone version, a web interface, and a collection of Cytoscape plug-ins. In this paper, we describe three major workflows: (i) protein (super)family detection with Cytoscape, (ii) protein homology detection with incomplete gold standards, and (iii) clustering of gene expression data. This protocol guides the user through the most important features of Transitivity Clustering and takes ∼1 h to complete.
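As a much-simplified stand-in for the kind of similarity-threshold partitioning Transitivity Clustering performs (the real method solves a weighted transitive graph projection/cluster-editing problem rather than taking raw connected components), a toy sketch could be:

```python
import networkx as nx

def threshold_clusters(similarities, threshold):
    """Group objects whose pairwise similarity exceeds a threshold.

    Simplified illustration only: connected components of the thresholded similarity graph.
    """
    g = nx.Graph()
    for (a, b), s in similarities.items():
        g.add_node(a)
        g.add_node(b)
        if s >= threshold:
            g.add_edge(a, b)
    return [sorted(c) for c in nx.connected_components(g)]

# Toy BLAST-like similarity scores between proteins (hypothetical values).
sims = {("p1", "p2"): 80, ("p2", "p3"): 75, ("p3", "p4"): 10, ("p4", "p5"): 90}
print(threshold_clusters(sims, threshold=50))   # e.g. [['p1', 'p2', 'p3'], ['p4', 'p5']]
```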
Detecting affiliation in colaughter across 24 societies
Bryant, Gregory A.; Fessler, Daniel M. T.; Clint, Edward; Aarøe, Lene; Apicella, Coren L.; Petersen, Michael Bang; Bickham, Shaneikiah T.; Bolyanatz, Alexander; Chavez, Brenda; De Smet, Delphine; Díaz, Cinthya; Fančovičová, Jana; Fux, Michal; Giraldo-Perez, Paulina; Hu, Anning; Kamble, Shanmukh V.; Kameda, Tatsuya; Li, Norman P.; Luberti, Francesca R.; Prokop, Pavol; Quintelier, Katinka; Scelza, Brooke A.; Shin, Hyun Jung; Soler, Montserrat; Stieger, Stefan; van den Hende, Ellis A.; Viciana-Asensio, Hugo; Yildizhan, Saliha Elif; Yong, Jose C.; Yuditha, Tessa; Zhou, Yi
2016-01-01
Laughter is a nonverbal vocal expression that often communicates positive affect and cooperative intent in humans. Temporally coincident laughter occurring within groups is a potentially rich cue of affiliation to overhearers. We examined listeners’ judgments of affiliation based on brief, decontextualized instances of colaughter between either established friends or recently acquainted strangers. In a sample of 966 participants from 24 societies, people reliably distinguished friends from strangers with an accuracy of 53–67%. Acoustic analyses of the individual laughter segments revealed that, across cultures, listeners’ judgments were consistently predicted by voicing dynamics, suggesting perceptual sensitivity to emotionally triggered spontaneous production. Colaughter affords rapid and accurate appraisals of affiliation that transcend cultural and linguistic boundaries, and may constitute a universal means of signaling cooperative relationships. PMID:27071114
Verbeeck, N; Pillet, J C; Prospert, E; McIntyre, D; Lamy, S
2013-01-01
Renal transplantation is the treatment of choice for end-stage renal disease. When it is not indicated or not immediately feasible, hemodialysis must be performed, preferably via a native arteriovenous fistula in the forearm. A pre-anastomotic occlusion of this type of fistula is often accompanied by thrombosis of its draining vein. In some instances, the venous segment may remain patent thanks to the development of arterial collateral pathways and may even allow efficient dialysis without any clinical syndrome of distal steal. We present the echo-Doppler, magnetic resonance, and angiographic characteristics of three of these collateralized shunts that have remained functional, in one case following percutaneous dilation.
[The organization of system of information support of regional health care].
Konovalov, A A
2014-01-01
A comparative analysis was carried out of alternative architectures for the regional segment of the unified public health care information system, within the framework of the regional program for the modernization of the Nizhniy Novgorod health care system. Based on an analysis of the total cost of ownership of the information system, the author proposes ways of increasing the effectiveness of public investments. An evaluation is given of progress toward the target program indicators and of the dynamics of the basic indicators of informatization of the institutions of the oblast health care system.
Techniques for interpretation of geoid anomalies
NASA Technical Reports Server (NTRS)
Chapman, M. E.
1979-01-01
For purposes of geological interpretation, techniques are developed to compute directly the geoid anomaly over models of density within the earth. Ideal bodies such as line segments, vertical sheets, and rectangles are first used to calculate the geoid anomaly. Realistic bodies are modeled with formulas for two-dimensional polygons and three-dimensional polyhedra. By using Fourier transform methods the two-dimensional geoid is seen to be a filtered version of the gravity field, in which the long-wavelength components are magnified and the short-wavelength components diminished.
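A minimal numerical sketch of that filtering relationship, using the standard flat-earth spectral approximation N(k) = Δg(k)/(γ|k|) rather than the paper's exact formulas, could look like this:

```python
import numpy as np

def geoid_from_gravity(delta_g, dx, gamma=9.81):
    """Flat-earth spectral relation N(k) = Δg(k) / (γ |k|): the geoid is a 1/|k|-filtered
    version of the gravity anomaly, so long wavelengths are magnified and short ones diminished."""
    ny, nx = delta_g.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)        # wavenumbers in rad/m
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    k[0, 0] = np.inf                                  # drop the undefined zero-wavenumber term
    n_hat = np.fft.fft2(delta_g) / (gamma * k)
    return np.real(np.fft.ifft2(n_hat))               # geoid anomaly in metres

# Synthetic 256 x 256 grid of gravity anomalies (m/s^2) on a 1 km spacing (illustrative only).
grid = 1e-5 * np.random.randn(256, 256)
print(geoid_from_gravity(grid, dx=1000.0).shape)
```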
1987-01-01
…with non-emotional material… P5. Students who are able to choose from a 'menu' of topics to provide the general context of the exercise… a smaller version of the videodisc, encoded digitally and capable of storing vast numbers of still frames and text files, presents yet another opportunity for… 37. At the restaurant, Ramiro orders ___. a. chorizo and red wine. b. sardines and wine. c. tortilla and wine. 38. When he is eating at the restaurant, Ramiro ___…
Astronaut Heidemarie M. Stefanyshyn-Piper During STS-115 Training
NASA Technical Reports Server (NTRS)
2005-01-01
Wearing a training version of the shuttle launch and entry suit, STS-115 astronaut and mission specialist, Heidemarie M. Stefanyshyn-Piper, puts the final touches on her suit donning process prior to the start of a water survival training session in the Neutral Buoyancy Laboratory (NBL) near Johnson Space Center. Launched on September 9, 2006, the STS-115 mission continued assembly of the International Space Station (ISS) with the installation of the truss segments P3 and P4.
2014-07-09
…operations, in addition to laser- or microwave-driven logic gates. Essential shuttling operations are splitting and merging of linear ion crystals. It is… from stray charges, laser-induced charging of the trap [19], trap geometry imperfections or residual ponderomotive forces along the trap axis. The… transfer expressed as the mean phonon number Δn̄ = ΔE/(ℏω). We distinguish several regimes of laser-ion interaction: (i) if the vibrational…
Campbell, Ian C.; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L.; Ethier, C. Ross
2015-01-01
The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics. PMID:25589572
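As a two-dimensional, hedged sketch of the vesselness idea (using scikit-image's standard Frangi filter on a hypothetical SHG slice, not the authors' modified 3D filter or their threshold), one might write:

```python
import numpy as np
from skimage import io
from skimage.filters import frangi

# Hypothetical 2D SHG slice of the lamina cribrosa; the paper applies a modified filter in 3D.
img = io.imread("lc_shg_slice.png", as_gray=True).astype(float)

# Frangi vesselness enhances curvilinear, beam-like structures; sigmas span expected beam radii (px).
beamness = frangi(img, sigmas=range(2, 10, 2), black_ridges=False)
beam_mask = beamness > 0.05          # assumed threshold for a binary beam segmentation

connective_tissue_fraction = beam_mask.mean()
print(f"connective tissue area fraction: {connective_tissue_fraction:.2f}")
```

A connective tissue volume fraction estimated this way is the kind of quantity the finite-element modelling above identifies as the main driver of LC biomechanics.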
Wind Evaluation Breadboard electronics and software
NASA Astrophysics Data System (ADS)
Núñez, Miguel; Reyes, Marcos; Viera, Teodora; Zuluaga, Pablo
2008-07-01
WEB, the Wind Evaluation Breadboard, is an Extremely Large Telescope primary mirror simulator, developed with the aim of quantifying the ability of a segmented primary mirror to cope with wind disturbances. This instrument, supported by the European Community (Framework Programme 6, ELT Design Study), is developed by ESO, IAC, MEDIA-ALTRAN, JUPASA and FOGALE. The WEB is a bench of about 20 tons and 7 meters in diameter emulating a segmented primary mirror and its cell, with seven hexagonal segment simulators, including electromechanical support systems. In this paper we present the WEB central control electronics and the software development, which has to interface with: position actuators, auxiliary slave actuators, edge sensors, the azimuth ring, the elevation actuator, a meteorological station and the air-balloon enclosure. The set of subsystems to control is a reduced version of a real telescope segmented primary mirror control system with high real-time performance, but emphasizing development-time efficiency and flexibility, because WEB is a test bench. The paper includes a detailed description of the hardware and software, paying special attention to real-time performance. The hardware is composed of three computers, and the software architecture has been divided into three intercommunicating applications, implemented using LabVIEW over Windows XP and the Pharlap ETS real-time operating system. The edge sensor and position actuator closed loop has a sampling and commanding frequency of 1 kHz.
Enhanced Sensitivity to Subphonemic Segments in Dyslexia: A New Instance of Allophonic Perception
Serniclaes, Willy; Seck, M’ballo
2018-01-01
Although dyslexia can be individuated in many different ways, it has only three discernable sources: a visual deficit that affects the perception of letters, a phonological deficit that affects the perception of speech sounds, and an audio-visual deficit that disturbs the association of letters with speech sounds. However, the very nature of each of these core deficits remains debatable. The phonological deficit in dyslexia, which is generally attributed to a deficit of phonological awareness, might result from a specific mode of speech perception characterized by the use of allophonic (i.e., subphonemic) units. Here we will summarize the available evidence and present new data in support of the “allophonic theory” of dyslexia. Previous studies have shown that the dyslexia deficit in the categorical perception of phonemic features (e.g., the voicing contrast between /t/ and /d/) is due to the enhanced sensitivity to allophonic features (e.g., the difference between two variants of /d/). Another consequence of allophonic perception is that it should also give rise to an enhanced sensitivity to allophonic segments, such as those that take place within a consonant cluster. This latter prediction is validated by the data presented in this paper. PMID:29587419
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garikapati, Venu; Astroza, Sebastian; Pendyala, Ram M.
Travel model systems often adopt a single decision structure that links several activity-travel choices together. The single decision structure is then used to predict activity-travel choices, with those downstream in the decision-making chain influenced by those upstream in the sequence. The adoption of a singular sequential causal structure to depict relationships among activity-travel choices in travel demand model systems ignores the possibility that some choices are made jointly as a bundle as well as the possible presence of structural heterogeneity in the population with respect to decision-making processes. As different segments in the population may adopt and follow different causal decision-making mechanisms when making selected choices jointly, it would be of value to develop simultaneous equations model systems relating multiple endogenous choice variables that are able to identify population subgroups following alternative causal decision structures. Because the segments are not known a priori, they are considered latent and determined endogenously within a joint modeling framework proposed in this paper. The methodology is applied to a national mobility survey data set to identify population segments that follow different causal structures relating residential location choice, vehicle ownership, and car-share and mobility service usage. It is found that the model revealing three distinct latent segments best describes the data, confirming the efficacy of the modeling approach and the existence of structural heterogeneity in decision-making in the population. Future versions of activity-travel model systems should strive to incorporate such structural heterogeneity to better reflect varying decision processes across population subgroups.
Neves, Felipe Silva; Leandro, Danielle Aparecida Barbosa; Silva, Fabiana Almeida da; Netto, Michele Pereira; Oliveira, Renata Maria Souza; Cândido, Ana Paula Carlos
2015-01-01
To analyze the predictive capacity of the vertical segmental tetrapolar bioimpedance apparatus in the detection of excess weight in adolescents, using tetrapolar bioelectrical impedance as a reference. This was a cross-sectional study conducted with 411 students aged between 10 and 14 years, of both genders, enrolled in public and private schools, selected by a simple and stratified random sampling process according to the gender, age, and proportion in each institution. The sample was evaluated by the anthropometric method and underwent a body composition analysis using vertical bipolar, horizontal tetrapolar, and vertical segmental tetrapolar assessment. The ROC curve was constructed based on calculations of sensitivity and specificity for each point of the different possible measurements of body fat. The statistical analysis used Student's t-test, Pearson's correlation coefficient, and McNemar's chi-squared test. Subsequently, the variables were interpreted using SPSS software, version 17.0. Of the total sample, 53.7% were girls and 46.3%, boys. Of the total, 20% and 12.5% had overweight and obesity, respectively. The body segment measurement charts showed high values of sensitivity and specificity and high areas under the ROC curve, ranging from 0.83 to 0.95 for girls and 0.92 to 0.98 for boys, suggesting a slightly higher performance for the male gender. Body fat percentage was the most efficient criterion to detect overweight, while the trunk segmental fat was the least accurate indicator. The apparatus demonstrated good performance to predict excess weight. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
An SPM12 extension for multiple sclerosis lesion segmentation
NASA Astrophysics Data System (ADS)
Roura, Eloy; Oliver, Arnau; Cabezas, Mariano; Valverde, Sergi; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís.; Rovira, Àlex; Lladó, Xavier
2016-03-01
Purpose: Magnetic resonance imaging is nowadays the hallmark for diagnosing multiple sclerosis (MS), which is characterized by white matter lesions. Several approaches have recently been presented to tackle the lesion segmentation problem, but none of them has been accepted as a standard tool in daily clinical practice. In this work we present yet another tool able to automatically segment white matter lesions, outperforming the current state-of-the-art approaches. Methods: This work is an extension of Roura et al. [1], where external and platform-dependent pre-processing libraries (brain extraction, noise reduction and intensity normalization) were required to achieve optimal performance. Here we have updated and included all these required pre-processing steps in a single framework (the SPM software). Therefore, there is no need for external tools to achieve the desired segmentation results. Besides, we have changed the working space from T1w to FLAIR, reducing interpolation errors produced in the registration process from FLAIR to T1w space. Finally, a post-processing constraint based on shape and location has been added to reduce false positive detections. Results: The evaluation of the tool has been done on 24 MS patients. Qualitative and quantitative results are shown for both approaches in terms of lesion detection and segmentation. Conclusion: We have simplified both installation and implementation of the approach, providing a multiplatform tool integrated into the SPM software, which relies only on T1w and FLAIR images. With this new version we have reduced the computation time of the previous approach while maintaining the performance.
Khanal, Laxman; Shah, Sandip; Koirala, Sarun
2017-03-01
The length of the long bones is an important contributor to estimating one of the four elements of forensic anthropology, i.e., the stature of the individual. Since the physical characteristics of individuals differ among population groups, population-specific studies are needed to estimate the total length of the femur from its segment measurements. Since the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done on 60 dry femora (30 from each side), without sex determination, in an anthropometry laboratory. Along with the maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape, and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. The Statistical Package for the Social Sciences (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the differences were not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated with, and found to have a linear relationship with, the femoral length. With the help of the regression equations, femoral length can be calculated from the segmental measurements, and the femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
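A toy sketch of deriving and applying such a regression equation (with entirely made-up measurements, not the study's data) could be:

```python
import numpy as np

# Hypothetical example: maximum femoral length (mm) regressed on one proximal
# segment measurement (mm); the values below are invented for illustration only.
segment = np.array([88.0, 91.5, 95.0, 97.2, 90.1, 93.4])
femur_len = np.array([430.0, 441.0, 455.0, 462.0, 436.0, 448.0])

slope, intercept = np.polyfit(segment, femur_len, deg=1)   # least-squares line
r = np.corrcoef(segment, femur_len)[0, 1]                  # correlation coefficient

print(f"femoral length = {intercept:.1f} + {slope:.2f} * segment  (r = {r:.2f})")
print("predicted length for a 94 mm fragment:", intercept + slope * 94)
```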
On the Structure of Earth Science Data Collections
NASA Astrophysics Data System (ADS)
Barkstrom, B. R.
2009-12-01
While there has been substantial work in the IT community regarding metadata and file identifier schemas, there appears to be relatively little work on the organization of the file collections that constitute the preponderance of Earth science data. One symptom of this difficulty appears in nomenclature describing collections: the terms 'Data Product,' 'Data Set,' and 'Version' are overlaid with multiple meanings between communities. A particularly important aspect of this lack of standardization appears when the community attempts to develop a schema for data file identifiers. There are four candidate families of identifiers:
● Randomly assigned identifiers, such as GUIDs or UUIDs,
● Segmented numerical identifiers, such as OIDs or the prefixes for DOIs,
● Extensible URL-based identifiers, such as URNs, PURL, ARK, and similar schemas,
● Text-based identifiers based on citations for papers and books, such as those suggested for the International Polar Year (IPY) citations.
Unfortunately, these schema families appear to be devoid of content based on the actual structures of Earth science data collections. In this paper, we consider an organization based on an industrial production paradigm that appears to provide the preponderance of Earth science data from satellites and in situ observations. This paradigm produces a hierarchical collection structure, similar to one discussed in Barkstrom [2003: Lecture Notes in Computer Science, 2649, pp. 118-133]. In this organization, three key collection types are
● a Data Product, which is a collection of files that have similar key parameters and included data time interval,
● a Data Set, which is a collection of files within a Data Product that comes from a specified set of Data Sources,
● a Data Set Version, which is a collection of files within a Data Set for which the data producer has attempted to ensure error homogeneity.
Within a Data Set Version, files appear as a time series of instances that may be identified by the starting time of the data in the file. For data intended for climate uses, it seems appropriate to state this time in terms of Astronomical Julian Date, which is a long-standing international standard that provides continuity between current observations and paleo-climatic observations. Because this collection structure is hierarchical, it could be used by either of the two hierarchical identifier schema families, although it is probably easier to use with the OID/DOI family. This hierarchical collection structure fits into the hierarchical structure of Archival Information Packages (AIPs) identified in the Open Archival Information Systems (OAIS) Reference Model. In that model, AIPs are subdivided into Archival Information Units (AIUs), which describe individual files, or Archival Information Collections (AICs). The latter can be hierarchically nested, leading to an OAIS RM-consistent collection structure that does not appear clearly in other metadata standards. This paper will also discuss the connection between these collection categories and other metadata, as well as the possible need for other organizational schemas to capture the full range of Earth science data collection structures.
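As a small illustration of using Astronomical Julian Date as the time key for file instances within a Data Set Version (a generic conversion via the Unix epoch, not a schema prescribed by the paper), consider:

```python
from datetime import datetime, timezone

def julian_date(dt):
    """Astronomical Julian Date of a UTC datetime, using the Unix-epoch offset JD 2440587.5."""
    return dt.timestamp() / 86400.0 + 2440587.5

# Hypothetical data start time for one file instance within a Data Set Version.
start = datetime(2009, 12, 14, 6, 30, tzinfo=timezone.utc)
print(f"file time key: {julian_date(start):.5f}")
```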
Video Comprehensibility and Attention in Very Young Children
Pempek, Tiffany A.; Kirkorian, Heather L.; Richards, John E.; Anderson, Daniel R.; Lund, Anne F.; Stevens, Michael
2010-01-01
Earlier research established that preschool children pay less attention to television that is sequentially or linguistically incomprehensible. This study determines the youngest age for which this effect can be found. One-hundred and three 6-, 12-, 18-, and 24-month-olds’ looking and heart rate were recorded while they watched Teletubbies, a television program designed for very young children. Comprehensibility was manipulated by either randomly ordering shots or reversing dialogue to become backward speech. Infants watched one normal segment and one distorted version of the same segment. Only 24-month-olds, and to some extent 18-month-olds, distinguished between normal and distorted video by looking for longer durations towards the normal stimuli. The results suggest that it may not be until the middle of the second year that children demonstrate the earliest beginnings of comprehension of video as it is currently produced. PMID:20822238
Cache and energy efficient algorithms for Nussinov's RNA Folding.
Zhao, Chunchun; Sahni, Sartaj
2017-12-06
An RNA folding/RNA secondary structure prediction algorithm determines the non-crossing/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel i7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
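For readers unfamiliar with the underlying recurrence, a straightforward (unoptimized) Python version of Nussinov's O(n^3) dynamic program is sketched below; it corresponds to the Classical traversal order, not the cache-efficient ByRow, ByRowSegment, or ByBox variants, and the minimum loop length and allowed pairs (including GU wobble) are common conventions rather than details taken from the paper.

```python
def nussinov(seq, min_loop=3):
    """Classical Nussinov DP: maximum number of non-crossing complementary base pairs."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):              # fill by increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):                # bifurcation into two subproblems
                best = max(best, dp[i][k] + dp[k + 1][j])
            best = max(best, dp[i][j - 1])           # j left unpaired
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))   # -> 3 base pairs
```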
Wooten, H. Omar; Green, Olga; Li, Harold H.; Liu, Shi; Li, Xiaoling; Rodriguez, Vivian; Mutic, Sasa; Kashani, Rojano
2016-01-01
The aims of this study were to develop a method for automatic and immediate verification of treatment delivery after each treatment fraction in order to detect and correct errors, and to develop a comprehensive daily report which includes delivery verification results, daily image‐guided radiation therapy (IGRT) review, and information for weekly physics reviews. After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a commercial MRI‐guided radiotherapy treatment machine, we designed a procedure to use 1) treatment plan files, 2) delivery log files, and 3) beam output information to verify the accuracy and completeness of each daily treatment delivery. The procedure verifies the correctness of delivered treatment plan parameters including beams, beam segments and, for each segment, the beam‐on time and MLC leaf positions. For each beam, composite primary fluence maps are calculated from the MLC leaf positions and segment beam‐on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. A daily treatment delivery report is designed to include all required information for IGRT and weekly physics reviews including the plan and treatment fraction information, daily beam output information, and the treatment delivery verification results. A computer program was developed to implement the proposed procedure of the automatic delivery verification and daily report generation for an MRI guided radiation therapy system. The program was clinically commissioned. Sensitivity was measured with simulated errors. The final version has been integrated into the commercial version of the treatment delivery system. The method automatically verifies the EBRT treatment deliveries and generates the daily treatment reports. Already in clinical use for over one year, it is useful to facilitate delivery error detection, and to expedite physician daily IGRT review and physicist weekly chart review. PACS number(s): 87.55.km PMID:27167269
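A highly simplified sketch of the verification idea (one-dimensional leaf pairs, arbitrary units, a hypothetical tolerance; not the commissioned clinical implementation) might be:

```python
import numpy as np

def segment_fluence(leaf_left, leaf_right, beam_on_time, grid_x):
    """Primary fluence of one segment: the opening between each leaf pair, weighted by beam-on time."""
    fluence = np.zeros((len(leaf_left), len(grid_x)))
    for row, (lo, hi) in enumerate(zip(leaf_left, leaf_right)):
        fluence[row, (grid_x >= lo) & (grid_x <= hi)] = beam_on_time
    return fluence

def delivery_check(planned_segments, delivered_segments, grid_x, tol=0.02):
    """Compare composite planned vs. delivered (log-file) fluence and report error statistics."""
    plan = sum(segment_fluence(*s, grid_x) for s in planned_segments)
    log = sum(segment_fluence(*s, grid_x) for s in delivered_segments)
    diff = log - plan
    rel = np.abs(diff).max() / plan.max()
    return {"max_abs_diff": float(np.abs(diff).max()),
            "mean_diff": float(diff.mean()),
            "within_tolerance": bool(rel <= tol)}

# Hypothetical one-beam example: (leaf_left positions, leaf_right positions, beam-on time).
grid = np.linspace(-10, 10, 41)
plan = [([-5.0, -5.0], [5.0, 5.0], 10.0)]
log = [([-5.1, -5.0], [5.0, 5.1], 10.0)]
print(delivery_check(plan, log, grid))
```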
Coronagraphic Wavefront Control for the ATLAST-9.2m Telescope
NASA Technical Reports Server (NTRS)
Lyon, RIchard G.; Oegerle, William R.; Feinberg, Lee D.; Bolcar, Matthew R.; Dean, Bruce H.; Mosier, Gary E.; Postman, Marc
2010-01-01
The Advanced Technology for Large Aperture Space Telescope (ATLAST) concept was assessed as one of the NASA Astrophysics Strategic Mission Concepts (ASMC) studies. Herein we discuss the 9.2-meter diameter segmented-aperture version and its wavefront sensing and control (WFSC) with regard to coronagraphic detection and spectroscopic characterization of exoplanets. The WFSC would consist of at least two levels of sensing and control: (i) an outer, coarser level of sensing and control to phase and control the segments and secondary mirror in a manner similar to the James Webb Space Telescope but operating at higher temporal bandwidth, and (ii) an inner, coronagraphic-instrument-based, fine level of sensing and control for both amplitude and wavefront errors operating at higher temporal bandwidths. The outer loop would control rigid-body actuators on the primary and secondary mirrors, while the inner loop would control one or more segmented deformable mirrors to suppress the starlight within the coronagraphic field of view. Herein we discuss the visible nulling coronagraph (VNC) and the requirements it levies on wavefront sensing and control, and show the results of closed-loop simulations to assess performance and evaluate the trade space of system-level stability versus control bandwidth.
3D marker-controlled watershed for kidney segmentation in clinical CT exams.
Wieclawek, Wojciech
2018-02-27
Image segmentation is an essential and non-trivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques to visualize the interior of a patient's body. Among different computer-aided diagnostic systems, the applications dedicated to kidney segmentation represent a relatively small group. In addition, literature solutions are verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation. This approach is designed for large database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and the most complex step in the current proposition is an automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform. It consists of morphological operations and shape analysis. The implementation is conducted in a MATLAB environment, Version 2017a, using, among others, the Image Processing Toolbox. 170 clinical CT abdominal studies have been subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations have been used as a gold standard. Among the 67 delineated medical cases, 62 cases are 'Very good', whereas only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89% respectively. All 170 medical cases (with and without outlines) have been classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies was developed to compete with commonly known solutions. The algorithm gives promising results, which were confirmed during a validation procedure done on a relatively large database, including 170 CTs with both physiological and pathological cases.
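As a generic Python sketch of a 3D marker-controlled watershed using SciPy and scikit-image (the marker-generation step, which is the paper's core contribution, is assumed to be given, and the file names and HU threshold below are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Hypothetical inputs: a 3D CT volume (HU values) and a boolean mask of automatically
# generated kidney seed regions; automatic 3D marker generation itself is not shown here.
ct = np.load("abdominal_ct.npy")
marker_mask = np.load("kidney_markers.npy")

markers, n_seeds = ndi.label(marker_mask)                 # one integer label per kidney seed
background = ct < -200                                    # assumed HU threshold for background seeds
markers[background & (markers == 0)] = n_seeds + 1

gradient = ndi.generic_gradient_magnitude(ct.astype(float), ndi.sobel)
labels = watershed(gradient, markers)                     # 3D marker-controlled watershed

kidney_mask = (labels >= 1) & (labels <= n_seeds)         # keep only kidney-seeded basins
print("segmented kidney volume (voxels):", int(kidney_mask.sum()))
```

A real pipeline would follow this with the morphological clean-up and shape analysis described above.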
SSTAR, a Stand-Alone Easy-To-Use Antimicrobial Resistance Gene Predictor.
de Man, Tom J B; Limbago, Brandi M
2016-01-01
We present the easy-to-use Sequence Search Tool for Antimicrobial Resistance, SSTAR. It combines a locally executed BLASTN search against a customizable database with an intuitive graphical user interface for identifying antimicrobial resistance (AR) genes from genomic data. Although the database is initially populated from a public repository of acquired resistance determinants (i.e., ARG-ANNOT), it can be customized for particular pathogen groups and resistance mechanisms. For instance, outer membrane porin sequences associated with carbapenem resistance phenotypes can be added, and known intrinsic mechanisms can be included. A unique feature of this tool is the ability to easily detect putative new alleles and truncated versions of existing AR genes. Variants and potential new alleles are brought to the attention of the user for further investigation. For instance, SSTAR is able to identify modified or truncated versions of porins, which may be of great importance in carbapenemase-negative carbapenem-resistant Enterobacteriaceae. SSTAR is written in Java and is therefore platform independent and compatible with both Windows and Unix operating systems. SSTAR and its manual, which includes a simple installation guide, are freely available from https://github.com/tomdeman-bio/Sequence-Search-Tool-for-Antimicrobial-Resistance-SSTAR-. IMPORTANCE Whole-genome sequencing (WGS) is quickly becoming a routine method for identifying genes associated with antimicrobial resistance (AR). However, for many microbiologists, the use and analysis of WGS data present a substantial challenge. We developed SSTAR, software with a graphical user interface that enables the identification of known AR genes from WGS and has the unique capacity to easily detect new variants of known AR genes, including truncated protein variants. Current software solutions do not notify the user when genes are truncated and, therefore, likely nonfunctional, which makes phenotype predictions less accurate. SSTAR users can apply any AR database of interest as a reference comparator and can manually add genes that impact resistance, even if such genes are not resistance determinants per se (e.g., porins and efflux pumps).
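A minimal sketch of the kind of check SSTAR performs conceptually is shown below: run a local BLASTN search of an assembly against an AR gene database and flag hits whose alignment covers much less than the reference gene length (putative truncations). The file names, database name and 90% coverage cutoff are illustrative assumptions, not SSTAR's actual defaults or code (SSTAR itself is written in Java).

import subprocess

def find_ar_genes(assembly_fasta, ar_db="arg_annot_db", min_len_frac=0.90):
    fields = "qseqid sseqid pident length slen"
    result = subprocess.run(
        ["blastn", "-query", assembly_fasta, "-db", ar_db,
         "-outfmt", "6 " + fields, "-evalue", "1e-10"],
        capture_output=True, text=True, check=True)
    hits = []
    for line in result.stdout.strip().splitlines():
        qseqid, sseqid, pident, length, slen = line.split("\t")
        truncated = int(length) < min_len_frac * int(slen)   # alignment much shorter than reference
        hits.append({"contig": qseqid, "gene": sseqid,
                     "identity": float(pident), "putative_truncation": truncated})
    return hits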
Rapid One-step Enzymatic Synthesis and All-aqueous Purification of Trehalose Analogues.
Meints, Lisa M; Poston, Anne W; Piligian, Brent F; Olson, Claire D; Badger, Katherine S; Woodruff, Peter J; Swarts, Benjamin M
2017-02-17
Chemically modified versions of trehalose, or trehalose analogues, have applications in biology, biotechnology, and pharmaceutical science, among other fields. For instance, trehalose analogues bearing detectable tags have been used to detect Mycobacterium tuberculosis and may have applications as tuberculosis diagnostic imaging agents. Hydrolytically stable versions of trehalose are also being pursued due to their potential for use as non-caloric sweeteners and bioprotective agents. Despite the appeal of this class of compounds for various applications, their potential remains unfulfilled due to the lack of a robust route for their production. Here, we report a detailed protocol for the rapid and efficient one-step biocatalytic synthesis of trehalose analogues that bypasses the problems associated with chemical synthesis. By utilizing the thermostable trehalose synthase (TreT) enzyme from Thermoproteus tenax, trehalose analogues can be generated in a single step from glucose analogues and uridine diphosphate glucose in high yield (up to quantitative conversion) in 15-60 min. A simple and rapid non-chromatographic purification protocol, which consists of spin dialysis and ion exchange, can deliver many trehalose analogues of known concentration in aqueous solution in as little as 45 min. In cases where unreacted glucose analogue still remains, chromatographic purification of the trehalose analogue product can be performed. Overall, this method provides a "green" biocatalytic platform for the expedited synthesis and purification of trehalose analogues that is efficient and accessible to non-chemists. To exemplify the applicability of this method, we describe a protocol for the synthesis, all-aqueous purification, and administration of a trehalose-based click chemistry probe to mycobacteria, all of which took less than 1 hour and enabled fluorescence detection of mycobacteria. In the future, we envision that, among other applications, this protocol may be applied to the rapid synthesis of trehalose-based probes for tuberculosis diagnostics. For instance, short-lived radionuclide-modified trehalose analogues (e.g., ¹⁸F-modified trehalose) could be used for advanced clinical imaging modalities such as positron emission tomography-computed tomography (PET-CT).
Metaheuristics for the dynamic stochastic dial-a-ride problem with expected return transports
Schilde, M.; Doerner, K.F.; Hartl, R.F.
2011-01-01
The problem of transporting patients or elderly people has been widely studied in the literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15%. Moreover, improvements of up to 41% can be achieved for some test instances. PMID:23543641
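For readers unfamiliar with the metaheuristics named above, the following is a generic variable neighborhood search (VNS) skeleton of the kind those dynamic variants build on; the route representation, shaking moves, local search and cost function are placeholders, not the authors' implementation.

def vns(initial_routes, cost, shake_moves, local_search, max_iter=1000):
    """shake_moves: list of increasingly disruptive neighborhood operators."""
    best = initial_routes
    best_cost = cost(best)
    k = 0
    for _ in range(max_iter):
        candidate = local_search(shake_moves[k](best))   # shake, then improve locally
        cand_cost = cost(candidate)
        if cand_cost < best_cost:      # improvement: accept and restart neighborhoods
            best, best_cost, k = candidate, cand_cost, 0
        else:                          # no improvement: try the next neighborhood
            k = (k + 1) % len(shake_moves)
    return best, best_cost

In the stochastic variants, cost would additionally average over sampled scenarios of future (return-transport) requests.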
EarthCollab, building geoscience-centric implementations of the VIVO semantic software suite
NASA Astrophysics Data System (ADS)
Rowan, L. R.; Gross, M. B.; Mayernik, M. S.; Daniels, M. D.; Krafft, D. B.; Kahn, H. J.; Allison, J.; Snyder, C. B.; Johns, E. M.; Stott, D.
2017-12-01
EarthCollab, an EarthCube Building Block project, is extending an existing open-source semantic web application, VIVO, to enable the exchange of information about scientific researchers and resources across institutions. EarthCollab is a collaboration between UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy, The Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory, and Cornell University. VIVO has been implemented by more than 100 universities and research institutions to highlight research and institutional achievements. This presentation will discuss benefits and drawbacks of working with and extending open source software. Some extensions include plotting georeferenced objects on a map, a mobile-friendly theme, integration of faceting via Elasticsearch, extending the VIVO ontology to capture geoscience-centric objects and relationships, and the ability to cross-link between VIVO instances. Most implementations of VIVO gather information about a single organization. The EarthCollab project created VIVO extensions to enable cross-linking of VIVO instances to reduce the amount of duplicate information about the same people and scientific resources and to enable dynamic linking of related information across VIVO installations. As the list of customizations grows, so does the effort required to maintain compatibility between the EarthCollab forks and the main VIVO code. For example, dozens of libraries and dependencies were updated prior to the VIVO v1.10 release, which introduced conflicts in the EarthCollab cross-linking code. The cross-linking code has, however, been developed to enable data sharing across different VIVO versions by using a JSON output schema that is standardized across versions. We will outline lessons learned in working with VIVO and its open source dependencies, which include Jena, Solr, Freemarker, and jQuery, and discuss future work by EarthCollab, which includes refining the cross-linking VIVO capabilities by continued integration of persistent and unique identifiers to enable automated lookup and matching across institutional VIVOs.
Modeling and validating HL7 FHIR profiles using semantic web Shape Expressions (ShEx).
Solbrig, Harold R; Prud'hommeaux, Eric; Grieve, Grahame; McKenzie, Lloyd; Mandel, Joshua C; Sharma, Deepak K; Jiang, Guoqian
2017-03-01
HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language. FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation. Create a FHIR to ShEx model transformation and assess its ability to describe and validate FHIR RDF data. We created the methods and tools that generate the ShEx schemas modeling the FHIR to RDF specification being developed by HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in the description and validation of FHIR to RDF transformations. The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and FHIR example data to RDF transformations were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standards for Trial Use (STU 3) Ballot version. We were able to uncover unresolved issues in the FHIR to RDF specification and detect 10 types of errors and root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 Ballot version since September 2016. ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and Web Ontology Language (OWL) for semantic validation. ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists of the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M'))) and O(2^(n−M')), respectively, where n is the number of variables and M' the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
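The claimed advantage over the best classical bound for 3-regular graphs follows from comparing the exponents of the two bounds quoted above:

O\!\left(2^{n/4}\right) = O\!\left(2^{24n/96}\right)
\quad\text{vs.}\quad
O\!\left(2^{31n/96}\right),
\qquad
\frac{24n}{96} < \frac{31n}{96},

so the quantum bound grows with a strictly smaller exponent; the gap between the two bounds corresponds to a factor of roughly 2^(7n/96), about 1.05^n.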
Aust, Ulrike; Braunöder, Elisabeth
2015-02-01
The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task, in which category membership of every stimulus had to be learned individually, and in a rule-based task, in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the two untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Music in the moment? Revisiting the effect of large-scale structures.
Lalitte, P; Bigand, E
2006-12-01
The psychological relevance of large-scale musical structures has been a matter of debate in the music community. This issue was investigated with a method that allows assessing listeners' detection of musical incoherencies in normal and scrambled versions of popular and contemporary music pieces. Musical excerpts were segmented into 28 or 29 chunks. In the scrambled version, the temporal order of these chunks was altered with the constraint that the transitions between two chunks never created local acoustical and musical disruptions. Participants were required (1) to detect on-line incoherent linking of chunks, (2) to rate aesthetic quality of pieces, and (3) to evaluate their overall coherence. The findings indicate a moderate sensitivity to large-scale musical structures for popular and contemporary music in both musically trained and untrained listeners. These data are discussed in light of current models of music cognition.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for the solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available, so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions, and one-dimensional search options include polynomial interpolation and the Golden Section method. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters, such as convergence criteria, and the user is given a simple means to override these if desired.
Lerner-Ellis, Jordan; Wang, Marina; White, Shana; Lebo, Matthew S
2015-07-01
The Canadian Open Genetics Repository is a collaborative effort for the collection, storage, sharing and robust analysis of variants reported by medical diagnostics laboratories across Canada. As clinical laboratories adopt modern genomics technologies, the need for this type of collaborative framework is increasingly important. A survey to assess existing protocols for variant classification and reporting was delivered to clinical genetics laboratories across Canada. Based on feedback from this survey, a variant assessment tool was made available to all laboratories. Each participating laboratory was provided with an instance of GeneInsight, software featuring versioning and approval processes for variant assessments and interpretations and allowing variant data to be shared between instances. Guidelines were established for sharing data among clinical laboratories, and in the final outreach phase data will be made readily available to patient advocacy groups for general use. The survey demonstrated the need for improved standardisation and data sharing across the country. A variant assessment template was made available to the community to aid with standardisation. Instances of the GeneInsight tool were provided to clinical diagnostic laboratories across Canada for the purpose of uploading, transferring, accessing and sharing variant data. As an ongoing endeavour and a permanent resource, the Canadian Open Genetics Repository aims to serve as a focal point for the collaboration of Canadian laboratories with other countries in the development of tools that take full advantage of laboratory data in diagnosing, managing and treating genetic diseases. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
NASA Astrophysics Data System (ADS)
Kingsbury, Lana K.; Atcheson, Paul D.
2004-10-01
The Northrop-Grumman/Ball/Kodak team is building the JWST observatory that will be launched in 2011. To develop the flight wavefront sensing and control (WFS&C) algorithms and software, Ball is designing and building a 1-meter diameter, functionally accurate version of the JWST optical telescope element (OTE). This testbed telescope (TBT) will incorporate the same optical element control capability as the flight OTE. The secondary mirror will be controlled by a 6 degree of freedom (dof) hexapod, and each of the 18 segmented primary mirror assemblies will have 6 dof hexapod control as well as radius of curvature adjustment capability. In addition to the highly adjustable primary and secondary mirrors, the TBT will include a rigid tertiary mirror, 2 fold mirrors (to direct light into the TBT) and a very stable supporting structure. The configured residual wavefront error of the total telescope system will be better than 175 nm RMS, double pass. The primary and secondary mirror hexapod assemblies enable 5 nm piston resolution, 0.0014 arcsec tilt resolution, 100 nm translation resolution, and 0.04497 arcsec clocking resolution. The supporting structure (specifically the secondary mirror support structure) is designed to ensure that the primary mirror segments will not change their despace position relative to the secondary mirror (spaced > 1 meter apart) by more than 500 nm within a one-hour period of ambient clean room operation.
IMU-to-Segment Assignment and Orientation Alignment for the Lower Body Using Deep Learning
2018-01-01
Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities. This is due to its significant role in, for instance, mobile health systems, sports and human-computer interaction. In sensor-based activity recognition, one of the major issues for obtaining reliable results is the sensor placement/assignment on the body. For inertial motion capture (joint kinematics estimation) and analysis, the IMU-to-segment (I2S) assignment and alignment are central issues in obtaining biomechanical joint angles. Existing approaches for I2S assignment usually rely on hand-crafted features and shallow classification approaches (e.g., support vector machines), with no agreement regarding the most suitable features for the assignment task. Moreover, estimating the complete orientation alignment of an IMU relative to the segment it is attached to using a machine learning approach has not been shown in the literature so far. This is likely due to the high amount of training data that would have to be recorded to suitably represent possible IMU alignment variations. In this work, we propose online approaches for solving the assignment and alignment tasks for an arbitrary number of IMUs with respect to a biomechanical lower body model using a deep learning architecture and windows of 128 gyroscope and accelerometer data samples. For this, we combine convolutional neural networks (CNNs) for local filter learning with long short-term memory (LSTM) recurrent networks as well as gated recurrent units (GRUs) for learning time-dynamic features. The assignment task is cast as a classification problem, while the alignment task is cast as a regression problem. In this framework, we demonstrate the feasibility of augmenting a limited amount of real IMU training data with simulated alignment variations and IMU data for improving the recognition/estimation accuracies. With the proposed approaches and final models we achieved 98.57% average accuracy over all segments for the I2S assignment task (100% when excluding left/right switches) and an average median angle error over all segments and axes of 2.91° for the I2S alignment task. PMID:29351262
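A minimal PyTorch sketch of this architecture family is given below: a 1D CNN over 128-sample windows of 6-channel IMU data (3-axis gyroscope plus 3-axis accelerometer), a recurrent layer, a classification head for the I2S assignment and a regression head for the orientation alignment (here as a unit quaternion). Layer sizes, the number of segments and the quaternion parameterization are illustrative assumptions, not the authors' exact models.

import torch
import torch.nn as nn

class I2SNet(nn.Module):
    def __init__(self, n_segments=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU())
        self.rnn = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.assign_head = nn.Linear(64, n_segments)   # which body segment the IMU sits on
        self.align_head = nn.Linear(64, 4)             # IMU-to-segment orientation offset

    def forward(self, x):                   # x: (batch, 128, 6) raw gyro+accel window
        h = self.conv(x.transpose(1, 2))    # -> (batch, 64, 128) local filter features
        _, h_n = self.rnn(h.transpose(1, 2))
        feat = h_n[-1]                      # last hidden state summarizes the window
        quat = nn.functional.normalize(self.align_head(feat), dim=-1)
        return self.assign_head(feat), quat

logits, quat = I2SNet()(torch.randn(8, 128, 6))   # assignment logits and unit quaternions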
Kohlberg, Gavriel D; Mancuso, Dean M; Chari, Divya A; Lalwani, Anil K
2015-01-01
Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Compared to the original song, modified versions containing only 1-3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.
NASA Astrophysics Data System (ADS)
Gurney, K. R.; Liang, J.; Patarasuk, R.; O'Keeffe, D.; Newman, S.; Rao, P.; Hutchins, M.; Huang, J.
2016-12-01
The Los Angeles Basin represents one of the largest metropolitan areas in the United States and is home to the Megacity Carbon Project, a multi-institutional effort led by NASA JPL to understand the total carbon budget of the Los Angeles Basin. A key component of that effort is the Hestia bottom-up fossil fuel CO2 (FFCO2) emissions data product, which quantifies FFCO2 every hour down to the spatial scale of individual buildings and road segments. This data product has undergone considerable revision in the last year, and the version 2.0 data product is now complete, covering the 2011-2014 time period. In this presentation, we highlight the advances in Hestia version 2.0, including the improvements to onroad, building and industrial emissions. We make comparisons to the independently reported EPA greenhouse gas reporting program and to in-situ atmospheric measurements of CO2 at two monitoring locations in Pasadena and Palos Verdes. We provide an analysis of the socioeconomic drivers of emissions in the building and onroad transportation sectors across the domain, highlighting hotspots of emissions and spatially specific opportunities for reductions.
gemcWeb: A Cloud Based Nuclear Physics Simulation Software
NASA Astrophysics Data System (ADS)
Markelon, Sam
2017-09-01
gemcWeb allows users to run nuclear physics simulations from the web. Because it is completely device agnostic, scientists can run simulations from anywhere with an Internet connection. With a full user system, gemcWeb allows users to revisit and revise their projects, and to share configurations and results with collaborators. gemcWeb is based on the simulation software gemc, which is in turn based on standard Geant4. gemcWeb requires no C++, gemc, or Geant4 knowledge. A simple but powerful GUI allows users to configure their projects from geometries and configurations stored on the deployment server. Simulations are then run on the server, with results being posted to the user and then securely stored. Python-based and open-source, the main version of gemcWeb is hosted internally at Jefferson National Laboratory and used by the CLAS12 and Electron-Ion Collider Project groups. However, as the software is open-source and hosted as a GitHub repository, an instance can be deployed on the open web or on any institution's intranet. An instance can be configured to host experiments specific to an institution, and the code base can be modified by any individual or group. Special thanks to: Maurizio Ungaro, PhD, creator of gemc; Markus Diefenthaler, PhD, advisor; and Kyungseon Joo, PhD, advisor.
Learning to remember by learning to speak.
Ettlinger, Marc; Lanter, Jennifer; Van Pay, Craig K
2014-02-01
Does the language we speak affect the way we think, and if so, how? Previous researchers have considered this question by exploring the cognitive abilities of speakers of different languages. In the present study, we looked for evidence of linguistic relativity within a language and within participants by looking at memory recall for monolingual children ages 3-5 years old. At this age, children use grammatical markers with variable fluency depending on ease of articulation: Children produce the correct plural more often for vowel-final words (e.g., shoes) than plosive-final words (e.g., socks) and for plosive-final words more often than sibilant-final words (e.g., dresses). We examined whether these phonological principles governing plural production also influence children's recall of the plurality of seen objects. Fifty children were shown pictures of familiar objects presented as either singular or multiple instances. After a break, they were required to indicate whether they saw the singular- or multiple-instance version of each picture. Results show that children's memory for object plurality does depend on the phonology of the word. Subsequent tests of each child's production ability showed a correlation between a child's memory and his or her ability to articulate novel plurals with the same phonological properties. That is, what children can say impacts what they can remember.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele; Borovikov, Anna Y.; Suarez, Max
1999-01-01
A massively parallel ensemble Kalman filter (EnKF) is used to assimilate temperature data from the TOGA/TAO array and altimetry from TOPEX/POSEIDON into a Pacific basin version of the NASA Seasonal-to-Interannual Prediction Project (NSIPP) quasi-isopycnal ocean general circulation model. The EnKF is an approximate Kalman filter in which the error-covariance propagation step is modeled by the integration of multiple instances of a numerical model. An estimate of the true error covariances is then inferred from the distribution of the ensemble of model state vectors. This implementation of the filter takes advantage of the inherent parallelism in the EnKF algorithm by running all the model instances concurrently. The Kalman filter update step also occurs in parallel by having each processor process the observations that occur in the region of physical space for which it is responsible. The massively parallel data assimilation system is validated by withholding some of the data and then quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The distributions of the forecast and analysis error covariances predicted by the EnKF are also examined.
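For reference, the analysis (update) step that such a system parallelizes can be sketched in a few lines of NumPy as a stochastic EnKF with perturbed observations; the dimensions and the linear observation operator H below are illustrative, and the model propagation of each ensemble member between updates is omitted.

import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)             # P H^T estimated from the ensemble
    K = P_HT @ np.linalg.inv(H @ P_HT + R)         # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return X + K @ (Y - H @ X)                     # analysis ensemble

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))                  # 50 state variables, 20 members (toy sizes)
H = np.eye(5, 50); R = 0.1 * np.eye(5); y = rng.standard_normal(5)
Xa = enkf_analysis(X, y, H, R, rng)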
Asymptotics of eigenvalues and eigenvectors of Toeplitz matrices
NASA Astrophysics Data System (ADS)
Böttcher, A.; Bogoya, J. M.; Grudsky, S. M.; Maximenko, E. A.
2017-11-01
Analysis of the asymptotic behaviour of the spectral characteristics of Toeplitz matrices as the dimension of the matrix tends to infinity has a history of over 100 years. For instance, quite a number of versions of Szegő's theorem on the asymptotic behaviour of eigenvalues and of the so-called strong Szegő theorem on the asymptotic behaviour of the determinants of Toeplitz matrices are known. Starting in the 1950s, the asymptotics of the maximum and minimum eigenvalues were actively investigated. However, investigation of the individual asymptotics of all the eigenvalues and eigenvectors of Toeplitz matrices started only quite recently: the first papers on this subject were published in 2009-2010. A survey of this new field is presented here. Bibliography: 55 titles.
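The simplest banded example makes the flavor of these individual-eigenvalue results concrete: for the symbol a(θ) = 2 − 2cos θ, the n×n Toeplitz matrix is the tridiagonal second-difference matrix, whose eigenvalues are exactly samples of the symbol, λ_j = a(jπ/(n+1)). The following NumPy/SciPy check illustrates only this classical special case, not the general results of the survey.

import numpy as np
from scipy.linalg import toeplitz

n = 200
col = np.zeros(n)
col[0], col[1] = 2.0, -1.0
Tn = toeplitz(col)                                          # tridiagonal (2, -1) Toeplitz matrix
lam = np.sort(np.linalg.eigvalsh(Tn))
j = np.arange(1, n + 1)
lam_symbol = np.sort(2 - 2 * np.cos(j * np.pi / (n + 1)))   # samples of the symbol
print(np.max(np.abs(lam - lam_symbol)))                     # agreement to machine precision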
NASA Technical Reports Server (NTRS)
1997-01-01
Toy designers at Hasbro, Inc. wanted to create a foam glider that a child could fly with little knowledge of aeronautics. But early in its development, the Aero Nerf gliders had one critical problem: they didn't fly so well. Through NASA's Northeast Regional Technology Transfer Center, Hasbro was linked with aeronautical experts at Langley Research Center. The engineers provided information about how wing design and shape are integral to a glider's performance. The Hasbro designers received from NASA not only technical guidance but a hands-on tutorial on the physics of designing and flying gliders. Several versions of the Nerf glider were realized from the collaboration. For instance, the Super Soaring Glider can make long-range, high performance flights, while the Ultra-Stunt Glider is ideal for performing aerial acrobatics.
Mean-Field-Game Model for Botnet Defense in Cyber-Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolokoltsov, V. N., E-mail: v.kolokoltsov@warwick.ac.uk; Bensoussan, A.
We initiate the analysis of the response of computer owners to various offers of defence systems against a cyber-hacker (for instance, a botnet attack), as a stochastic game of a large number of interacting agents. We introduce a simple mean-field game that models their behavior. It takes into account both the random process of the propagation of the infection (controlled by the botnet herder) and the decision-making process of customers. Its stationary version turns out to be exactly solvable (but not at all trivial) under the additional natural assumption that the execution time of the customers' decisions (say, switching the defence system on or off) is much faster than the infection rates.
A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.
Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan
2006-05-21
At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
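For contrast with the metaheuristic discussed above, the following naive reference decomposition peels off unit-intensity apertures whose open part in each row is a single contiguous leaf opening; it only illustrates what segmenting an intensity map into apertures means, ignores MLC constraints, and is not the difference-matrix metaheuristic of the paper.

import numpy as np

def peel_apertures(intensity_map):
    """Decompose an integer intensity map into unit-intensity, row-convex apertures."""
    m = np.array(intensity_map, dtype=int)
    apertures = []
    while m.any():
        aperture = np.zeros_like(m)
        for r in range(m.shape[0]):
            open_cols = np.flatnonzero(m[r] > 0)
            if open_cols.size:
                start = end = open_cols[0]          # open one contiguous run per row
                while end + 1 < m.shape[1] and m[r, end + 1] > 0:
                    end += 1
                aperture[r, start:end + 1] = 1
        m -= aperture
        apertures.append(aperture)
    return apertures

aps = peel_apertures([[1, 2, 2, 0], [0, 3, 1, 1]])
print(len(aps))   # 3 apertures (and 3 monitor units) for this toy map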
Antony, Bhavna Josephine; Kim, Byung-Jin; Lang, Andrew; Carass, Aaron; Prince, Jerry L; Zack, Donald J
2017-01-01
The use of spectral-domain optical coherence tomography (SD-OCT) is becoming commonplace for the in vivo longitudinal study of murine models of ophthalmic disease. Longitudinal studies, however, generate large quantities of data, the manual analysis of which is very challenging due to the time-consuming nature of generating delineations. Thus, it is of importance that automated algorithms be developed to facilitate accurate and timely analysis of these large datasets. Furthermore, as the models target a variety of diseases, the associated structural changes can also be extremely disparate. For instance, in the light damage (LD) model, which is frequently used to study photoreceptor degeneration, the outer retina appears dramatically different from the normal retina. To address these concerns, we have developed a flexible graph-based algorithm for the automated segmentation of mouse OCT volumes (ASiMOV). This approach incorporates a machine-learning component that can be easily trained for different disease models. To validate ASiMOV, the automated results were compared to manual delineations obtained from three raters on healthy and BALB/cJ mice post LD. It was also used to study a longitudinal LD model, where five control and five LD mice were imaged at four timepoints post LD. The total retinal thickness and the outer retina (comprising the outer nuclear layer, and inner and outer segments of the photoreceptors) were unchanged the day after the LD, but subsequently thinned significantly (p < 0.01). The retinal nerve fiber-ganglion cell complex and the inner plexiform layers, however, remained unchanged for the duration of the study.
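A generic building block behind graph-based layer segmentation of this kind is finding a layer boundary as a minimum-cost path through a per-pixel boundary-cost image; the dynamic-programming sketch below illustrates that idea only and is not the actual ASiMOV graph construction or its machine-learned costs.

import numpy as np

def trace_boundary(cost, max_jump=2):
    """cost: (rows, cols) array, low where a boundary is likely; returns one row index per column."""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = lo + int(np.argmin(acc[lo:hi, c - 1]))   # best smooth predecessor
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    boundary = np.zeros(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):                        # backtrack the optimal path
        boundary[c - 1] = back[boundary[c], c]
    return boundary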
Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis
2015-01-01
The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and stop, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
Rönnberg, Tuomas; Jääskeläinen, Kirsi; Blot, Guillaume; Parviainen, Ville; Vaheri, Antti; Renkonen, Risto; Bouloy, Michele; Plyusnin, Alexander
2012-01-01
Hantaviruses (Bunyaviridae) are negative-strand RNA viruses with a tripartite genome. The small (S) segment encodes the nucleocapsid protein and, in some hantaviruses, also the nonstructural protein (NSs). The aim of this study was to find potential cellular partners for the hantaviral NSs protein. Toward this aim, yeast two-hybrid (Y2H) screening of mouse cDNA library was performed followed by a search for potential NSs protein counterparts via analyzing a cellular interactome. The resulting interaction network was shown to form logical, clustered structures. Furthermore, several potential binding partners for the NSs protein, for instance ACBD3, were identified and, to prove the principle, interaction between NSs and ACBD3 proteins was demonstrated biochemically.
Cook, Timothy Wayne; Cavalini, Luciana Tricai
2016-01-01
Objectives: To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods: This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results: Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated, and the RDF-based semantics were recovered by the receiving side of the prototype from the shared XML schema. Conclusions: This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard with Semantic Web technologies. PMID:26893947
Worthington, Amber K; Parrott, Roxanne L; Smith, Rachel A
2018-04-01
A growing number of genetic tests are included in diagnostic protocols associated with many common conditions. A positive diagnosis associated with the presence of some gene versions in many instances predicts a range of possible outcomes, and the uncertainty linked to such results contributes to the need to understand varied responses and plan strategic communication. Uncertainty in illness theory (UIT; Mishel, 1988, 1990) guided the investigation of efforts to feel in control and hopeful regarding genetic testing and diagnosis for alpha-1 antitrypsin deficiency (AATD). Participants included 137 individuals with AATD recruited from the Alpha-1 Research Registry who were surveyed about their subjective numeracy, anxiety about math, spirituality, perceptions of illness unpredictability, negative affect regarding genetic testing, and coping strategies about a diagnosis. Results revealed that experiencing more fear and worry contributed both directly and indirectly to affect-management coping strategies, operating through individual perceptions of illness unpredictability. The inability to predict the symptoms and course of events related to a genetic illness and anxiety regarding math heightened fear and worry. Spirituality lessened both illness unpredictability and negative affective responses to a diagnosis. Results affirm the importance of clinician and counselor efforts to incorporate attention to patient spirituality. They also illustrate the complexity associated with strategic efforts to plan communication about the different versions of a gene's effects on well-being, when some versions align with mild health effects and others with severe effects.
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Bednarcyk, Brett A.
1997-01-01
An efficient implementation of the generalized method of cells micromechanics model is presented that allows analysis of periodic unidirectional composites characterized by repeating unit cells containing thousands of subcells. The original formulation, given in terms of Hill's strain concentration matrices that relate average subcell strains to the macroscopic strains, is reformulated in terms of the interfacial subcell tractions as the basic unknowns. This is accomplished by expressing the displacement continuity equations in terms of the stresses and then imposing the traction continuity conditions directly. The result is a mixed formulation wherein the unknown interfacial subcell traction components are related to the macroscopic strain components. Because the stress field throughout the repeating unit cell is piece-wise uniform, the imposition of traction continuity conditions directly in the displacement continuity equations, expressed in terms of stresses, substantially reduces the number of unknown subcell traction (and stress) components, and thus the size of the system of equations that must be solved. Further reduction in the size of the system of continuity equations is obtained by separating the normal and shear traction equations in those instances where the individual subcells are, at most, orthotropic. The reformulated version facilitates detailed analysis of the impact of the fiber cross-section geometry and arrangement on the response of multi-phased unidirectional composites with and without evolving damage. Comparison of execution times obtained with the original and reformulated versions of the generalized method of cells demonstrates the new version's efficiency.
DIANA-microT web server v5.0: service integration into miRNA functional analysis workflows.
Paraskevopoulou, Maria D; Georgakilas, Georgios; Kostoulas, Nikos; Vlachos, Ioannis S; Vergoulis, Thanasis; Reczko, Martin; Filippidis, Christos; Dalamagas, Theodore; Hatzigeorgiou, A G
2013-07-01
MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. The DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it has been widely used by the scientific community since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA-gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANA-microT web server v5.0 also supports complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines.
Intra-adrenal Aldosterone Secretion: Segmental Adrenal Venous Sampling for Localization.
Satani, Nozomi; Ota, Hideki; Seiji, Kazumasa; Morimoto, Ryo; Kudo, Masataka; Iwakura, Yoshitsugu; Ono, Yoshikiyo; Nezu, Masahiro; Omata, Kei; Ito, Sadayoshi; Satoh, Fumitoshi; Takase, Kei
2016-01-01
To use segmental adrenal venous sampling (AVS) (S-AVS) of effluent tributaries (a version of AVS that, in addition to helping identify aldosterone hypersecretion, also enables the evaluation of intra-adrenal hormone distribution) to detect and localize intra-adrenal aldosterone secretion. The institutional review board approved this study, and all patients provided informed consent. S-AVS was performed in 65 patients with primary aldosteronism (34 men; mean age, 50.9 years ± 11 [standard deviation]). A microcatheter was inserted in first-degree tributary veins. Unilateral aldosterone hypersecretion at the adrenal central vein was determined according to the lateralization index after cosyntropin stimulation. Excess aldosterone secretion at the adrenal tributary vein was considered to be present when the aldosterone/cortisol ratio from this vein exceeded that from the external iliac vein; suppressed secretion was indicated by the opposite pattern. Categoric variables were expressed as numbers and percentages; continuous variables were expressed as means ± standard errors of the mean. The AVS success rate, indicated by a selectivity index of 5 or greater, was 98% (64 of 65). The mean numbers of sampled tributaries on the left and right sides were 2.11 and 1.02, respectively. The following diagnoses were made on the basis of S-AVS results: unilateral aldosterone hypersecretion in 30 patients, bilateral hypersecretion without suppressed segments in 22 patients, and bilateral hypersecretion with at least one suppressed segment in 12 patients. None of the patients experienced severe complications. S-AVS could be used to identify heterogeneous intra-adrenal aldosterone secretion. Patients who have bilateral aldosterone-producing adenomas can be treated with adrenal-sparing surgery or other minimally invasive local therapies if any suppressed segment is identified at S-AVS. © RSNA, 2015.
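The sampling criteria described above reduce to simple ratio arithmetic, sketched below in Python with made-up concentrations; all numbers are illustrative, and no clinical cutoffs beyond those stated in the abstract are implied.

def ac_ratio(aldosterone, cortisol):
    return aldosterone / cortisol

iliac = ac_ratio(aldosterone=15.0, cortisol=12.0)            # peripheral (external iliac) reference
samples = {"left superior tributary": ac_ratio(5500.0, 600.0),
           "left lateral tributary": ac_ratio(90.0, 450.0),
           "right central vein": ac_ratio(800.0, 700.0)}

for name, ratio in samples.items():
    status = "excess secretion" if ratio > iliac else "suppressed secretion"
    print(f"{name}: A/C = {ratio:.2f} -> {status}")

# lateralization index between the two adrenal central veins (post-cosyntropin)
li = ac_ratio(5500.0, 600.0) / ac_ratio(800.0, 700.0)
print(f"lateralization index = {li:.1f}")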
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction brain analysis, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures and makes segmentation tasks difficult. The proposed method overcomes all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical multiple sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
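As background for the second method, the plain fuzzy c-means iteration (without the bias-field correction used in the paper) alternates membership and centroid updates as sketched below; the cluster count and fuzziness exponent are illustrative choices.

import numpy as np

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=50, eps=1e-9, seed=0):
    """x: 1D array of voxel intensities; returns memberships (N, c) and centers (c,)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, n_clusters, replace=False)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + eps                    # (N, c) distances
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)           # weighted centroids
    return u, centers

x = np.concatenate([np.random.default_rng(i).normal(mu, 5.0, 500)
                    for i, mu in enumerate((20, 80, 150))])                # toy intensity data
u, centers = fuzzy_cmeans(x)

The bias-corrected variant additionally estimates a smooth multiplicative intensity-inhomogeneity field inside the same alternating scheme.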
Mishra, Rakesh; Jayaraman, Murali; Roland, Bartholomew P.; Landrum, Elizabeth; Fullam, Timothy; Kodali, Ravindra; Thakur, Ashwani K.; Arduini, Irene; Wetzel, Ronald
2011-01-01
Although oligomeric intermediates are transiently formed in almost all known amyloid assembly reactions, their mechanistic roles are poorly understood. Recently we demonstrated a critical role for the 17 amino acid N-terminal segment (httNT) of huntingtin (htt) in oligomer-mediated amyloid assembly of htt N-terminal fragments. In this mechanism, the httNT segment forms the α-helix rich core of the oligomers, leaving most or all of each polyglutamine (polyQ) segment disordered and solvent-exposed. Nucleation of amyloid structure occurs within this local high concentration of disordered polyQ. Here we demonstrate the kinetic importance of httNT self-assembly by describing inhibitory httNT-containing peptides that appear to work by targeting nucleation within the oligomer fraction. These molecules inhibit amyloid nucleation by forming mixed oligomers with the httNT domains of polyQ-containing htt N-terminal fragments. In one class of inhibitor, nucleation is passively suppressed due to the reduced local concentration of polyQ within the mixed oligomer. In the other class, nucleation is actively suppressed by a proline-rich polyQ segment covalently attached to httNT. Studies with D-amino acid and scrambled sequence versions of httNT suggest that inhibition activity is strongly linked to the propensity of inhibitory peptides to make amphipathic α-helices. HttNT derivatives with C-terminal cell penetrating peptide segments, also exhibit excellent inhibitory activity. The httNT-based peptides described here, especially those with protease-resistant D-amino acids and/or with cell penetrating sequences, may prove useful as lead therapeutics for inhibiting nucleation of amyloid formation in Huntington’s disease. PMID:22178478
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559
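The seed-clustering step (grouping voting-derived candidate centers so that each cell yields one seed) can be illustrated with scikit-learn's mean shift; the candidate points and bandwidth below are made up, and the single-path voting and level-set contouring stages of the paper are not reproduced here.

import numpy as np
from sklearn.cluster import MeanShift

candidate_points = np.array([[10.2, 11.0], [10.8, 10.5], [11.1, 11.3],   # votes near cell 1
                             [40.0, 42.1], [39.5, 41.7]])                # votes near cell 2
ms = MeanShift(bandwidth=5.0).fit(candidate_points)
seeds = ms.cluster_centers_          # one seed per detected cell
print(len(seeds), "seeds")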
Medical image segmentation by combining graph cuts and oriented active appearance models.
Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua
2012-04-01
In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) an overall segmentation accuracy with a true positive volume fraction (TPVF) above 94.3% and a low false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogunovic, Hrvoje; Pozo, Jose Maria; Villa-Uriol, Maria Cruz
Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D x-ray reconstruction angiography (3DRA) and time of flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols, and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients containing intracranial aneurysms in the area of the Circle of Willis and imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: isointensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an intermodality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to manual measurements and ISE. The intermodality agreement was similar between GAR and the manual measurements. Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from clinical routine.
BOSS: context-enhanced search for biomedical objects
2012-01-01
Background There exist many academic search solutions, and most of them fall at one of the two ends of a spectrum: general-purpose search and domain-specific "deep" search systems. The general-purpose search systems, such as PubMed, offer a flexible query interface, but churn out a list of matching documents that users have to go through in order to find the answers to their queries. On the other hand, the "deep" search systems, such as PPI Finder and iHOP, return precompiled results in a structured way. Their results, however, are often found only within some predefined contexts. In order to alleviate these problems, we introduce a new search engine, BOSS, the Biomedical Object Search System. Methods Unlike conventional search systems, BOSS indexes segments, rather than documents. A segment refers to a Maximal Coherent Semantic Unit (MCSU) such as a phrase, clause or sentence that is semantically coherent in the given context (e.g., biomedical objects or their relations). For a user query, BOSS finds all matching segments, identifies the objects appearing in those segments, and aggregates the segments for each object. Finally, it returns the ranked list of the objects along with their matching segments. Results The working prototype of BOSS is available at http://boss.korea.ac.kr. The current version of BOSS has indexed abstracts of more than 20 million articles published during the 16 years from 1996 to 2011 across all science disciplines. Conclusion BOSS fills the gap between the two ends of the spectrum by allowing users to pose context-free queries and by returning a structured set of results. Furthermore, BOSS exhibits good scalability, just as conventional document search engines do, because it is designed to use a standard document-indexing model with minimal modifications. Considering these features, BOSS raises the technological level of traditional solutions for searching biomedical information. PMID:22595092
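A toy sketch of segment-level indexing with per-object aggregation follows; the example segments, object list, and whitespace tokenisation are hypothetical simplifications used only to illustrate the idea of indexing segments rather than documents, not the actual BOSS implementation.

```python
# Minimal sketch of segment-level indexing: index sentences ("segments")
# rather than whole documents, then aggregate matching segments per object.
from collections import defaultdict

segments = [
    ("seg1", "TP53 interacts with MDM2 in the nucleus"),
    ("seg2", "MDM2 promotes TP53 degradation"),
    ("seg3", "BRCA1 is associated with breast cancer risk"),
]
objects = {"TP53", "MDM2", "BRCA1"}          # hypothetical biomedical objects

index = defaultdict(set)                      # term -> segment ids
for seg_id, text in segments:
    for term in text.lower().split():
        index[term].add(seg_id)

def search(query):
    hits = set.intersection(*[index.get(t, set()) for t in query.lower().split()])
    by_object = defaultdict(list)             # object -> matching segments
    for seg_id, text in segments:
        if seg_id in hits:
            for obj in objects:
                if obj.lower() in text.lower():
                    by_object[obj].append(seg_id)
    return dict(by_object)

print(search("TP53 MDM2"))   # {'TP53': ['seg1'], 'MDM2': ['seg1']}
```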
NASA Technical Reports Server (NTRS)
Davis, J. E.; Medan, R. T.
1977-01-01
This segment of the POTFAN system is used to generate right hand sides (boundary conditions) of the system of equations associated with the flow field under consideration. These specified flow boundary conditions are encountered in the oblique derivative boundary value problem (boundary value problem of the third kind) and contain the Neumann boundary condition as a special case. Arbitrary angle of attack and/or sideslip and/or rotation rates may be specified, as well as an arbitrary, nonuniform external flow field and the influence of prescribed singularity distributions.
STS-97 (4A) EVA training in NBL pool
2000-10-23
JSC2000-07082 (October 2000)--- Wearing a training version of the shuttle extravehicular mobility unit (EMU) space suit, astronaut Joseph R. Tanner, STS-97 mission specialist, simulates a space walk underwater in the giant Neutral Buoyancy Laboratory (NBL). Tanner was there, along with astronaut Carlos I. Noriega, to rehearse one of three scheduled space walks to make additions to the International Space Station (ISS). The five-man crew in early December will deliver the P6 Integrated Truss Segment, which includes the first U.S. solar arrays and a power distribution system.
Connected word recognition using a cascaded neuro-computational model
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2016-10-01
We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.
Methods for detection of ataxia telangiectasia mutations
Gatti, Richard A.
2005-10-04
The present invention is directed to a method of screening large, complex, polyexonic eukaryotic genes such as the ATM gene for mutations and polymorphisms by an improved version of single strand conformation polymorphism (SSCP) electrophoresis that allows electrophoresis of two or three amplified segments in a single lane. The present invention also is directed to new mutations and polymorphisms in the ATM gene that are useful in performing more accurate screening of human DNA samples for mutations and in distinguishing mutations from polymorphisms, thereby improving the efficiency of automated screening methods.
1993-03-01
This document serves as a reference for DoD trading partners who need to map and translate EDI Transaction Set 210 (Motor Carrier Invoice). All trading partners who plan to exchange Transaction Set 210 with DoD can use it as a reference for the development of their translation packages. It includes the segment hierarchy for the DoD model of Transaction Set 210 (Table 10.7-2).
Distributed MRI reconstruction using Gadgetron-based cloud computing.
Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S
2015-03-01
To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and l1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.
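The distribution idea, reduced to its simplest form, is to farm independent reconstruction jobs out to a pool of workers. The sketch below illustrates that pattern with Python's multiprocessing and a placeholder inverse-FFT "reconstruction"; it is not the Gadgetron API, and all names and data are stand-ins.

```python
# Conceptual sketch only (not the Gadgetron API): distribute independent
# reconstruction jobs across worker processes, the same idea the multinode
# Gadgetron applies across machines.
import numpy as np
from multiprocessing import Pool

def reconstruct(kspace_slice):
    """Placeholder reconstruction: inverse FFT of one k-space slice."""
    return np.abs(np.fft.ifft2(kspace_slice))

if __name__ == "__main__":
    kspace = [np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
              for _ in range(9)]              # nine hypothetical acquisitions
    with Pool(processes=4) as pool:
        images = pool.map(reconstruct, kspace)
    print(len(images), images[0].shape)
```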
Solving Multiple Isolated, Interleaved, and Blended Tasks through Modular Neuroevolution.
Schrum, Jacob; Miikkulainen, Risto
2016-01-01
Many challenging sequential decision-making problems require agents to master multiple tasks. For instance, game agents may need to gather resources, attack opponents, and defend against attacks. Learning algorithms can thus benefit from having separate policies for these tasks, and from knowing when each one is appropriate. How well this approach works depends on how tightly coupled the tasks are. Three cases are identified: Isolated tasks have distinct semantics and do not interact, interleaved tasks have distinct semantics but do interact, and blended tasks have regions where semantics from multiple tasks overlap. Learning across multiple tasks is studied in this article with Modular Multiobjective NEAT, a neuroevolution framework applied to three variants of the challenging Ms. Pac-Man video game. In the standard blended version of the game, a surprising, highly effective machine-discovered task division surpasses human-specified divisions, achieving the best scores to date in this game. In isolated and interleaved versions of the game, human-specified task divisions are also successful, though the best scores are surprisingly still achieved by machine discovery. Modular neuroevolution is thus shown to be capable of finding useful, unexpected task divisions better than those apparent to a human designer.
JEnsembl: a version-aware Java API to Ensembl data systems.
Paterson, Trevor; Law, Andy
2012-11-01
The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications or for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
The Registration and Segmentation of Heterogeneous Laser Scanning Data
NASA Astrophysics Data System (ADS)
Al-Durgham, Mohannad M.
Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three dimensional point data. Besides the conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR led to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted), which caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, one should address the heterogeneous LiDAR data post processing (i.e., registration and segmentation) problems adequately. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP), denoted the Iterative Closest Projected Point (ICPP), is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and their noise level. In addition, similar tests have been conducted to examine the heterogeneous segmentation, and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans which has a positive impact on the completeness of the scanned feature. Therefore, the derived products from these point clouds have higher accuracy as seen in the full manuscript.
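For orientation, the registration step builds on the iterative closest point idea. The following is a minimal sketch of a standard point-to-point ICP loop in NumPy/SciPy, not the dissertation's ICPP variant, which additionally projects correspondences onto planar segments.

```python
# Minimal sketch of a standard point-to-point ICP iteration.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=20):
    """Align source (N,3) to target (M,3); returns the transformed source."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iter):
        _, idx = tree.query(src)              # closest target point per source point
        corr = target[idx]
        mu_s, mu_t = src.mean(0), corr.mean(0)
        H = (src - mu_s).T @ (corr - mu_t)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                   # apply the rigid transform
    return src
```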
Horror Image Recognition Based on Context-Aware Multi-Instance Learning.
Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng
2015-12-01
Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.
Competition in the domain of wireless networks security
NASA Astrophysics Data System (ADS)
Bednarczyk, Mariusz
2017-04-01
Wireless networks are very popular and have found widespread use across various segments, including the military environment. Deploying wireless infrastructure reduces the time it takes to install and dismantle communications networks. With wireless, users are more mobile and can easily get access to network resources all the time. However, wireless technologies like WiFi or Bluetooth have security issues that hackers have extensively exploited over the years. In the paper several serious security flaws in wireless technologies are presented. Most of them make it possible to gain access to internal networks and to easily carry out man-in-the-middle attacks. Very often, they are used to launch massive denial of service attacks that target the physical infrastructure as well as the RF spectrum. For instance, there are well-known instances of Bluetooth connection spoofing used to steal WiFi passwords stored on mobile devices. To raise security awareness and protect wireless networks against adversary attacks, an analysis of attack methods and tools over time is presented in the article. Particular attention is paid to severity, possible targets, and the ability to persist in the face of protective measures. Results show that an adversary can take complete control of the victims' mobile device features if users neglect simple safety principles.
Dripping from Rough Multi-Segmented Fracture Sets into Unsaturated Rock Underground Excavations
NASA Astrophysics Data System (ADS)
Cesano, D.; Bagtzoglou, A. C.
2001-05-01
The aim of this paper is to present a probabilistic analytical formulation of unsaturated flow through a single rough multi-segmented fracture, with the ultimate goal of providing a numerical platform with which to perform calculations on the dripping initiation time and to explain the fast flow-paths detected and reported by Fabryka-Martin et al. (1996). To accomplish this, an enhanced version of the Wang and Narasimhan model (Wang and Narasimhan, 1985; 1993), the Enhanced Wang and Narasimhan Model (EWNM), has been used. In the EWNM, a fracture is formed by a finite number of connected fracture segments of given strike and dip. These parameters are sampled from hypothetical probability density functions. Unsaturated water flow occurs in these fracture segments, and in order for dripping to occur it is assumed that local saturation conditions exist at the surface and at the tunnel level where dripping occurs. The current version of the EWNM ignores transient flow processes and thus assumes that the flow system is at equilibrium. The fracture segments are considered as rough fractures, with their roughness characterized by an aperture distribution function that can be derived from real field data. The roughness along each fracture segment is considered to be constant, leading to a constant effective aperture, and it is randomly assigned. An effective flow area is also included in the model, which accounts for three-dimensional variations of the fracture area that can possibly be occupied by water. The model takes into account the possibility that the fracture crosses multiple layers, each of which can have a different configuration in the values of the input parameters. Monte Carlo simulations calculate average times for water to flow from the top to the bottom of the fracture for a specified number of random realizations. The random component of the realizations comprises the different geometric configurations of the fracture flow path, while the values of all the input parameters and the statistical distributions they honor are kept constant from realization to realization. This travel time, called the dripping initiation time, is the cumulative sum of the time it takes for the water to drip through all fracture segments and eventually reach the tunnel. Based on the results of a sensitivity analysis, three different scenarios of input parameters were used to test the validity of the model against the fast flow-paths detected and reported in the Fabryka-Martin et al. (1996) study. The three scenarios differed from each other in the response of the dripping initiation times. These three different parameter configurations were then tested at three different depths. Each depth represented a different location where fast flow has been detected at Yucca Mountain and reported by Fabryka-Martin et al. (1996). The first depth is considered representative of a location corresponding to the Bow Ridge Fault. The second location represents a network of steep fractures and cooling joints with large variability in dip reaching the ESF at a depth of 180 meters. The third location, which is probably connected to the Diabolous Ridge Fault, is 290 meters deep and the flow path is low-dipping. Monte Carlo simulations were run for each configuration at each depth to calculate average dripping initiation times, so that results from 9 scenarios were produced. The final conclusion is that the model is able to produce results quite consistent with the Fabryka-Martin et al. (1996) study.
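A heavily simplified sketch of the Monte Carlo idea (summing per-segment travel times over random fracture configurations) is given below; the distributions, the toy velocity relation, and all parameter values are hypothetical placeholders, not the EWNM parameterisation.

```python
# Illustrative Monte Carlo sketch: each realization draws a random geometric
# configuration of fracture segments and sums the per-segment travel times.
import numpy as np

rng = np.random.default_rng(42)

def dripping_initiation_time(n_segments=30):
    lengths = rng.uniform(0.5, 2.0, n_segments)         # segment lengths [m], assumed
    apertures = rng.lognormal(-8.0, 0.5, n_segments)     # effective apertures [m], assumed
    velocity = 1e6 * apertures**2                        # toy aperture-to-velocity relation [m/s]
    return np.sum(lengths / velocity)                    # cumulative travel time [s]

times = np.array([dripping_initiation_time() for _ in range(1000)])
print(f"mean dripping initiation time: {times.mean():.1f} s")
```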
Davies, Louise; Donnelly, Kyla Z; Goodman, Daisy J; Ogrinc, Greg
2016-01-01
Background The Standards for Quality Improvement Reporting Excellence (SQUIRE) Guideline was published in 2008 (SQUIRE 1.0) and was the first publication guideline specifically designed to advance the science of healthcare improvement. Advances in the discipline of improvement prompted us to revise it. We adopted a novel approach to the revision by asking end-users to ‘road test’ a draft version of SQUIRE 2.0. The aim was to determine whether they understood and implemented the guidelines as intended by the developers. Methods Forty-four participants were assigned a manuscript section (ie, introduction, methods, results, discussion) and asked to use the draft Guidelines to guide their writing process. They indicated the text that corresponded to each SQUIRE item used and submitted it along with a confidential survey. The survey examined usability of the Guidelines using Likert-scaled questions and participants’ interpretation of key concepts in SQUIRE using open-ended questions. On the submitted text, we evaluated concordance between participants’ item usage/interpretation and the developers’ intended application. For the survey, the Likert-scaled responses were summarised using descriptive statistics and the open-ended questions were analysed by content analysis. Results Consistent with the SQUIRE Guidelines’ recommendation that not every item be included, less than one-third (n=14) of participants applied every item in their section in full. Of the 85 instances when an item was partially used or was omitted, only 7 (8.2%) of these instances were due to participants not understanding the item. Usage of Guideline items was highest for items most similar to standard scientific reporting (ie, ‘Specific aim of the improvement’ (introduction), ‘Description of the improvement’ (methods) and ‘Implications for further studies’ (discussion)) and lowest (<20% of the time) for those unique to healthcare improvement (ie, ‘Assessment methods for context factors that contributed to success or failure’ and ‘Costs and strategic trade-offs’). Items unique to healthcare improvement, specifically ‘Evolution of the improvement’, ‘Context elements that influenced the improvement’, ‘The logic on which the improvement was based’, ‘Process and outcome measures’, demonstrated poor concordance between participants’ interpretation and developers’ intended application. Conclusions User testing of a draft version of SQUIRE 2.0 revealed which items have poor concordance between developer intent and author usage, which will inform final editing of the Guideline and development of supporting supplementary materials. It also identified the items that require special attention when teaching about scholarly writing in healthcare improvement. PMID:26263916
NASA Astrophysics Data System (ADS)
Barra, Beatrice; El Hadji, Sara; De Momi, Elena; Ferrigno, Giancarlo; Cardinale, Francesco; Baselli, Giuseppe
2017-03-01
Several neurosurgical procedures, such as arteriovenous malformation (AVM) and aneurysm embolizations and StereoElectroEncephaloGraphy (SEEG), require accurate reconstruction of the cerebral vascular tree, as well as the classification of arteries and veins, in order to increase the safety of the intervention. Segmentation of arteries and veins from 4D CT perfusion scans has already been proposed in different studies. Nonetheless, such procedures require long acquisition protocols and the radiation dose given to the patient is not negligible. Hence, space is open to approaches attempting to recover the dynamic information from standard Contrast Enhanced Cone Beam Computed Tomography (CE-CBCT) scans. The algorithm proposed by our team is called ART 3.5 D. It is a novel algorithm based on the postprocessing of both the angiogram and the raw data of a standard Digital Subtraction Angiography from a CBCT (DSACBCT), allowing artery and vein segmentation and labeling without requiring any additional radiation exposure for the patient or lowering the resolution. In addition, while in previous versions of the algorithm just the distinction of arteries and veins was considered, here the capillary phase simulation and identification is introduced, in order to provide further information useful for more precise vasculature segmentation.
Automatic firearm class identification from cartridge cases
NASA Astrophysics Data System (ADS)
Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.
2011-03-01
We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of FPI and FPAO are extracted, a shape-based level set method is used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme and the promising nature of the results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
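As a hedged sketch of the first algorithmic step, the roughly circular primer surface area can be located with a circular Hough transform using OpenCV; the file name, radius range, and thresholds below are placeholders for illustration, not the paper's values.

```python
# Illustrative PSA detection via circular Hough transform (OpenCV).
import cv2
import numpy as np

img = cv2.imread("cartridge_head.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is not None:
    img = cv2.medianBlur(img, 5)                               # suppress interferometric noise
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                               param1=100, param2=40,
                               minRadius=80, maxRadius=200)    # placeholder radius range
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)          # strongest circle taken as the PSA
        print(f"PSA centre: ({x}, {y}), radius: {r}")
```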
Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago
2018-03-02
Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, thus a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, devoid of user intervention within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline using the consensus of several manually curated segmentations using commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License ( https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva ).
Zhang, Xiaodong; Jing, Shasha; Gao, Peiyi; Xue, Jing; Su, Lu; Li, Weiping; Ren, Lijie; Hu, Qingmao
2016-01-01
Segmentation of infarcts at the hyperacute stage is challenging because infarcts exhibit substantial variability, which may make them hard even for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items including three volumes of diffusion weighted imaging and a computed asymmetry map are employed to extract patch features which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation to stabilize the sparse code. To decrease computation cost and to reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. It is shown that the proposed method could handle well infarcts with intensity variability and ill-defined edges to yield a significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a potential tool to quantify infarcts from diffusion weighted imaging at the hyperacute stage with accuracy and speed to assist decision making, especially for thrombolytic therapy.
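As a schematic illustration of the elastic-net sparse-coding step (not the authors' code), a patch feature vector can be coded against a dictionary with scikit-learn; the dictionary, patch, and penalty weights below are random stand-ins.

```python
# Sparse coding of one patch against a dictionary under an elastic-net penalty.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 256))      # dictionary: 100-dim features, 256 atoms (placeholder)
x = rng.standard_normal(100)             # one patch feature vector (placeholder)

coder = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False, max_iter=5000)
coder.fit(D, x)                          # minimises ||x - D c|| with L1 + L2 penalties on c
code = coder.coef_                       # sparse code, stabilised by the L2 term
print("non-zero coefficients:", np.count_nonzero(code))
```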
Rodríguez, Erika E.; Hernández-Lemus, Enrique; Itzá-Ortiz, Benjamín A.; Jiménez, Ismael; Rudomín, Pablo
2011-01-01
The analysis of the interaction and synchronization of relatively large ensembles of neurons is fundamental for the understanding of complex functions of the nervous system. It is known that the temporal synchronization of neural ensembles is involved in the generation of specific motor, sensory or cognitive processes. Also, the intersegmental coherence of spinal spontaneous activity may indicate the existence of synaptic neural pathways between different pairs of lumbar segments. In this study we present a multichannel version of the detrended fluctuation analysis method (mDFA) to analyze the correlation dynamics of spontaneous spinal activity (SSA) from time series analysis. This method together with the classical detrended fluctuation analysis (DFA) were used to find out whether the SSA recorded in one or several segments in the spinal cord of the anesthetized cat occurs either in a random or in an organized manner. Our results are consistent with a non-random organization of the sets of neurons involved in the generation of spontaneous cord dorsum potentials (CDPs) recorded either from one lumbar segment (DFA-α mean = 1.04 ± 0.09) or simultaneously from several lumbar segments (mDFA-α mean = 1.01 ± 0.06), where α = 0.5 indicates randomness while α > 0.5 indicates long-term correlations. To test the sensitivity of the mDFA method we also examined the effects of small spinal lesions aimed to partially interrupt connectivity between neighboring lumbosacral segments. We found that the synchronization and correlation between the CDPs recorded from the L5 and L6 segments in both sides of the spinal cord were reduced when a lesion comprising the left dorsal quadrant was performed between the segments L5 and L6 (mDFA-α = 0.992 as compared to initial conditions mDFA-α = 1.186). The synchronization and correlation were reduced even further after a similar additional right spinal lesion (mDFA-α = 0.924). In contrast to the classical methods, such as correlation and coherence quantification that define a relation between two sets of data, the mDFA method properly reveals the synchronization of multiple groups of neurons in several segments of the spinal cord. This method is envisaged as a useful tool to characterize the structure of higher order ensembles of cord dorsum spontaneous potentials after spinal cord or peripheral nerve lesions. PMID:22046288
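For orientation, a minimal single-channel DFA can be sketched as follows (the paper's mDFA generalises this to several simultaneously recorded channels); the window scales and the white-noise test signal are arbitrary choices for illustration.

```python
# Minimal single-channel DFA: the scaling exponent alpha is the slope of
# log F(n) versus log n after local linear detrending in windows of size n.
import numpy as np

def dfa(x, scales=(16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))                        # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]   # alpha

print(dfa(np.random.default_rng(1).standard_normal(4096)))  # ~0.5 for white noise
```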
Oligomerisation of Synaptobrevin-2 Studied by Native Mass Spectrometry and Chemical Cross-Linking
NASA Astrophysics Data System (ADS)
Wittig, Sabine; Haupt, Caroline; Hoffmann, Waldemar; Kostmann, Susann; Pagel, Kevin; Schmidt, Carla
2018-06-01
Synaptobrevin-2 is a key player in signal transmission in neurons. It forms, together with SNAP25 and Syntaxin-1A, the neuronal soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex and mediates exocytosis of synaptic vesicles with the pre-synaptic membrane. While Synaptobrevin-2 is part of a four-helix bundle in this SNARE complex, it is natively unstructured in the absence of lipids or other SNARE proteins. Partially folded segments, presumably SNARE complex formation intermediates, as well as formation of Synaptobrevin-2 dimers and oligomers, were identified in previous studies. Here, we employ three Synaptobrevin-2 variants—the full-length protein Syb(1-116), the soluble, cytosolic variant Syb(1-96) as well as a shorter version Syb(49-96) containing structured segments but omitting a trigger site for SNARE complex formation—to study oligomerisation in the absence of interaction partners or when incorporated into the lipid bilayer of liposomes. Combining native mass spectrometry with chemical cross-linking, we find that the truncated versions show increased oligomerisation. Our findings from both techniques agree well and confirm the presence of oligomers in solution while membrane-bound Synaptobrevin-2 is mostly monomeric. Using ion mobility mass spectrometry, we could further show that lower charge states of Syb(49-96) oligomers, which most likely represent solution structures, follow an isotropic growth curve suggesting that they are intrinsically disordered. From a technical point of view, we show that the combination of native ion mobility mass spectrometry with chemical cross-linking is well-suited for the analysis of protein homo-oligomers.
Three-dimensional automatic computer-aided evaluation of pleural effusions on chest CT images
NASA Astrophysics Data System (ADS)
Bi, Mark; Summers, Ronald M.; Yao, Jianhua
2011-03-01
The ability to estimate the volume of pleural effusions is desirable as it can provide information about the severity of the condition and the need for thoracentesis. We present here an improved version of an automated program to measure the volume of pleural effusions using regular chest CT images. First, the lungs are segmented using region growing, mathematical morphology, and anatomical knowledge. The visceral and parietal layers of the pleura are then extracted based on anatomical landmarks, curve fitting and active contour models. The liver and compressed tissues are segmented out using thresholding. The pleural space is then fitted to a Bezier surface which is subsequently projected onto the individual two-dimensional slices. Finally, the volume of the pleural effusion is quantified. Our method was tested on 15 chest CT studies and validated against three separate manual tracings. The Dice coefficients were 0.74 ± 0.07, 0.74 ± 0.08, and 0.75 ± 0.07, respectively, comparable to the variation between two different manual tracings.
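The reported Dice coefficients compare the automated segmentation against a manual tracing; a small helper shows the computation (binary masks of identical shape assumed).

```python
# Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(auto_mask, manual_mask):
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    overlap = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * overlap / (auto_mask.sum() + manual_mask.sum())

# Toy example: 4 vs. 6 foreground pixels with 4 overlapping -> Dice = 0.8
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:4, 1:3] = True
print(round(dice(a, b), 3))
```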
NASA Astrophysics Data System (ADS)
Novelli, Antonio; Aguilar, Manuel A.; Nemmaoui, Abderrahim; Aguilar, Fernando J.; Tarantino, Eufemia
2016-10-01
This paper shows the first comparison between data from the Sentinel-2 (S2) Multi Spectral Instrument (MSI) and the Landsat 8 (L8) Operational Land Imager (OLI) aimed at greenhouse detection. Two scenes closely related in time, one from each sensor, were classified by using Object Based Image Analysis and Random Forest (RF). The RF input consisted of several object-based features computed from spectral bands, including mean values, spectral indices and textural features. S2 and L8 data comparisons were also extended using a common segmentation dataset extracted from VHR WorldView-2 (WV2) imagery to test differences only due to their specific spectral contribution. The best band combinations to perform segmentation were found through a modified version of the Euclidean Distance 2 index. Four different RF classification schemes were considered, achieving best overall accuracies of 89.1%, 91.3%, 90.9% and 93.4%, respectively, evaluated over the whole study area.
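A hedged sketch of the object-based Random Forest classification step follows (scikit-learn); the twelve object features and the labels are random placeholders, so the printed accuracy is meaningless except as a usage illustration.

```python
# Random forest over object-based feature vectors (mean band values,
# spectral indices, texture); all data below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))        # 500 image objects, 12 object-based features
y = rng.integers(0, 2, 500)               # 1 = greenhouse, 0 = other cover (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print(f"overall accuracy: {rf.score(X_te, y_te):.3f}")
```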
Diffusion tensor driven contour closing for cell microinjection targeting.
Becattini, Gabriele; Mattos, Leonardo S; Caldwell, Darwin G
2010-01-01
This article introduces a novel approach to robust automatic detection of unstained living cells in bright-field (BF) microscope images with the goal of producing a target list for an automated microinjection system. The overall image analysis process is described and includes: preprocessing, ridge enhancement, image segmentation, shape analysis and injection point definition. The developed algorithm implements a new version of anisotropic contour completion (ACC) based on the partial differential equation (PDE) for heat diffusion which improves the cell segmentation process by elongating the edges only along their tangent direction. The developed ACC algorithm is equivalent to a dilation of the binary edge image with a continuous elliptic structural element that takes into account the local orientation of the contours, preventing extension in the normal direction. Experiments carried out on real images of 10 to 50 μm CHO-K1 adherent cells show remarkable reliability of the algorithm, along with up to 85% success for cell detection and injection point definition.
Multicore and GPU algorithms for Nussinov RNA folding
2014-01-01
Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence through hydrogen bonding. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure. These algorithms are referred to as RNA folding algorithms. Results We develop cache efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single core code. The multicore version of the cache efficient single core algorithm provides a speedup, relative to the naive single core algorithm, between 7.5 and 14.0 on a 6 core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
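For reference, a minimal single-core Python version of the Nussinov dynamic program (the kind of baseline that the cache-efficient, multicore, and GPU variants above accelerate) looks roughly as follows; it maximises the number of base pairs and, for brevity, ignores the minimum loop length.

```python
# Nussinov dynamic program: dp[i][j] = max number of base pairs in seq[i..j].
def nussinov(seq):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                     # fill by increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # base i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):            # bifurcation into two subproblems
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # expect 3 nested pairs
```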
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberreuter, Johannes M., E-mail: johannes.oberreuter@theorie.physik.uni-goettingen.de; Homrighausen, Ingo; Kehrein, Stefan
We study the time evolution of entanglement in a new quantum version of the Kac ring, where two spin chains become dynamically entangled by quantum gates, which are used instead of the classical markers. The features of the entanglement evolution are best understood by using knowledge about the behavior of an ensemble of classical Kac rings. For instance, the recurrence time of the quantum many-body system is twice the length of the chain and “thermalization” only occurs on time scales much smaller than the dimension of the Hilbert space. The model thus elucidates the relation between the results of measurements in quantum and classical systems: While in classical systems repeated measurements are performed over an ensemble of systems, the corresponding result is obtained by measuring the same quantum system prepared in an appropriate superposition repeatedly.
Cosmology in the laboratory: An analogy between hyperbolic metamaterials and the Milne universe
NASA Astrophysics Data System (ADS)
Figueiredo, David; Moraes, Fernando; Fumeron, Sébastien; Berche, Bertrand
2017-11-01
This article shows that the compactified Milne universe geometry, a toy model for the big crunch/big bang transition, can be realized in hyperbolic metamaterials, a new class of nanoengineered systems which have recently found its way as an experimental playground for cosmological ideas. On one side, Klein-Gordon particles, as well as tachyons, are used as probes of the Milne geometry. On the other side, the propagation of light in two versions of a liquid crystal-based metamaterial provides the analogy. It is shown that ray and wave optics in the metamaterial mimic, respectively, the classical trajectories and wave function propagation, of the Milne probes, leading to the exciting perspective of realizing experimental tests of particle tunneling through the cosmic singularity, for instance.
Dual control and prevention of the turn-off phenomenon in a class of mimo systems
NASA Technical Reports Server (NTRS)
Mookerjee, P.; Bar-Shalom, Y.; Molusis, J. A.
1985-01-01
A recently developed methodology of adaptive dual control based upon sensitivity functions is applied here to a multivariable input-output model. The plant has constant but unknown parameters. It represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. The cautious controller and the new dual controller are examined. In many instances, the cautious controller is seen to turn off. The new dual controller modifies the cautious control design by numerator and denominator correction terms which depend upon the sensitivity functions of the expected future cost, and it avoids the turn-off and burst phenomena. Monte Carlo simulations and statistical tests of significance indicate the superiority of the dual controller over the cautious and the heuristic certainty equivalence controllers.
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g., presently the open community version of MySQL and a single-instance Oracle database server. This article describes the technological approach adopted to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control
NASA Astrophysics Data System (ADS)
Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José
2017-03-01
We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.
Unsupervised motion-based object segmentation refined by color
NASA Astrophysics Data System (ADS)
Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris
2003-06-01
For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems or have less correspondence to the true motion of objects when compared to block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance that a block is unique and thus decrease the chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.

NEW METHOD
As mentioned above, we start with motion segmentation and refine the edges of this segmentation with a pixel resolution colour segmentation method afterwards. There are several reasons for this approach:
+ Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous.
+ This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
+ The motion cue alone is often enough to reliably distinguish objects from one another and the background.
To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion.

BLOCK-BASED MOTION SEGMENTATION
As mentioned above, we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g., [kmeansc]) adapt K-means for connectedness by adding a weighted shape-error. This adds the additional difficulty of finding the correct weights for the shape parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame is used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments. A simplified illustration of this block-level starting point is given below.

COLOUR-BASED INTRA-BLOCK SEGMENTATION
The block resolution motion-based segmentation forms the starting point for the pixel resolution segmentation. The pixel resolution segmentation is obtained from the block resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to do the pixel resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel resolution segmentation itself, we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges.

SEGMENTATION MEASURE
To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for the segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences.

CONCLUSIONS
In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
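As a rough illustration of the block-resolution starting point described above (clustering block motion vectors), the sketch below uses plain k-means; the K-regions connectedness constraint and the affine motion models are omitted, and the motion field is synthetic.

```python
# Cluster block motion vectors into segments with plain k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w = 36, 64                                   # grid of blocks, not pixels
mv = np.zeros((h, w, 2))
mv[:, :32] = (2.0, 0.0) + 0.2 * rng.standard_normal((h, 32, 2))   # object moving right
mv[:, 32:] = (0.0, -1.5) + 0.2 * rng.standard_normal((h, 32, 2))  # background moving up

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mv.reshape(-1, 2))
segmentation = labels.reshape(h, w)             # block-resolution segment map
print(np.unique(segmentation, return_counts=True))
```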
Sunitinib-Induced Acute Interstitial Nephritis in a Thrombocytopenic Renal Cell Cancer Patient.
Azar, Ibrahim; Esfandiarifard, Saghi; Sinai, Pedram; Wazir, Ali; Foulke, Llewellyn; Mehdi, Syed
2017-01-01
Sunitinib, a multitargeted tyrosine kinase inhibitor (TKI), is currently the standard of care for patients with metastatic renal cell carcinoma. Renal adverse events associated with sunitinib include proteinuria, renal insufficiency secondary to focal segmental glomerulosclerosis (FSGS), and thrombotic microangiopathy. We describe the second reported instance of biopsy-proven sunitinib-induced acute interstitial nephritis (AIN), in a challenging case complicated by thrombocytopenia. The case illustrates the importance of early diagnosis and intervention in ensuring long-term recovery from renal complications. Four other cases of AIN reported along with inhibition of the vascular endothelial growth factor (VEGF) by either TKI (sunitinib and sorafenib) or antibodies (bevacizumab) suggest a possible class effect. Given our experience, we recommend monitoring renal function with VEGF inhibition, and in the case of renal failure in the setting of an unclear diagnosis, we recommend prompt biopsy.
Results of a pancreatectomy with a limited venous resection for pancreatic cancer.
Illuminati, Giulio; Carboni, Fabio; Lorusso, Riccardo; D'Urso, Antonio; Ceccanei, Gianluca; Papaspyropoulos, Vassilios; Pacile, Maria Antonietta; Santoro, Eugenio
2008-01-01
The indications for a pancreatectomy with a partial resection of the portal or superior mesenteric vein for pancreatic cancer, when the vein is involved by the tumor, remain controversial. It can be assumed that when such involvement is not extensive, resection of the tumor and the involved venous segment, followed by venous reconstruction will extend the potential benefits of this resection to a larger number of patients. The further hypothesis of this study is that whenever involvement of the vein by the tumor does not exceed 2 cm in length, this involvement is more likely due to the location of the tumor being close to the vein rather than because of its aggressive biological behavior. Consequently, in these instances a pancreatectomy with a resection of the involved segment of portal or superior mesenteric vein for pancreatic cancer is indicated, as it will yield results that are superposable to those of a pancreatectomy for cancer without vascular involvement. Twenty-nine patients with carcinoma of the pancreas involving the portal or superior mesenteric vein over a length of 2 cm or less underwent a macroscopically curative resection of the pancreas en bloc with the involved segment of the vein. The venous reconstruction procedures included a tangential resection/lateral suture in 15 cases, a resection/end-to-end anastomosis in 11, and a resection/patch closure in 3. Postoperative mortality was 3.4%; morbidity was 21%. Local recurrence was 14%. Cumulative (standard error) survival rate was 17% (9%) at 3 years. A pancreatectomy combined with a resection of the portal or superior mesenteric vein for cancer with venous involvement not exceeding 2 cm is indicated in order to extend the potential benefits of a curative resection.
Potentials of RF/FSO Communication in UAS Operations
NASA Astrophysics Data System (ADS)
Griethe, Wolfgang; Heine, Frank
2013-08-01
Free Space Optical Communications (FSOC) has gained particular attention during the past few years and is progressing continuously. With the successful in-orbit verification of a Laser Communication Terminal (LCT), the coherent homodyne BPSK scheme has advanced to a standard for FSOC and is becoming increasingly prevalent. The LCT is presently operated on satellites in Low Earth Orbit (LEO). In the near future, the LCT will be operated in Geosynchronous Orbit (GEO) onboard the ALPHASAT-TDP and the European Data Relay System (EDRS). In other words, the LCT has reached a point of maturity that allows its practical application. With the existence of such space assets, the time has come for utilization beyond that of optical Inter-Satellite Links (ISL). Aeronautical applications, for instance High Altitude Long Endurance (HALE) or Medium Altitude Long Endurance (MALE) Unmanned Aerial Systems (UAS), have to be addressed. This is driven by an extremely high demand for bandwidth. Driving factors and advantages of FSOC in HALE/MALE UAS missions are highlighted. Numerous practice-related issues are described concerning the space segment, the aeronautical segment and the ground segment. The advantages for UAS missions resulting from using FSOC exclusively for wideband transmission of sensor data, while vehicle Command & Control (C2) is maintained, as before, via RF communication, are described. Moreover, the paper discusses FSOC as an enabler for the integration of air- and space-based wideband Intelligence, Surveillance & Reconnaissance (ISR) systems into existing military command and control networks. From the given information it can be concluded that FSOC contributes to the future increase of air and space power.
Vannest, Jennifer J.; Karunanayaka, Prasanna R.; Altaye, Mekibib; Schmithorst, Vincent J.; Plante, Elena M.; Eaton, Kenneth J.; Rasmussen, Jerod M.; Holland, Scott K.
2009-01-01
Purpose To use functional MRI methods to visualize a network of auditory and language-processing brain regions associated with processing an aurally-presented story. We compare a passive listening (PL) story paradigm to an active-response (AR) version including on-line performance monitoring and a sparse acquisition technique. Materials/Methods Twenty children (ages 11−13) completed PL and AR story processing tasks. The PL version presented alternating 30-second blocks of stories and tones; the AR version presented story segments, comprehension questions, and 5s tone sequences, with fMRI acquisitions between stimuli. fMRI data were analyzed using a general linear model approach and paired t-tests identifying significant group activation. Results Both tasks produced activation in the primary auditory cortex, superior temporal gyrus bilaterally, and left inferior frontal gyrus. The AR task demonstrated more extensive activation, including dorsolateral prefrontal cortex and anterior/posterior cingulate cortex. Comparison of effect size in each paradigm showed a larger effect for the AR paradigm in a left inferior frontal ROI. Conclusion Activation patterns for story processing in children are similar in passive listening and active-response tasks. Increases in extent and magnitude of activation in the AR task are likely associated with memory and attention resources engaged across acquisition intervals. PMID:19306445
Vannest, Jennifer J; Karunanayaka, Prasanna R; Altaye, Mekibib; Schmithorst, Vincent J; Plante, Elena M; Eaton, Kenneth J; Rasmussen, Jerod M; Holland, Scott K
2009-04-01
To use functional MRI (fMRI) methods to visualize a network of auditory and language-processing brain regions associated with processing an aurally-presented story. We compare a passive listening (PL) story paradigm to an active-response (AR) version including online performance monitoring and a sparse acquisition technique. Twenty children (ages 11-13 years) completed PL and AR story processing tasks. The PL version presented alternating 30-second blocks of stories and tones; the AR version presented story segments, comprehension questions, and 5-second tone sequences, with fMRI acquisitions between stimuli. fMRI data was analyzed using a general linear model approach and paired t-test identifying significant group activation. Both tasks showed activation in the primary auditory cortex, superior temporal gyrus bilaterally, and left inferior frontal gyrus (IFG). The AR task demonstrated more extensive activation, including the dorsolateral prefrontal cortex and anterior/posterior cingulate cortex. Comparison of effect size in each paradigm showed a larger effect for the AR paradigm in a left inferior frontal region-of-interest (ROI). Activation patterns for story processing in children are similar in PL and AR tasks. Increases in extent and magnitude of activation in the AR task are likely associated with memory and attention resources engaged across acquisition intervals.
Kohlberg, Gavriel D.; Mancuso, Dean M.; Chari, Divya A.; Lalwani, Anil K.
2015-01-01
Objective. Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Methods. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Results. Compared to the original song, modified versions containing only 1–3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Conclusions. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience. PMID:26543322
Brouwer Award Lecture: Anelastic tides of close-in satellites and exoplanets
NASA Astrophysics Data System (ADS)
Ferraz-Mello, Sylvio
2016-05-01
This lecture reviews a new theory of the anelastic tides of celestial bodies in which the deformation of the body is the result of a Newtonian creep inversely proportional to the viscosity of the body and, along each radius, directly proportional to the distance from the actual surface of the body to the equilibrium. The first version of the theory (AAS/DDA 2012; CeMDA 2013) was restricted to homogeneous bodies. It was applied to many different bodies such as the Moon, Mercury, super-Earths and hot Jupiters. An improved version (AAS/DDA 2014) also included the loss of angular momentum due to stellar winds and was applied to the study of the rotational evolution of active stars hosting massive companions. A more recent version (Folonier et al. AAS/DDA 2013; DPS 2015) allowed for the consideration of layered structures and was applied to Titan and Mercury. The resulting anelastic tides depend on the nature of the considered body. In the case of low-viscosity bodies (high relaxation factor), such as gaseous planets and stars, the results are nearly the same as in Darwin's theory. For instance, in these cases the dissipation grows proportionally to the tidal frequency. In the case of high-viscosity rocky satellites and planets (low relaxation factor), the results are structurally different: the dissipation varies with the tidal frequency following an inverse power law and the rotation may be driven to several attractors whose frequencies are 1/2, 1, 3/2, 2, 5/2,… times the orbital mean motion, even when no permanent triaxiality exists.
DELPHI: An introduction to output layout and data content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, C.F.
1994-08-16
DELPHI was the data summary and interpretation code used by gas diagnostics personnel during the period from 1968 through 1986. It was written by Floyd Momyer, and went through several revisions during its period of use. Described here is the final version, which provided the most extensive set of summary tables. Earlier versions of the code lacked some of the capabilities of the final version, but what they did include was of substantially the same format. DELPHI was run against most available input decks in the mid 1980s. Microfiche and hardcopy output were generated. Both now reside in our archives. These reruns used modified input decks, which may not have had the proper "trigger" to instruct DELPHI to output some tables. These tables could, therefore, be missing from a printout even though the necessary data was present. Also, modifications to DELPHI did, in some instances, eliminate DELPHI's capability to correctly output some of the earlier optional tables. This document is intended to complement the archived printout, and to provide enough insight so that someone unfamiliar with the techniques of Gas Diagnostics can retrieve the results at some future date. DELPHI last ran on the CDC-7600 machines, and was not converted to run on the Crays when the CDC-7600s were decommissioned. DELPHI accepted data from various analytical systems, set up data summary tables, and combined preshot tracer and detector data with these results to calculate the total production of measured species and the indicated fission yields and detector conversions.
Using pattern based layout comparison for a quick analysis of design changes
NASA Astrophysics Data System (ADS)
Huang, Lucas; Yang, Legender; Kan, Huan; Zou, Elain; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang
2018-03-01
A design usually goes through several versions until achieving the most successful one. The changes between versions are not a complete substitution but a continual improvement, either fixing the known issues of the prior versions (engineering change order) or substituting a more optimized design for a portion of the layout. On the manufacturing side, process engineers care more about design pattern changes, because any new pattern occurrence may be a killer of the yield. An effective and efficient way to narrow down the diagnosis scope appeals to the engineers. What is the best approach to comparing two layouts? A direct overlay of two layouts may not always work: even though most of the design instances are kept in the layout from version to version, the actual placements may differ. An alternative approach, pattern-based layout comparison, comes into play. By expanding this application, it becomes possible to transfer the learning from one cycle to another and accelerate the process of failure analysis. This paper presents a solution to compare two layouts by using Calibre DRC and Pattern Matching. The key step in this flow is layout decomposition. In theory, with a fixed pattern size, a layout can always be decomposed into a limited number of patterns by moving the pattern center around the layout; the number is limited but may be huge if the layout is not processed smartly! A mathematical answer is not what we are looking for; an engineering solution is more desirable. Layouts must be decomposed into patterns with physical meaning in a smart way. When a layout is decomposed and its patterns are classified, a pattern library with unique patterns inside is created for that layout. After individual pattern libraries for each layout are created, the pattern comparison utility provided by Calibre Pattern Matching is run to compare the libraries, and the patterns unique to each layout are reported. This paper illustrates this flow in detail and demonstrates the advantage of combining Calibre DRC and Calibre Pattern Matching.
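As a rough illustration of the pattern-library idea described above, the following Python sketch builds a pattern library for each layout and reports the patterns unique to each version. The binary raster representation, the fixed non-overlapping window size, and the byte-hash pattern keys are all assumptions made here for brevity, not Calibre's actual decomposition.

```python
import numpy as np

def pattern_library(layout, win=8):
    """Decompose a binary layout raster into a set of unique win x win tile patterns."""
    lib = set()
    H, W = layout.shape
    for y in range(0, H - win + 1, win):
        for x in range(0, W - win + 1, win):
            lib.add(layout[y:y + win, x:x + win].tobytes())   # hashable pattern key
    return lib

def compare_layouts(layout_a, layout_b, win=8):
    """Patterns unique to each layout version; these are the candidates to inspect."""
    lib_a, lib_b = pattern_library(layout_a, win), pattern_library(layout_b, win)
    return lib_a - lib_b, lib_b - lib_a
```

A production flow would slide or center the window around physically meaningful features rather than tiling the raster, which is exactly the "smart" decomposition the paper argues for.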
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Fyodorov, Yan V.; Le Doussal, Pierre
2018-02-01
We address systematically an apparent nonphysical behavior of the free-energy moment generating function for several instances of the logarithmically correlated models: the fractional Brownian motion with Hurst index H =0 (fBm0) (and its bridge version), a one-dimensional model appearing in decaying Burgers turbulence with log-correlated initial conditions and, finally, the two-dimensional log-correlated random-energy model (logREM) introduced in Cao et al. [Phys. Rev. Lett. 118, 090601 (2017), 10.1103/PhysRevLett.118.090601] based on the two-dimensional Gaussian free field with background charges and directly related to the Liouville field theory. All these models share anomalously large fluctuations of the associated free energy, with a variance proportional to the log of the system size. We argue that a seemingly nonphysical vanishing of the moment generating function for some values of parameters is related to the termination point transition (i.e., prefreezing). We study the associated universal log corrections in the frozen phase, both for logREMs and for the standard REM, filling a gap in the literature. For the above mentioned integrable instances of logREMs, we predict the nontrivial free-energy cumulants describing non-Gaussian fluctuations on the top of the Gaussian with extensive variance. Some of the predictions are tested numerically.
Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Lopez, Nicolas
This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.
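A minimal sketch under a toy cost model: the Genetic Algorithm below evolves component counts while a Monte Carlo loop estimates expected cost over random renewable-output scenarios. Component names, ratings, costs, and the shortfall penalty are placeholders, not values from the dissertation.

```python
import random

# hypothetical per-unit capacities and costs; real values come from the case studies
COMPONENTS = {"solar": (5.0, 800), "wind": (10.0, 1500),
              "diesel": (20.0, 400), "battery": (8.0, 600)}  # (kW per unit, cost per unit)

def monte_carlo_fitness(counts, n_scenarios=200, demand=100.0, penalty=50.0):
    """Expected cost: capital cost plus a penalty for unmet demand over random scenarios."""
    capital = sum(counts[k] * COMPONENTS[k][1] for k in counts)
    shortfall = 0.0
    for _ in range(n_scenarios):
        # renewables deliver a random fraction of their rating in each scenario
        supply = (counts["solar"] * COMPONENTS["solar"][0] * random.random()
                  + counts["wind"] * COMPONENTS["wind"][0] * random.random()
                  + counts["diesel"] * COMPONENTS["diesel"][0]
                  + counts["battery"] * COMPONENTS["battery"][0] * 0.5)
        shortfall += max(0.0, demand - supply)
    return capital + penalty * shortfall / n_scenarios

def genetic_algorithm(pop_size=30, generations=50, max_units=20):
    keys = list(COMPONENTS)
    pop = [{k: random.randint(0, max_units) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=monte_carlo_fitness)            # lower expected cost is better
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = {k: random.choice((a[k], b[k])) for k in keys}   # crossover
            if random.random() < 0.2:                                # mutation
                child[random.choice(keys)] = random.randint(0, max_units)
            children.append(child)
        pop = survivors + children
    return min(pop, key=monte_carlo_fitness)
```

The multi-objective variant would replace the single scalar fitness with non-dominated sorting over (cost, GWP) pairs; that step is omitted here.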
Maximal tree size of few-qubit states
NASA Astrophysics Data System (ADS)
Le, Huy Nguyen; Cai, Yu; Wu, Xingyao; Rabelo, Rafael; Scarani, Valerio
2014-06-01
Tree size (TS) is an interesting measure of complexity for multiqubit states: not only is it in principle computable, but one can obtain lower bounds for it. In this way, it has been possible to identify families of states whose complexity scales superpolynomially in the number of qubits. With the goal of progressing in the systematic study of the mathematical property of TS, in this work we characterize the tree size of pure states for the case where the number of qubits is small, namely, 3 or 4. The study of three qubits does not hold great surprises, insofar as the structure of entanglement is rather simple; the maximal TS is found to be 8, reached for instance by the |W> state. The study of four qubits yields several insights: in particular, the most economic description of a state is found not to be recursive. The maximal TS is found to be 16, reached for instance by a state called |Ψ(4)> which was already discussed in the context of four-photon down-conversion experiments. We also find that the states with maximal tree size form a set of zero measure: a smoothed version of tree size over a neighborhood of a state (ɛ-TS) reduces the maximal values to 6 and 14, respectively. Finally, we introduce a notion of tree size for mixed states and discuss it for a one-parameter family of states.
Multi-instance learning based on instance consistency for image retrieval
NASA Astrophysics Data System (ADS)
Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie
2017-07-01
Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches cannot select positive instances correctly from positive bags which may result in a low accuracy. In this paper, we propose a new image retrieval approach called multiple instance learning based on instance-consistency (MILIC) to mitigate such issue. First, we select potential positive instances effectively in each positive bag by ranking instance-consistency (IC) values of instances. Then, we design a feature representation scheme, which can represent the relationship among bags and instances, based on potential positive instances to convert a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.
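A minimal sketch of the MILIC-style flow under simplifying assumptions: the instance-consistency score is approximated here by mean similarity to instances of the other positive bags (a stand-in for the paper's IC definition), one prototype instance is selected per positive bag, bags are re-represented by their distances to the prototypes, and a standard SVM is trained. Function names and the similarity measure are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def instance_consistency(bag, other_pos_bags):
    """Score each instance of a bag by its mean similarity to the other positive bags."""
    others = np.vstack(other_pos_bags)
    d = np.linalg.norm(bag[:, None, :] - others[None, :, :], axis=2)
    return (-d).mean(axis=1)                     # higher score = more consistent

def bag_to_vector(bag, prototypes):
    """Represent a bag by its minimal distance to each selected prototype instance."""
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)

def train_milic(pos_bags, neg_bags):
    # 1) pick the most consistent instance from every positive bag as a prototype
    protos = []
    for i, bag in enumerate(pos_bags):
        others = [b for j, b in enumerate(pos_bags) if j != i]
        protos.append(bag[np.argmax(instance_consistency(bag, others))])
    prototypes = np.array(protos)
    # 2) convert every bag to a single feature vector and train a standard SVM
    X = np.array([bag_to_vector(b, prototypes) for b in pos_bags + neg_bags])
    y = np.array([1] * len(pos_bags) + [0] * len(neg_bags))
    return SVC(kernel="rbf").fit(X, y), prototypes
```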
Mated Fingerprint Card Pairs (Volumes 1-5)
National Institute of Standards and Technology Data Gateway
NIST Mated Fingerprint Card Pairs (Volumes 1-5) (Web, free access) The NIST database of mated fingerprint card pairs (Special Database 9) consists of multiple volumes. Currently five volumes have been released. Each volume will be a 3-disk set with each CD-ROM containing 90 mated card pairs of segmented 8-bit gray scale fingerprint images (900 fingerprint image pairs per CD-ROM). A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.
Performance study of SKIROC2/A ASIC for ILD Si-W ECAL
NASA Astrophysics Data System (ADS)
Suehara, T.; Sekiya, I.; Callier, S.; Balagura, V.; Boudry, V.; Brient, J.-C.; de la Taille, C.; Kawagoe, K.; Irles, A.; Magniette, F.; Nanni, J.; Pöschl, R.; Yoshioka, T.
2018-03-01
The ILD Si-W ECAL is a sampling calorimeter with tungsten absorber and highly segmented silicon layers for the International Large Detector (ILD), one of the two detector concepts for the International Linear Collider. SKIROC2 is an ASIC for the ILD Si-W ECAL. To investigate the issues found in prototype detectors, we prepared dedicated ASIC evaluation boards with either BGA sockets or directly soldered SKIROC2. We report a performance study with the evaluation boards, including signal-to-noise ratio and TDC performance, comparing SKIROC2 with an updated version, SKIROC2A.
1993-02-01
Segment: BX General Shipment Information. Level: A; Sequence: 30; Usage: M; Max Use: 1. Purpose: To transmit identification numbers and other basic shipment data, including the official code assigned to a city or point (for ratemaking purposes) within a city. (Department of Defense Government Bill of Lading EDI specification; remainder of the record is garbled page-header text.)
International Space Station (ISS)
2006-09-13
These six STS 117 astronauts, assigned to launch aboard the Space Shuttle Atlantis, are (from the left) astronauts James F. Reilly II, Steven R. Swanson, mission specialists; Frederick W. (Rick) Sturckow, commander; Lee J. Archambault, pilot; and Patrick G. Forrester and John D. (Danny) Olivas, mission specialists. The crewmembers are attired in training versions of their shuttle launch and entry suits. Mission objectives include the addition of the second and third starboard truss segments (S3/S4) with Photovoltaic Radiator (PVR), the deployed third set of solar arrays. The P6 starboard solar array wing and one radiator are to be retracted.
NASA Astrophysics Data System (ADS)
Kinnell, P. I. A.
2014-11-01
The assumption that runoff is produced uniformly over the eroding area underlies the traditional use of the Universal Soil Loss Equation (USLE) and its revised version, the RUSLE. However, although the application of the USLE/RUSLE to segments on one-dimensional hillslopes and cells on two-dimensional hillslopes is based on the assumption that each segment or cell is spatially uniform, factors such as soil infiltration, and hence runoff production, may vary spatially between segments or cells. Results from equations that account for spatially variable runoff when applying the USLE/RUSLE and the USLE-M (the modification of the USLE/RUSLE that replaces the EI30 index by the product of EI30 and the runoff ratio) to hillslopes during erosion events where runoff is not produced uniformly were compared on a hypothetical 300 m long one-dimensional hillslope with a spatially uniform gradient. Results were produced for situations where the entire hillslope was tilled bare fallow and where half of the hillslope was cropped with corn and half was tilled bare fallow. Given that the erosive stress within a segment or cell depends on the volume of surface water flowing through the segment or cell, soil loss can be expected to increase not only with distance from the point where runoff begins but also directly with runoff when it varies about the average for the slope containing the segment or cell. The latter effect was achieved when soil loss was predicted using the USLE-M, but not when the USLE/RUSLE slope length factor was computed for a segment using an effective upslope length that varies with the ratio of the upslope runoff coefficient to the runoff coefficient for the slope down to the bottom of the segment or cell. The USLE-M also predicted deposition to occur in a segment containing corn when an area of tilled bare fallow soil existed immediately upslope of it, because the USLE-M models erosion on runoff and soil loss plots as a transport-limited system. In a comparison of the USLE-M and RUSLE2 (the form of the RUSLE that uses a daily time step in modeling rainfall erosion on one-dimensional hillslopes in the USA) on a 300 m long 9% hillslope where management changed from bare fallow to corn midway down the slope, the USLE-M predicted greater deposition in the bottom segment than RUSLE2. In addition, the USLE-M approach predicted that the deposition that occurred when the slope gradient changed from 9% to 4.5% midway down the slope was much greater than the amount predicted using RUSLE2.
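A toy numerical sketch of the segment-wise USLE-M style calculation described above, assuming per-segment rainfall and runoff coefficients and placeholder K, C, P and slope-length handling; it only illustrates how the runoff ratio to the bottom of each segment scales the EI30 term, not the actual USLE-M parameterisation.

```python
def usle_m_segment_loss(segments, EI30, K=0.03, C=1.0, P=1.0, S=1.0):
    """Event soil loss per segment down a 1-D hillslope.

    Each segment is (length_m, rain_mm, runoff_coefficient). The USLE-M style
    erosivity term is (runoff ratio to the bottom of the segment) * EI30, where
    runoff is accumulated from all upslope segments.
    """
    losses, x_top, rain_sum, runoff_sum = [], 0.0, 0.0, 0.0
    for length, rain, rc in segments:
        x_bottom = x_top + length
        rain_sum += rain * length
        runoff_sum += rain * rc * length
        q_ratio = runoff_sum / rain_sum              # runoff ratio to segment bottom
        L = (x_bottom / 22.1) ** 0.5                 # simple slope-length factor (assumed)
        losses.append(q_ratio * EI30 * K * L * S * C * P)
        x_top = x_bottom
    return losses

# e.g. bare fallow upslope (high runoff coefficient) above corn (lower coefficient)
print(usle_m_segment_loss([(150, 40, 0.6), (150, 40, 0.3)], EI30=500.0))
```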
NASA Astrophysics Data System (ADS)
Nennewitz, Markus; Thiede, Rasmus; Bookhagen, Bodo
2017-04-01
The location and magnitude of the active deformation of the Himalaya have been debated for decades, but several aspects remain unknown. For instance, the spatial distribution of the deformation and shortening that ultimately sustains Himalayan topography, and the activity of major fault zones, are not well constrained either for the present day or for Holocene and Quaternary timescales. Because of these weakly constrained factors, many previous studies have assumed that the structural setting and fault geometry of the Himalaya are continuous along strike and similar to the fault geometries of central Nepal. Thus, the sub-surface structural information from central Nepal has been projected along strike, but has not been verified at other locations. In this study we use digital topographic analysis of the NW Himalaya. We obtained catchment-averaged, normalized steepness indices of longitudinal river profiles for drainage basins ranging between 5 and 250 km2 and analyzed the relative change in their spatial distribution both along and across strike. More specifically, we analyzed the relative changes of basins located in the footwall and in the hanging wall of major fault zones. Under the assumption that along-strike changes in the normalized steepness index are primarily controlled by the activity of thrust segments, we obtained new insights into the tectonic deformation and uplift pattern. Our results show three different segments along the northwest Himalaya, located, from east to west, in the Garhwal, Chamba and Kashmir Himalaya. These have formed independent orogenic segments characterized by significant changes in their structural architecture and fault geometry. Moreover, their topographic changes indicate strong variations in fault displacement rates across first-order fault zones. With the help of along- and across-strike profiles, we were able to identify fault segments of pronounced fault activity across the MFT, MBT and PT2 and to identify the locations of along-strike changes, which are interpreted as their segment boundaries. In addition to the steepness indices, we use the accumulation of elevation data as a proxy for the strain that has been accumulated over a specific distance. Thus, despite the changes in topography, structural setting and kinematics along the NW Himalaya, we observe that the topography of the orogen is in good agreement with recently measured convergence rates obtained from GPS campaigns. These data suggest reduced crustal shortening towards the northwest. Deformation in the Central Himalaya has been explained either by in-sequence thrusting along the MFT that localizes the entire Holocene shortening or by a combination of this with out-of-sequence thrusting in the vicinity of the PT2. In contrast to these conceptual models, we propose that the segmented NW Himalaya is a product of the synchronous activity of different fault segments, accommodating the crustal shortening along three independently deforming orogenic segments. The lateral discontinuity of these segments is responsible for accommodating the variation in deformation and maintaining the topography of the Himalaya in NW India.
TrawlerWeb: an online de novo motif discovery tool for next-generation sequencing datasets.
Dang, Louis T; Tondl, Markus; Chiu, Man Ho H; Revote, Jerico; Paten, Benedict; Tano, Vincent; Tokolyi, Alex; Besse, Florence; Quaife-Ryan, Greg; Cumming, Helen; Drvodelic, Mark J; Eichenlaub, Michael P; Hallab, Jeannette C; Stolper, Julian S; Rossello, Fernando J; Bogoyevitch, Marie A; Jans, David A; Nim, Hieu T; Porrello, Enzo R; Hudson, James E; Ramialison, Mirana
2018-04-05
A strong focus of the post-genomic era is mining of the non-coding regulatory genome in order to unravel the function of regulatory elements that coordinate gene expression (Nat 489:57-74, 2012; Nat 507:462-70, 2014; Nat 507:455-61, 2014; Nat 518:317-30, 2015). Whole-genome approaches based on next-generation sequencing (NGS) have provided insight into the genomic location of regulatory elements throughout different cell types, organs and organisms. These technologies are now widespread and commonly used in laboratories from various fields of research. This highlights the need for fast and user-friendly software tools dedicated to extracting cis-regulatory information contained in these regulatory regions; for instance transcription factor binding site (TFBS) composition. Ideally, such tools should not require prior programming knowledge to ensure they are accessible for all users. We present TrawlerWeb, a web-based version of the Trawler_standalone tool (Nat Methods 4:563-5, 2007; Nat Protoc 5:323-34, 2010), to allow for the identification of enriched motifs in DNA sequences obtained from next-generation sequencing experiments in order to predict their TFBS composition. TrawlerWeb is designed for online queries with standard options common to web-based motif discovery tools. In addition, TrawlerWeb provides three unique new features: 1) TrawlerWeb allows the input of BED files directly generated from NGS experiments, 2) it automatically generates an input-matched biologically relevant background, and 3) it displays resulting conservation scores for each instance of the motif found in the input sequences, which assists the researcher in prioritising the motifs to validate experimentally. Finally, to date, this web-based version of Trawler_standalone remains the fastest online de novo motif discovery tool compared to other popular web-based software, while generating predictions with high accuracy. TrawlerWeb provides users with a fast, simple and easy-to-use web interface for de novo motif discovery. This will assist in rapidly analysing NGS datasets that are now being routinely generated. TrawlerWeb is freely available and accessible at: http://trawler.erc.monash.edu.au .
Performance-based readability testing of participant information for a Phase 3 IVF trial
Knapp, Peter; Raynor, DK; Silcock, Jonathan; Parkinson, Brian
2009-01-01
Background Studies suggest that the process of patient consent to clinical trials is sub-optimal. Participant information sheets are important but can be technical and lengthy documents. Performance-based readability testing is an established means of assessing patient information, and this study aimed to test its application to participant information for a Phase 3 trial. Methods An independent groups design was used to study the User Testing performance of the participant information sheet from the Phase 3 'Poor Responders' trial of In Vitro Fertilisation (IVF). 20 members of the public were asked to read it, then find and demonstrate understanding of 21 key aspects of the trial. The participant information sheet was then re-written and re-designed, and tested on 20 members of the public, using the same 21 item questionnaire. Results The original participant information sheet performed well in some places. Participants could not find some answers and some of the found information was not understood. In total there were 30 instances of information being not found or not understood. Answers to three questions were found but not understood by many of the participants, these related to aspects of the drug timing, Follicle Stimulating Hormone and compensation. Only two of the 20 participants could find and show understanding of all question items when using the original sheet. The revised sheet performed generally better, with 17 instances of information being not found or not understood, although the number of 'not found' items increased. Half of the 20 participants could find and show understanding of all question items when using the revised sheet. When asked to compare the versions of the sheet, almost all participants preferred the revised version. Conclusion The original participant information sheet may not have enabled patients fully to give valid consent. Participants seeing the revised sheet were better able to understand the trial. Those who write information for trial participants should take account of good practice in information design. Performance-based User Testing may be a useful method to indicate strengths and weaknesses in trial information. PMID:19723335
NASA Astrophysics Data System (ADS)
Blum, Mirjam; Rozanov, Vladimir; Bracher, Astrid; Burrows, John P.
The radiative transfer model SCIATRAN [V. V. Rozanov et al., 2002; A. Rozanov et al., 2005, 2008] has been developed to model atmospheric radiative transfer. This model is mainly applied to improve the analysis of high spectrally resolved satellite data such as, for instance, data of the instrument SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY) onboard the ENVISAT satellite. Within the present study, SCIATRAN has been extended to take into account radiative processes both at the atmosphere-water interface and within the water, caused by the water itself and its constituents. Comparisons of this extended version of SCIATRAN with in-situ data and with MERIS satellite information yield first results, which will be shown. It is expected that the new version of SCIATRAN, including the coupling of atmospheric and oceanic radiative transfer, will widen the use of high spectrally resolved data by enabling new findings, such as information about ocean bio-optics and biogeochemistry, for example the biomass of different phytoplankton groups or CDOM fluorescence. In addition, it is expected that the new version improves the retrieval of atmospheric trace gases above oceanic waters. References: 1. V. V. Rozanov, M. Buchwitz, K.-U. Eichmann, R. de Beek, and J. P. Burrows. SCIATRAN - a new radiative transfer model for geophysical applications in the 240-2400 nm spectral region: the pseudo-spherical version. Adv. in Space Res. 29, 1831-1835 (2002). 2. A. Rozanov, V. V. Rozanov, M. Buchwitz, A. Kokhanovsky, and J. P. Burrows. SCIATRAN 2.0 - A new radiative transfer model for geophysical applications in the 175-2400 nm spectral region. Adv. in Space Res. 36, 1015-1019 (2005). 3. A. Rozanov. SCIATRAN 2.X: Radiative transfer model and retrieval software package. URL = http://www.iup.physik.uni-bremen.de/sciatran (2008).
The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm
NASA Astrophysics Data System (ADS)
Limbach, S.; Schömer, E.; Wernli, H.
2010-09-01
Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship these regions with intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow and 200-500 km in the across-flow direction, respectively. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid-points augmented with information about the particular events, and of the so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper-tropospheric jet streams, their preferred regions of genesis, merging, splitting, and lysis, and statistical information about their size, amplitude and lifetime. The presentation will introduce the technique, provide example visualizations of the time evolution of the identified 3-dimensional jet stream features, and present results from a first multi-month "climatology" of upper-tropospheric jets. In the future, the technique can be applied to longer datasets, for instance reanalyses and output from global climate model simulations - and provide detailed information about key characteristics of jet stream life cycles.
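A minimal sketch of the thresholded 4-D connected-component step, using SciPy labeling as a stand-in for the single-pass region growing described above; the 30 m/s threshold comes from the abstract, everything else (array layout, connectivity) is an assumption, and the merging/splitting event analysis is omitted.

```python
import numpy as np
from scipy import ndimage

def jet_stream_segments(wind_speed, threshold=30.0):
    """Label 4-D (time, level, lat, lon) connected regions of wind speed above threshold.

    Adjacency in the time dimension links the 3-D features of consecutive analysis
    times into a single space-time segment.
    """
    mask = wind_speed > threshold
    structure = ndimage.generate_binary_structure(4, 1)   # 4-D face connectivity
    labels, n_segments = ndimage.label(mask, structure=structure)
    sizes = ndimage.sum(mask, labels, index=range(1, n_segments + 1))
    return labels, sizes

# wind = np.load("wind_speed_4d.npy")     # hypothetical (t, p, lat, lon) field
# labels, sizes = jet_stream_segments(wind)
```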
Reynisson, Pall Jens; Scali, Marta; Smistad, Erik; Hofstad, Erlend Fagertun; Leira, Håkon Olav; Lindseth, Frank; Nagelhus Hernes, Toril Anita; Amundsen, Tore; Sorger, Hanne; Langø, Thomas
2015-01-01
Introduction Our motivation is increased bronchoscopic diagnostic yield and optimized preparation, for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real time video bronchoscopy. Visualization during navigated bronchoscopy, the segmentation time and methods, differs. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtain airway centerlines and surface. Method CT lung dataset of 17 patients were processed in Mimics (Materialize, Leuven, Belgium), which provides a Basic module and a Pulmonology module (beta version) (MPM), OsiriX (Pixmeo, Geneva, Switzerland) and our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated with reference segmentation. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centrelines in all datasets. We compared the different procedures by user interactions such as number of clicks needed to process the data and quantitative measures concerning the quality of the segmentation and centrelines such as total length of the branches, number of branches, number of generations, and volume of the 3D model. Results The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM is the software which demands the least number of clicks to process the data. We found that the freely available OsiriX was less accurate compared to the other methods regarding segmentation results. However, the TSF method provided results fastest regarding number of clicks. The MPM was able to find the highest number of branches and generations. On the other hand, the TSF is fully automatic and it provides the user with both segmentation of the airways and the centerlines. Reference segmentation comparison averages and standard deviations for MPM and TSF correspond to literature. Conclusion The TSF is able to segment the airways and extract the centerlines in one single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data, the segmentation is often sparse and extracting the centerline requires the use of another software system. Two of the software systems performed satisfactory with respect to be used in preprocessing CT images for navigated bronchoscopy, i.e. the TSF method and the MPM. According to reference segmentation both TSF and MPM are comparable with other segmentation methods. The level of automaticity and the resulting high number of branches plus the fact that both centerline and the surface of the airways were extracted, are requirements we considered particularly important. The in house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools to a navigation system. PMID:26657513
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakowatz, C.V. Jr.; Wahl, D.E.; Thompson, P.A.
1996-12-31
Wavefront curvature defocus effects can occur in spotlight-mode SAR imagery when reconstructed via the well-known polar formatting algorithm (PFA) under certain scenarios that include imaging at close range, use of very low center frequency, and/or imaging of very large scenes. The range migration algorithm (RMA), also known as seismic migration, was developed to accommodate these wavefront curvature effects. However, the along-track upsampling of the phase history data required by the original version of range migration can in certain instances represent a major computational burden. A more recent version of migration processing, the Frequency Domain Replication and Downsampling (FReD) algorithm, obviates the need to upsample, and is accordingly more efficient. In this paper the authors demonstrate that the combination of traditional polar formatting with appropriate space-variant post-filtering for refocus can be as efficient or even more efficient than FReD under some imaging conditions, as demonstrated by the computer-simulated results in this paper. The post-filter can be pre-calculated from a theoretical derivation of the curvature effect. The conclusion is that the new polar formatting with post-filtering algorithm (PF2) should be considered as a viable candidate for a spotlight-mode image formation processor when curvature effects are present.
Refining inflation using non-canonical scalars
NASA Astrophysics Data System (ADS)
Unnikrishnan, Sanil; Sahni, Varun; Toporensky, Aleksey
2012-08-01
This paper revisits the inflationary scenario within the framework of scalar field models possessing a non-canonical kinetic term. We obtain closed form solutions for all essential quantities associated with chaotic inflation including slow roll parameters, scalar and tensor power spectra, spectral indices, the tensor-to-scalar ratio, etc. We also examine the Hamilton-Jacobi equation and demonstrate the existence of an inflationary attractor. Our results highlight the fact that non-canonical scalars can significantly improve the viability of inflationary models. They accomplish this by decreasing the tensor-to-scalar ratio while simultaneously increasing the value of the scalar spectral index, thereby redeeming models which are incompatible with the cosmic microwave background (CMB) in their canonical version. For instance, the non-canonical version of the chaotic inflationary potential, V(φ) ∝ λφ⁴, is found to agree with observations for values of λ as large as unity! The exponential potential can also provide a reasonable fit to CMB observations. A central result of this paper is that steep potentials (such as V ∝ φ⁻ⁿ), usually associated with dark energy, can drive inflation in the non-canonical setting. Interestingly, non-canonical scalars violate the consistency relation r = -8n_T, which emerges as a smoking gun test for this class of models.
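For context, a hedged sketch of the standard canonical slow-roll numbers for the quartic potential (reduced Planck units, N e-folds before the end of inflation); these are the textbook canonical results that the CMB disfavours, not the paper's non-canonical corrections:

```latex
% Canonical slow roll for V(\phi) = \lambda \phi^4 (M_Pl = 1 assumed)
\epsilon = \tfrac{1}{2}\Big(\tfrac{V'}{V}\Big)^2 = \frac{8}{\phi^2}, \qquad
\eta = \frac{V''}{V} = \frac{12}{\phi^2}, \qquad
N \simeq \frac{\phi^2}{8}
\;\Rightarrow\;
n_s \simeq 1 - \frac{3}{N} \approx 0.95, \qquad
r \simeq \frac{16}{N} \approx 0.27 \quad (N = 60).
```

The non-canonical kinetic term studied in the paper lowers r and raises n_s relative to these canonical values, which is what restores agreement with the CMB.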
JEnsembl: a version-aware Java API to Ensembl data systems
Paterson, Trevor; Law, Andy
2012-01-01
Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789
The European Southern Observatory-MIDAS table file system
NASA Technical Reports Server (NTRS)
Peron, M.; Grosbol, P.
1992-01-01
The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least square methods and user-defined functions are available. Finally, statistical tests of hypothesis and multivariate methods can also operate on tables.
Lim, Hyun-ju; Weinheimer, Oliver; Wielpütz, Mark O.; Dinkel, Julien; Hielscher, Thomas; Gompelmann, Daniela; Kauczor, Hans-Ulrich; Heussel, Claus Peter
2016-01-01
Objectives Surgical or bronchoscopic lung volume reduction (BLVR) techniques can be beneficial for heterogeneous emphysema. Post-processing software tools for lobar emphysema quantification are useful for patient and target lobe selection, treatment planning and post-interventional follow-up. We aimed to evaluate the inter-software variability of emphysema quantification using fully automated lobar segmentation prototypes. Material and Methods 66 patients with moderate to severe COPD who underwent CT for planning of BLVR were included. Emphysema quantification was performed using 2 modified versions of in-house software (without and with prototype advanced lung vessel segmentation; programs 1 [YACTA v.2.3.0.2] and 2 [YACTA v.2.4.3.1]), as well as 1 commercial program 3 [Pulmo3D VA30A_HF2] and 1 pre-commercial prototype 4 [CT COPD ISP ver7.0]). The following parameters were computed for each segmented anatomical lung lobe and the whole lung: lobar volume (LV), mean lobar density (MLD), 15th percentile of lobar density (15th), emphysema volume (EV) and emphysema index (EI). Bland-Altman analysis (limits of agreement, LoA) and linear random effects models were used for comparison between the software. Results Segmentation using programs 1, 3 and 4 was unsuccessful in 1 (1%), 7 (10%) and 5 (7%) patients, respectively. Program 2 could analyze all datasets. The 53 patients with successful segmentation by all 4 programs were included for further analysis. For LV, program 1 and 4 showed the largest mean difference of 72 ml and the widest LoA of [-356, 499 ml] (p<0.05). Program 3 and 4 showed the largest mean difference of 4% and the widest LoA of [-7, 14%] for EI (p<0.001). Conclusions Only a single software program was able to successfully analyze all scheduled data-sets. Although mean bias of LV and EV were relatively low in lobar quantification, ranges of disagreement were substantial in both of them. For longitudinal emphysema monitoring, not only scanning protocol but also quantification software needs to be kept constant. PMID:27029047
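For orientation, a minimal sketch of the densitometric indices named in the abstract above (LV, MLD, 15th percentile, EV, EI), computed from the HU values of one segmented lobe; the -950 HU emphysema cutoff is a commonly used convention assumed here, not a value taken from the abstract.

```python
import numpy as np

def lobe_emphysema_metrics(hu_values, voxel_volume_ml, threshold_hu=-950):
    """Densitometric indices for one segmented lobe given its voxel HU values."""
    hu = np.asarray(hu_values, dtype=float)
    lobar_volume = hu.size * voxel_volume_ml                                   # LV
    emphysema_volume = np.count_nonzero(hu < threshold_hu) * voxel_volume_ml   # EV
    return {
        "LV_ml": lobar_volume,
        "MLD_HU": hu.mean(),                            # mean lobar density
        "15th_percentile_HU": np.percentile(hu, 15),    # 15th percentile of lobar density
        "EV_ml": emphysema_volume,
        "EI_percent": 100.0 * emphysema_volume / lobar_volume,
    }
```

The inter-software differences reported in the study arise upstream of this step, in how each program segments the lobes and vessels, which is why keeping the quantification software constant matters for longitudinal monitoring.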
A Theory of Relational Ageism: A Discourse Analysis of the 2015 White House Conference on Aging.
Gendron, Tracey L; Inker, Jennifer; Welleford, Elizabeth Ayn
2018-03-19
The widespread use of ageist language is generally accepted as commonplace and routine in most cultures and settings. In order to disrupt ageism, we must examine the use of ageist language and sentiments among those on the front line of providing advocacy, services, and policy for older adults; the professional culture of the aging services network. The recorded video segments from the sixth White House Conference on Aging (WHCOA) provided a unique opportunity to examine discourse used by professionals and appointed representatives in the field of aging within a professional sociocultural context. A qualitative discourse analysis of video recordings was used to analyze the 15 video fragments that comprised the recorded sessions of the 2015 WHCOA. About 26 instances were identified that captured statements expressing personal age, aging or an age-related characteristic negatively in regard to self or other (microageism), and/or statements expressing global negative opinions or beliefs about aging and older adults based on group membership (macroageism). A theoretical pathway was established that represents the dynamic process by which ageist statements were expressed and reinforced (relational ageism). Numerous instances of ageism were readily identified as part of a live streamed and publically accessible professional conference attended and presented by representatives of the aging services network. To make meaningful gains in the movement to disrupt ageism and promote optimal aging for all individuals, we must raise awareness of the relational nature, expression, and perpetuation of ageism.
Roth, Holger R; Lu, Le; Lay, Nathan; Harrison, Adam P; Farag, Amal; Sohn, Andrew; Summers, Ronald M
2018-04-01
Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset. Copyright © 2018. Published by Elsevier B.V.
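A minimal sketch of the localization-stage fusion described above, assuming the three per-view probability volumes have already been resampled onto a common 3-D grid; the pooling choice (voxel-wise max) and the probability threshold are illustrative, not the paper's exact settings, and the HNN models themselves are not reproduced.

```python
import numpy as np

def fuse_and_bound(prob_axial, prob_sagittal, prob_coronal, prob_threshold=0.1):
    """Fuse per-view probability volumes by max-pooling, then return a loose
    bounding box that favours recall for the subsequent segmentation stage."""
    fused = np.maximum.reduce([prob_axial, prob_sagittal, prob_coronal])
    mask = fused > prob_threshold                 # low threshold keeps recall high
    idx = np.nonzero(mask)
    bbox = tuple((int(i.min()), int(i.max()) + 1) for i in idx)   # (z, y, x) ranges
    return fused, bbox
```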
Weberized Mumford-Shah Model with Bose-Einstein Photon Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Jianhong, E-mail: jhshen@math.umn.edu; Jung, Yoon-Mo
Human vision works equally well in a large dynamic range of light intensities, from only a few photons to typical midday sunlight. Contributing to such remarkable flexibility is a famous law in perceptual (both visual and aural) psychology and psychophysics known as Weber's Law. The current paper develops a new segmentation model based on the integration of Weber's Law and the celebrated Mumford-Shah segmentation model (Comm. Pure Appl. Math., vol. 42, pp. 577-685, 1989). Explained in detail are issues concerning why the classical Mumford-Shah model lacks light adaptivity, and why its 'weberized' version can more faithfully reflect human vision's superior segmentation capability in a variety of illuminance conditions from dawn to dusk. It is also argued that the popular Gaussian noise model is physically inappropriate for the weberization procedure. As a result, the intrinsic thermal noise of photon ensembles is introduced based on Bose and Einstein's distributions in quantum statistics, which turns out to be compatible with weberization both analytically and computationally. The current paper focuses on both the theory and computation of the weberized Mumford-Shah model with Bose-Einstein noise. In particular, Ambrosio-Tortorelli's Γ-convergence approximation theory is adapted (Boll. Un. Mat. Ital. B, vol. 6, pp. 105-123, 1992), and stable numerical algorithms are developed for the associated pair of nonlinear Euler-Lagrange PDEs.
van den Wollenberg, D J M; van den Hengel, S K; Dautzenberg, I J C; Cramer, S J; Kranenburg, O; Hoeben, R C
2008-12-01
Human Orthoreovirus Type 3 Dearing is not pathogenic to humans and has been evaluated clinically as an oncolytic agent. Its transduction efficiency and the tumor cell selectivity may be enhanced by incorporating ligands for alternative receptors. However, the genetic modification of reoviruses has been difficult, and genetic targeting of reoviruses has not been reported so far. Here we describe a technique for generating genetically targeted reoviruses. The propagation of wild-type reoviruses on cells expressing a modified sigma 1-encoding segment embedded in a conventional RNA polymerase II transcript leads to substitution of the wild-type genome segment by the modified version. This technique was used for generating reoviruses that are genetically targeted to an artificial receptor expressed on U118MG cells. These cells lack the junction adhesion molecule-1 and therefore resist infection by wild-type reoviruses. The targeted reoviruses were engineered to carry the ligand for this receptor at the C terminus of the sigma 1 spike protein. This demonstrates that the C terminus of the sigma 1 protein is a suitable locale for the insertion of oligopeptide ligands and that targeting of reoviruses is feasible. The genetically targeted viruses can be propagated using the modified U118MG cells as helper cells. This technique may be applicable for the improvement of human reoviruses as oncolytic agents.
Motor imagery of body movements that can't be executed on Earth.
Kalicinski, Michael; Bock, Otmar; Schott, Nadja
2017-01-01
Before participating in a space mission, astronauts undergo parabolic-flight and underwater training to facilitate their subsequent adaptation to weightlessness. A quick, simple and inexpensive alternative could be training by motor imagery (MI). An important prerequisite for this training approach is that humans are able to imagine movements which are unfamiliar, since they can't be performed in the presence of gravity. Our study addresses this prerequisite. 68 young subjects completed a modified version of the CMI test (Schott, 2013). With eyes closed, subjects were asked to imagine moving their body according to six consecutive verbal instructions. After the sixth instruction, subjects opened their eyes and arranged the segments of a manikin into the assumed final body configuration. In a first condition, subjects received instructions only for moving individual body segments (CMIground). In a second condition, subjects received instructions for moving body segments or their full body (CMIfloat). After each condition, subjects were asked to rate their subjective visual and kinesthetic vividness of MI. Condition differences emerged for the CMI scores and for the duration of correct trials with better performance in the CMIground condition. Condition differences were also represented for the subjective MI performance. Motor imagery is possible but degraded when subjects are asked to imagine body movements while floating. This confirms that preflight training of MI while floating might be beneficial for astronauts' mission performance.
Image based Monte Carlo Modeling for Computational Phantom
NASA Astrophysics Data System (ADS)
Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican
2014-06-01
The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues for the radiation protection and radiotherapy fields, and is helpful for avoiding unnecessary radiation and decreasing harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body for radiation protection.
Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.
2016-01-01
This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
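The hillslope-routing step described above amounts to a convolution of runoff with a gamma-distribution unit hydrograph. The snippet below is a minimal sketch under assumed parameter names and discretization, not mizuRoute source code.

```python
# Hedged sketch: hillslope routing with a gamma-distribution unit hydrograph.
import numpy as np
from scipy.stats import gamma

def gamma_unit_hydrograph(shape, timescale, n_steps, dt=1.0):
    """Discrete unit hydrograph: gamma pdf integrated over each time step."""
    edges = np.arange(n_steps + 1) * dt
    cdf = gamma.cdf(edges, a=shape, scale=timescale)
    uh = np.diff(cdf)
    return uh / uh.sum()              # normalize so runoff volume is conserved

def route_hillslope(runoff, shape=2.5, timescale=1.0, dt=1.0):
    """Convolve a runoff time series (one value per step) with the unit hydrograph."""
    uh = gamma_unit_hydrograph(shape, timescale, n_steps=len(runoff), dt=dt)
    return np.convolve(runoff, uh)[: len(runoff)]

# Example: a single runoff pulse is delayed and attenuated into a hydrograph
# before being passed to the channel-routing step (KWT or IRF-UH).
outflow = route_hillslope(np.r_[10.0, np.zeros(23)])
```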
Low-rank Atlas Image Analyses in the Presence of Pathologies
Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen
2015-01-01
We present a common framework, for registering images to an atlas and for forming an unbiased atlas, that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a “healthy” version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas, to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration, the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration’s atlas is comprised of low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
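The low-rank-plus-sparse decomposition at the core of this framework can be sketched as below. This is an assumed, simplified alternating-thresholding variant (in the spirit of robust PCA), not the authors' implementation; D stacks the vectorized, atlas-aligned images column-wise, and the thresholds are illustrative.

```python
# Hedged sketch: low-rank ("healthy") plus sparse ("pathology") decomposition.
import numpy as np

def low_rank_plus_sparse(D, lam=0.1, tau=1.0, n_iter=25):
    """D: (n_pixels, n_images) matrix of aligned images."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update: singular-value soft-thresholding of D - S.
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(sig - tau, 0.0)) @ Vt
        # Sparse update: element-wise soft-thresholding of the residual,
        # interpreted as the pathology component of each image.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S   # low-rank "healthy" images and sparse pathology estimates
```

In the registration loop described above, the columns of L would be fed back as the images to register at the next iteration.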
Pervasive Sound Sensing: A Weakly Supervised Training Approach.
Kelly, Daniel; Caulfield, Brian
2016-01-01
Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
Treatment of cerebral vasospasm with self-expandable retrievable stents: proof of concept.
Bhogal, Pervinder; Loh, Yince; Brouwer, Patrick; Andersson, Tommy; Söderman, Michael
2017-01-01
To report our preliminary experience with the use of stent retrievers to cause vasodilation in patients with delayed cerebral vasospasm secondary to subarachnoid hemorrhage. Four patients from two different high volume neurointerventional centers developed cerebral vasospasm following subarachnoid hemorrhage. In addition to standard techniques for the treatment of cerebral vasospasm, we used commercially available stent retrievers (Solitaire and Capture stent retrievers) to treat the vasospastic segment including M2, M1, A2, and A1. We evaluated the safety of this technique, degree of vasodilation, and longevity of the effect. Stent retrievers can be used to safely achieve cerebral vasodilation in the setting of delayed cerebral vasospasm. The effect is long-lasting (>24 hours) and, in our initial experience, carries a low morbidity. We have not experienced any complications using this technique although we have noted that the radial force was not sufficient to cause vasodilation in some instances. The vasospasm did not return in the vessel segments treated with stent angioplasty in any of these cases. In two of our cases stent angioplasty resulted in the reversal of focal neurological symptoms. Stent retrievers can provide long-lasting cerebral vasodilation in patients with delayed cerebral vasospasm. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Composite quantum collision models
NASA Astrophysics Data System (ADS)
Lorenzo, Salvatore; Ciccarello, Francesco; Palma, G. Massimo
2017-09-01
A collision model (CM) is a framework to describe open quantum dynamics. In its memoryless version, it models the reservoir R as consisting of a large collection of elementary ancillas: the dynamics of the open system S results from successive collisions of S with the ancillas of R . Here, we present a general formulation of memoryless composite CMs, where S is partitioned into the very open system under study S coupled to one or more auxiliary systems {Si} . Their composite dynamics occurs through internal S -{Si} collisions interspersed with external ones involving {Si} and the reservoir R . We show that important known instances of quantum non-Markovian dynamics of S —such as the emission of an atom into a reservoir featuring a Lorentzian, or multi-Lorentzian, spectral density or a qubit subject to random telegraph noise—can be mapped on to such memoryless composite CMs.
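As a concrete (and deliberately minimal) illustration of the memoryless collision-model primitive that the composite construction builds on, the sketch below implements a single system-ancilla collision followed by discarding the ancilla; the dimensions and the joint unitary are assumptions, and the composite S-{Si} structure of the paper is not reproduced.

```python
# Hedged sketch: one step of a memoryless quantum collision model.
import numpy as np

def collide(rho_S, rho_A, U):
    """System state rho_S meets a fresh reservoir ancilla rho_A through the
    joint unitary U on H_S (x) H_A; the ancilla is then traced out."""
    dS, dA = rho_S.shape[0], rho_A.shape[0]
    rho = np.kron(rho_S, rho_A)            # joint state before the collision
    rho = U @ rho @ U.conj().T             # unitary collision
    rho = rho.reshape(dS, dA, dS, dA)      # group (system, ancilla) indices
    return np.trace(rho, axis1=1, axis2=3) # partial trace over the ancilla

# Iterating collide() with identically prepared ancillas yields the memoryless
# reduced dynamics of S; in the composite model, auxiliary systems {Si} sit
# between S and the ancillas and collide with them instead.
```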
Generalized relative entropies in the classical limit
NASA Astrophysics Data System (ADS)
Kowalski, A. M.; Martin, M. T.; Plastino, A.
2015-03-01
Our protagonists are (i) the Cressie-Read family of divergences (characterized by the parameter γ), (ii) Tsallis' generalized relative entropies (characterized by the parameter q), and, as a particular instance of both, (iii) the Kullback-Leibler (KL) relative entropy. In their normalized versions, we ascertain the equivalence between (i) and (ii). Additionally, we employ these three entropic quantifiers in order to provide a statistical investigation of the classical limit of a semiclassical model, whose properties are well known from a purely dynamic viewpoint. This places us in a good position to assess the appropriateness of our statistical quantifiers for describing involved systems. We compare the behaviour of (i), (ii), and (iii) as one proceeds towards the classical limit, and determine optimal ranges for γ and/or q. It is shown that the Tsallis quantifier performs better than the KL one for 1.5 < q < 2.5.
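For orientation, the standard (unnormalized) forms of the three quantifiers are recalled below; the paper works with their normalized versions, which are not reproduced here.

```latex
% Cressie--Read divergence family, Tsallis relative entropy, and KL divergence
% for discrete distributions P = (p_i) and Q = (q_i).
I_{\gamma}(P\|Q) = \frac{1}{\gamma(\gamma+1)}
   \sum_i p_i\!\left[\Big(\tfrac{p_i}{q_i}\Big)^{\gamma} - 1\right],
\qquad
D_{q}^{\mathrm{T}}(P\|Q) = \frac{1}{q-1}
   \sum_i p_i\!\left[\Big(\tfrac{p_i}{q_i}\Big)^{q-1} - 1\right],
\qquad
D_{\mathrm{KL}}(P\|Q) = \sum_i p_i \ln\frac{p_i}{q_i}.
% The KL divergence is recovered in the limits \gamma \to 0 and q \to 1.
```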
Taking the liability out of contaminated property transactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayers, K.W.; Taylor, R.J.
Brownfield redevelopment has been one of the hottest environmental topics for the past several years. However, brownfields are only a small segment of the contaminated property transaction market, which includes the sale of real estate, signing of leases, and mergers and acquisitions that involve the transfer of property impacted by environmental contamination. Historic site pollution creates problems due to the strict, joint and several, and retroactive liability imposed by environmental laws. In response to the interest in contaminated properties, the environmental insurance industry has developed a number of products that cap remediation costs and supplement or, in many instances, replace indemnity agreements. These insurance products allow buyers, sellers, and remediation contractors to cap remediation costs, provide long-term warranties, manage balance sheet liabilities, and even allow PRPs to walk away from site cleanup and long-term operation and maintenance obligations.
NASA Astrophysics Data System (ADS)
van Buren, Simon; Hertle, Ellen; Figueiredo, Patric; Kneer, Reinhold; Rohlfs, Wilko
2017-11-01
Frost formation is a common, often undesired phenomenon in heat exchangers such as air coolers. Thus, air coolers have to be defrosted periodically, causing significant energy consumption. For design and optimization, prediction of defrosting by a CFD tool is desired. This paper presents a one-dimensional transient model approach suitable to be used as a zero-dimensional wall-function in CFD for modeling the defrost process at the fin and tube interfaces. In accordance with previous work, a multi-stage defrost model is introduced (e.g. [1, 2]). In the first instance, the multi-stage model is implemented and validated using MATLAB. The defrost process of a one-dimensional frost segment is investigated, with fixed boundary conditions provided at the frost interfaces. The simulation results verify the plausibility of the designed model, and the evaluation of the simulated defrost process shows the expected convergent behavior of the three-stage sequence.
Space pruning monotonic search for the non-unique probe selection problem.
Pappalardo, Elisa; Ozkok, Beyza Ahlatcioglu; Pardalos, Panos M
2014-01-01
Identification of targets, generally viruses or bacteria, in a biological sample is a relevant problem in medicine. Biologists can use hybridisation experiments to determine whether a specific DNA fragment, which represents the virus, is present in a DNA solution. A probe is a segment of DNA or RNA, labelled with a radioactive isotope, dye or enzyme, used to find a specific target sequence on a DNA molecule by hybridisation. Selecting unique probes through hybridisation experiments is a difficult task, especially when targets have a high degree of similarity, for instance in the case of closely related viruses. After preliminary experiments, performed with a canonical Monte Carlo method with Heuristic Reduction (MCHR), a new combinatorial optimisation approach, the Space Pruning Monotonic Search (SPMS) method, is introduced. The experiments show that SPMS provides high quality solutions and outperforms the current state-of-the-art algorithms.
Longitudinal Receptive American Sign Language Skills Across a Diverse Deaf Student Body
2016-01-01
This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e., deaf with a disability). Years 1 through 4 included the ASL Receptive Skills Test (ASL-RST); Years 2 through 4 also included the Receptive Test of ASL (RT-ASL). Student performance for both measures positively correlated with age; deaf students with deaf parents scored higher than their same-age peers with hearing parents in some instances but not others; and those with a documented disability tended to score lower than their peers without disabilities. These results provide longitudinal findings across a diverse segment of the deaf/hard of hearing residential school population. PMID:26864689
An international standard for observation data
NASA Astrophysics Data System (ADS)
Cox, Simon
2010-05-01
A generic information model for observations and related features supports data exchange both within and between different scientific and technical communities. Observations and Measurements (O&M) formalizes a neutral terminology for observation data and metadata. It was based on a model developed for medical observations, and draws on experience from geology and mineral exploration, in-situ monitoring, remote sensing, intelligence, biodiversity studies, ocean observations and climate simulations. Hundreds of current deployments of Sensor Observation Services (SOS), covering multiple disciplines, provide validation of the O&M model. A W3C Incubator group on 'Semantic Sensor Networks' is now using O&M as one of the bases for development of a formal ontology for sensor networks. O&M defines the information describing observation acts and their results, including the following key terms: observation, result, observed-property, feature-of-interest, procedure, phenomenon-time, and result-time. The model separates the (meta-)data associated with the observation procedure, the observed feature, and the observation event itself. Observation results may take various forms, including scalar quantities, categories, vectors, grids, or any data structure required to represent the value of some property of some observed feature. O&M follows the ISO/TC 211 General Feature Model, so non-geometric properties must be associated with typed feature instances. This requires formalization of information that may be trivial when working within some earth-science sub-disciplines (e.g. temperature, pressure etc. are associated with the atmosphere or ocean, and not just a location) but is critical to cross-disciplinary applications. It also allows the same structure and terminology to be used for in-situ, ex-situ and remote sensing observations, as well as for simulations. For example: a stream level observation is an in-situ monitoring application where the feature-of-interest is a reach, the observed property is water-level, and the result is a time-series of heights; stream quality is usually determined by ex-situ observation where the feature-of-interest is a specimen that is recovered from the stream, the observed property is water-quality, and the result is a set of measures of various parameters, or an assessment derived from these; on the other hand, distribution of surface temperature of a water body is typically determined through remote-sensing, where at observation time the procedure is located distant from the feature-of-interest, and the result is an image or grid. Observations usually involve sampling of an ultimate feature-of-interest. In the environmental sciences common sampling strategies are used. Spatial sampling is classified primarily by topological dimension (point, curve, surface, volume) and is supported by standard processing and visualisation tools. Specimens are used for ex-situ processing in most disciplines. Sampling features are often part of complexes (e.g. specimens are sub-divided; specimens are retrieved from points along a transect; sections are taken across tracts), so relationships between instances must be recorded. Observational campaigns, in turn, involve collections of sampling features. The sampling feature model is a core part of O&M, and application experience has shown that describing the relationships between sampling features and observations is generally critical to successful use of the model.
O&M was developed through the Open Geospatial Consortium (OGC) as part of the Sensor Web Enablement (SWE) initiative. Other SWE standards include SensorML, SOS, and the Sensor Planning Service (SPS). The OGC O&M standard (Version 1) had two parts: part 1 describes observation events, and part 2 provides a schema for sampling features. A revised version of O&M (Version 2) is to be published in a single document as ISO 19156. O&M Version 1 included an XML encoding for data exchange, which is used as the payload for SOS responses. The new version will provide a UML model only. Since an XML encoding may be generated following a rule, such as that presented in ISO 19136 (GML 3.2), it is not included in the standard directly. O&M Version 2 thus supports multiple physical implementations and versions.
Baser, Gonen; Cengiz, Hakan; Uyar, Murat; Seker Un, Emine
2016-01-01
To investigate the effects of dehydration due to fasting on diurnal changes of intraocular pressure (IOP), anterior segment biometrics, and refraction. The intraocular pressures, anterior segment biometrics (axial length: AL; central corneal thickness: CCT; lens thickness: LT; anterior chamber depth: ACD), and refractive measurements of 30 eyes of 15 fasting healthy male volunteers were recorded at 8:00 in the morning and 17:00 in the evening in the Ramadan of 2013 and two months later. The results were compared and the statistical analyses were performed using the RStudio software, version 0.98.501. The variables were investigated using visual (histograms, probability plots) and analytical methods (Kolmogorov-Smirnov/Shapiro-Wilk test) to determine whether or not they were normally distributed. The refractive values remained stable in the fasting as well as in the control period (p = 0.384). The axial length was slightly shorter in the fasting period (p = 0.001). The corneal thickness presented a diurnal variation, in which the cornea was thinner in the evening; the difference between the fasting and control periods was not statistically significant (p = 0.359). The major differences were observed in the anterior chamber depth and IOP. The ACD was shallower in the evening during the fasting period, whereas it was deeper in the control period. The diurnal IOP difference was greater in the fasting period than in the control period. Both were statistically significant (p = 0.001). The LT remained unchanged in both periods. The major differences were the anterior chamber shallowing in the evening hours and the IOP changes. Our study supports the hypothesis that the posterior segment of the eye is largely responsible for the axial length alterations and that normovolemia has a dominant influence on diurnal IOP changes.
Khuong, Anaïs; Lecheval, Valentin; Fournier, Richard; Blanco, Stéphane; Weitz, Sébastian; Bezian, Jean-Jacques; Gautrais, Jacques
2013-01-01
The goal of this study is to describe accurately how the directional information given by support inclinations affects the ant Lasius niger motion in terms of a behavioral decision. To this end, we have tracked the spontaneous motion of 345 ants walking on a 0.5×0.5 m plane canvas, which was tilted with 5 various inclinations by [Formula: see text] rad ([Formula: see text] data points). At the population scale, support inclination favors dispersal along uphill and downhill directions. An ant's decision making process is modeled using a version of the Boltzmann Walker model, which describes an ant's random walk as a series of straight segments separated by reorientation events, and was extended to take directional influence into account. From the data segmented accordingly ([Formula: see text] segments), this extension allows us to test separately how average speed, segments lengths and reorientation decisions are affected by support inclination and current walking direction of the ant. We found that support inclination had a major effect on average speed, which appeared approximately three times slower on the [Formula: see text] incline. However, we found no effect of the walking direction on speed. Contrastingly, we found that ants tend to walk longer in the same direction when they move uphill or downhill, and also that they preferentially adopt new uphill or downhill headings at turning points. We conclude that ants continuously adapt their decision making about where to go, and how long to persist in the same direction, depending on how they are aligned with the line of maximum declivity gradient. Hence, their behavioral decision process appears to combine klinokinesis with geomenotaxis. The extended Boltzmann Walker model parameterized by these effects gives a fair account of the directional dispersal of ants on inclines.
A fully automated system for quantification of background parenchymal enhancement in breast DCE-MRI
NASA Astrophysics Data System (ADS)
Ufuk Dalmiş, Mehmet; Gubern-Mérida, Albert; Borelli, Cristina; Vreemann, Suzan; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
Background parenchymal enhancement (BPE) observed in breast dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with risk for developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (volume of the enhancing tissue), BPErf (BPEabs divided by FGT volume) and BPErb (BPEabs divided by breast volume), using different relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and we computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For evaluation of the BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast into minimal, mild, moderate and marked BPE. To measure the correlation between automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and we used Spearman's rho as a measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81 ± 0.09, which was significantly higher (p<0.001) compared to the previous method (0.76 ± 0.10). The highest correlation values between automated BPE categories and radiologists' assessments were obtained with the BPErf measurement (r=0.55, r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and risk in large screening cohorts.
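The three BPE measures defined above are straightforward to compute once the enhancement map and masks are available; the sketch below is illustrative and not the authors' pipeline, and the array names, enhancement definition, and default threshold are assumptions.

```python
# Hedged sketch: BPEabs, BPErf and BPErb from pre-/post-contrast volumes.
import numpy as np

def bpe_measures(pre, post, fgt_mask, breast_mask, threshold=0.1):
    """threshold is the relative-enhancement cut-off (e.g. 0.1 for 10%);
    fgt_mask and breast_mask are boolean volumes of the same shape."""
    rel_enh = (post - pre) / np.maximum(pre, 1e-6)     # relative enhancement
    enhancing = (rel_enh >= threshold) & fgt_mask      # enhancing FGT voxels
    bpe_abs = float(enhancing.sum())                   # enhancing volume (voxels)
    bpe_rf = bpe_abs / max(fgt_mask.sum(), 1)          # relative to FGT volume
    bpe_rb = bpe_abs / max(breast_mask.sum(), 1)       # relative to breast volume
    return bpe_abs, bpe_rf, bpe_rb
```

Sweeping threshold over a range of values reproduces the family of BPE measurements evaluated in the study.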
Malik, Salim S; Lythgoe, Mark P; McPhail, Mark; Monahan, Kevin J
2017-11-30
Around 5% of colorectal cancers are due to mutations within DNA mismatch repair genes, resulting in Lynch syndrome (LS). These mutations have a high penetrance, with early onset of colorectal cancer at a mean age of 45 years. The mainstay of surgical management is either a segmental or extensive colectomy. Currently there is no unified agreement as to which management strategy is superior, due to the limited conclusive empirical evidence available. A systematic review and meta-analysis to evaluate the risk of metachronous colorectal cancer (MCC) and mortality in LS following segmental and extensive colectomy. A systematic review of the PubMed database was conducted. Studies were included/excluded based on pre-specified criteria. To assess the risk of MCC and mortality attributed to segmental or extensive colectomies, relative risks (RR) and corresponding 95% confidence intervals (CI) were calculated. Publication bias was investigated using funnel plots. Data about mortality, as well as patient ascertainment [Amsterdam criteria (AC), germline mutation (GM)], were also extracted. Statistical analysis was conducted using the R program (version 3.2.3). The literature search identified 85 studies. After further analysis, ten studies were eligible for inclusion in data synthesis. Pooled data identified 1389 patients followed up for a mean of 100.7 months with a mean age of onset of 45.5 years. A total of 1119 patients underwent segmental colectomies, with an absolute risk of MCC in this group of 22.4% at the end of follow-up. The 270 patients who had extensive colectomies had an MCC absolute risk of 4.7% (0% in those with a panproctocolectomy). Segmental colectomy was significantly associated with an increased relative risk of MCC (RR = 5.12; 95% CI 2.88-9.11; Fig. 1), although no significant association with mortality was identified (RR = 1.65; 95% CI 0.90-3.02). There was no statistically significant difference in the risk of MCC between the AC and GM cohorts (p = 0.5, Chi-squared test). In LS, segmental colectomy results in a significantly increased risk of developing MCC. Although the choice of segmental or extensive colectomy has no statistically significant impact on mortality, the choice of initial surgical management can affect a patient's requirement for further surgery. An extensive colectomy can result in a decreased need for further surgery, reduced hospital stays and associated costs. The significant difference in the risk of MCC following segmental or extensive colectomies should be discussed with patients when deciding appropriate management. An individualised approach should be utilised, taking into account the patient's age, co-morbidities and genotype. In order to determine likely germline-specific effects, or a difference in survival, larger and more comprehensive studies are required.
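The relative-risk arithmetic reported above (point estimate plus a 95% CI on the log scale) can be sketched as follows; the counts in the example are placeholders chosen for illustration, not the pooled study data.

```python
# Hedged sketch: relative risk of MCC with a 95% confidence interval.
import math

def relative_risk(events_a, total_a, events_b, total_b):
    """RR of group A (e.g. segmental colectomy) vs group B (extensive),
    with a 95% CI computed on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) for a 2x2 table, then exponentiate the
    # symmetric log-scale interval.
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Placeholder counts only:
rr, ci = relative_risk(events_a=250, total_a=1119, events_b=13, total_b=270)
```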
Davies, Louise; Donnelly, Kyla Z; Goodman, Daisy J; Ogrinc, Greg
2016-04-01
The Standards for Quality Improvement Reporting Excellence (SQUIRE) Guideline was published in 2008 (SQUIRE 1.0) and was the first publication guideline specifically designed to advance the science of healthcare improvement. Advances in the discipline of improvement prompted us to revise it. We adopted a novel approach to the revision by asking end-users to 'road test' a draft version of SQUIRE 2.0. The aim was to determine whether they understood and implemented the guidelines as intended by the developers. Forty-four participants were assigned a manuscript section (ie, introduction, methods, results, discussion) and asked to use the draft Guidelines to guide their writing process. They indicated the text that corresponded to each SQUIRE item used and submitted it along with a confidential survey. The survey examined usability of the Guidelines using Likert-scaled questions and participants' interpretation of key concepts in SQUIRE using open-ended questions. On the submitted text, we evaluated concordance between participants' item usage/interpretation and the developers' intended application. For the survey, the Likert-scaled responses were summarised using descriptive statistics and the open-ended questions were analysed by content analysis. Consistent with the SQUIRE Guidelines' recommendation that not every item be included, less than one-third (n=14) of participants applied every item in their section in full. Of the 85 instances when an item was partially used or was omitted, only 7 (8.2%) of these instances were due to participants not understanding the item. Usage of Guideline items was highest for items most similar to standard scientific reporting (ie, 'Specific aim of the improvement' (introduction), 'Description of the improvement' (methods) and 'Implications for further studies' (discussion)) and lowest (<20% of the time) for those unique to healthcare improvement (ie, 'Assessment methods for context factors that contributed to success or failure' and 'Costs and strategic trade-offs'). Items unique to healthcare improvement, specifically 'Evolution of the improvement', 'Context elements that influenced the improvement', 'The logic on which the improvement was based', 'Process and outcome measures', demonstrated poor concordance between participants' interpretation and developers' intended application. User testing of a draft version of SQUIRE 2.0 revealed which items have poor concordance between developer intent and author usage, which will inform final editing of the Guideline and development of supporting supplementary materials. It also identified the items that require special attention when teaching about scholarly writing in healthcare improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Dynamic graph cuts for efficient inference in Markov Random Fields.
Kohli, Pushmeet; Torr, Philip H S
2007-12-01
In this paper we present a fast new fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time taken by it is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. It should be noted that the application of our algorithm is not limited to the above problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
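To make the underlying st-mincut formulation concrete, the sketch below sets up a tiny binary segmentation as a flow network and solves it with an off-the-shelf min-cut routine; it is illustrative only, and the dynamic reuse of flow between successive graphs, which is the paper's contribution, is not shown. The likelihoods and smoothness weight are made-up numbers.

```python
# Hedged sketch: binary MAP segmentation of a 1-D "image" via an s-t min-cut.
import networkx as nx

pixels = [0.9, 0.8, 0.2, 0.1]             # per-pixel foreground likelihoods
G = nx.DiGraph()
for i, p in enumerate(pixels):
    G.add_edge('s', i, capacity=p)         # unary term: source side = foreground
    G.add_edge(i, 't', capacity=1.0 - p)   # unary term: sink side = background
for i in range(len(pixels) - 1):           # pairwise smoothness terms
    G.add_edge(i, i + 1, capacity=0.5)
    G.add_edge(i + 1, i, capacity=0.5)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
labels = [1 if i in source_side else 0 for i in range(len(pixels))]
```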
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp-GelApp 2.0-is freely available on both Google Play Store (for Android platform), and Apple App Store (for iOS platform). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
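At the core of MCTS-UCB is the Upper Confidence Bound rule that balances exploiting image-processing steps that have scored well against exploring rarely tried ones. The sketch below shows only that selection rule, with assumed field names; it is not the GelApp implementation.

```python
# Hedged sketch: UCB1 child selection in a Monte Carlo Tree Search over
# candidate image-processing operations.
import math

def ucb_select(children, c=math.sqrt(2)):
    """children: list of dicts with cumulative 'reward' (e.g. band-detection
    accuracy of completed pipelines) and visit count 'n'."""
    total_visits = sum(ch['n'] for ch in children)
    def ucb(ch):
        if ch['n'] == 0:
            return float('inf')              # always try unvisited actions first
        exploit = ch['reward'] / ch['n']     # mean accuracy observed so far
        explore = c * math.sqrt(math.log(total_visits) / ch['n'])
        return exploit + explore
    return max(children, key=ucb)
```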
NASA Astrophysics Data System (ADS)
Yuizono, Takaya; Munemori, Jun
GUNGEN-DXII, a new version of the GUNGEN groupware, allows the users to process hundreds of qualitative data segments (phrases and sentences) and compose a coherent piece of text containing a number of emergent ideas. The idea generation process is guided by the KJ method, a leading idea generation technique in Japan. This paper describes the functions of GUNGEN supporting three major sub-activities of idea generation, namely brainstorming, idea clustering, and text composition, and also summarizes the results obtained from a few hundred trial sessions with the old and new GUNGEN systems in terms of several qualitative and quantitative measures. The results show that the sessions with GUNGEN yield intermediate and final products at least as good as those from the original paper-and-pencil KJ method sessions, in addition to the advantages of an online system, such as distance collaboration and digital storage of the products. Moreover, the results from the new GUNGEN-DXII raise hope for enabling users to handle an extremely large number of qualitative data segments in the near future.
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
CloudNeo: a cloud pipeline for identifying patient-specific tumor neoantigens.
Bais, Preeti; Namburi, Sandeep; Gatti, Daniel M; Zhang, Xinyu; Chuang, Jeffrey H
2017-10-01
We present CloudNeo, a cloud-based computational workflow for identifying patient-specific tumor neoantigens from next generation sequencing data. Tumor-specific mutant peptides can be detected by the immune system through their interactions with the human leukocyte antigen complex, and neoantigen presence has recently been shown to correlate with anti T-cell immunity and efficacy of checkpoint inhibitor therapy. However computing capabilities to identify neoantigens from genomic sequencing data are a limiting factor for understanding their role. This challenge has grown as cancer datasets become increasingly abundant, making them cumbersome to store and analyze on local servers. Our cloud-based pipeline provides scalable computation capabilities for neoantigen identification while eliminating the need to invest in local infrastructure for data transfer, storage or compute. The pipeline is a Common Workflow Language (CWL) implementation of human leukocyte antigen (HLA) typing using Polysolver or HLAminer combined with custom scripts for mutant peptide identification and NetMHCpan for neoantigen prediction. We have demonstrated the efficacy of these pipelines on Amazon cloud instances through the Seven Bridges Genomics implementation of the NCI Cancer Genomics Cloud, which provides graphical interfaces for running and editing, infrastructure for workflow sharing and version tracking, and access to TCGA data. The CWL implementation is at: https://github.com/TheJacksonLaboratory/CloudNeo. For users who have obtained licenses for all internal software, integrated versions in CWL and on the Seven Bridges Cancer Genomics Cloud platform (https://cgc.sbgenomics.com/, recommended version) can be obtained by contacting the authors. jeff.chuang@jax.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1992-01-01
The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.
Mexican sign language recognition using normalized moments and artificial neural networks
NASA Astrophysics Data System (ADS)
Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita
2014-09-01
This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set was recorded with 24 static signs from the MSL using 5 different versions; this MSL dataset was captured using a digital camera under incoherent light conditions. Digital image processing was used to segment hand gestures, and a uniform background was selected to avoid using gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs, and an artificial neural network then performs the recognition, evaluated with 10-fold cross-validation in Weka; the best result achieved a recognition rate of 95.83%.
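The normalized geometric moments used as features above can be computed directly from the pixel grid. The sketch below is illustrative (not the authors' code); the chosen moment orders are an assumption.

```python
# Hedged sketch: scale-normalized central moments of a segmented hand image.
import numpy as np

def normalized_moments(img, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    """img: 2-D array of grayscale (or binary) pixel intensities."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar, ybar = (xs * img).sum() / m00, (ys * img).sum() / m00
    feats = []
    for p, q in orders:
        mu_pq = ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()  # central moment
        eta_pq = mu_pq / m00 ** (1 + (p + q) / 2.0)                # scale-normalized
        feats.append(eta_pq)
    return np.array(feats)   # feature vector fed to the neural-network classifier
```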
Autonomous Flying Controls Testbed
NASA Technical Reports Server (NTRS)
Motter, Mark A.
2005-01-01
The Flying Controls Testbed (FLiC) is a relatively small and inexpensive unmanned aerial vehicle developed specifically to test highly experimental flight control approaches. The most recent version of the FLiC is configured with 16 independent aileron segments, supports the implementation of C-coded experimental controllers, and is capable of fully autonomous flight from takeoff roll to landing, including flight test maneuvers. The test vehicle is basically a modified Army target drone, AN/FQM-117B, developed as part of a collaboration between the Aviation Applied Technology Directorate (AATD) at Fort Eustis,Virginia and NASA Langley Research Center. Several vehicles have been constructed and collectively have flown over 600 successful test flights.
Hardware based redundant multi-threading inside a GPU for improved reliability
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-05-05
A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
Micijevic, Esad; Morfitt, Ron
2010-01-01
Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and recently extended to Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and Multispectral Sensors (MSS) on-board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observer 1 (EO-1) Advanced Land Imager (ALI) with a capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to unique calibration and validation attributes of its on-board imaging sensors. The LDCM IAS system will have to handle a significantly larger number of detectors and the associated database than the previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as the LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.
Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data
NASA Technical Reports Server (NTRS)
Maine, Richard E.
1987-01-01
This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
Some insights on hard quadratic assignment problem instances
NASA Astrophysics Data System (ADS)
Hussin, Mohamed Saifullah
2017-11-01
Since the formal introduction of metaheuristics, a huge number of Quadratic Assignment Problem (QAP) instances have been introduced. Those instances, however, are loosely structured, which makes it difficult to perform any systematic analysis. The QAPLIB, for example, is a library that contains a large number of QAP benchmark instances of different sizes and structures, but with very limited availability for each instance type. This prevents researchers from performing organized studies on those instances, such as parameter tuning and testing. In this paper, we discuss several hard instances that have been introduced over the years, and the algorithms that have been used for solving them.
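For reference, the quantity that makes these instances hard is the QAP objective itself: the flow-weighted sum of distances induced by an assignment of facilities to locations. A minimal sketch of its evaluation follows; the random 3x3 instance is purely illustrative.

```python
# Hedged sketch: evaluating the QAP objective for a candidate assignment.
import numpy as np

def qap_cost(flow, dist, perm):
    """flow[i, j]: flow between facilities i and j;
    dist[a, b]: distance between locations a and b;
    perm[i]:    location assigned to facility i."""
    perm = np.asarray(perm)
    return float((flow * dist[np.ix_(perm, perm)]).sum())

rng = np.random.default_rng(0)
F = rng.integers(0, 10, (3, 3))
D = rng.integers(0, 10, (3, 3))
cost = qap_cost(F, D, perm=[2, 0, 1])
```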
Celik, Onur; Eskiizmir, Gorkem; Pabuscu, Yuksel; Ulkumen, Burak; Toker, Gokce Tanyeri
The exact etiology of Bell's palsy still remains obscure. The only authenticated finding is inflammation and edema of the facial nerve leading to entrapment inside the facial canal. To identify whether there is any relationship between the grade of Bell's palsy and the diameter of the facial canal, and also to study any possible anatomic predisposition of the facial canal for Bell's palsy, including parts which have not been studied before. Medical records and temporal computed tomography scans of 34 patients with Bell's palsy were utilized in this retrospective clinical study. The diameters of both facial canals (affected and unaffected) of each patient were measured at the labyrinthine segment, geniculate ganglion, tympanic segment, second genu, mastoid segment and stylomastoid foramen. The House-Brackmann (HB) grade of each patient at presentation and 3 months after treatment was obtained from the medical records. The paired-samples t-test and Wilcoxon signed-rank test were used for comparison of width between the affected and unaffected sides. The Wilcoxon signed-rank test was also used to evaluate the relationship between the diameter of the facial canal and the grade of Bell's palsy. Significance was established at a level of p=0.05 (IBM SPSS Statistics for Windows, Version 21.0; Armonk, NY, IBM Corp). Thirty-four patients (16 females, 18 males; mean age ± standard deviation, 40.3 ± 21.3) with Bell's palsy were included in the study. According to the HB facial nerve grading system, 8 patients were grade V, 6 were grade IV, 11 were grade III, 8 were grade II and 1 patient was grade I. The mean width at the labyrinthine segment of the facial canal in the affected temporal bone was significantly smaller than the equivalent in the unaffected temporal bone (p=0.00). There was no significant difference between the affected and unaffected temporal bones at the geniculate ganglion (p=0.87), tympanic segment (p=0.66), second genu (p=0.62), mastoid segment (p=0.67) and stylomastoid foramen (p=0.16). We did not find any relationship between the HB grade and the facial canal diameter at the level of the labyrinthine segment (p=0.41), tympanic segment (p=0.12), mastoid segment (p=0.14), geniculate ganglion (p=0.13) and stylomastoid foramen (p=0.44), while we found a significant relationship at the level of the second genu (p=0.02). We identified the diameter of the labyrinthine segment of the facial canal as an anatomic risk factor for Bell's palsy, and found a significant relationship between the HB grade and the facial canal diameter at the level of the second genu. Future studies (combined MRI-CT or 3D modeling) are needed to confirm this possible relationship, especially at the second genu. Thus, in the future it may be possible to selectively decompress particular segments in high-grade Bell's palsy patients. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Design of a nano-satellite demonstrator of an infrared imaging space interferometer: the HyperCube
NASA Astrophysics Data System (ADS)
Dohlen, Kjetil; Vives, Sébastien; Rakotonimbahy, Eddy; Sarkar, Tanmoy; Tasnim Ava, Tanzila; Baccichet, Nicola; Savini, Giorgio; Swinyard, Bruce
2014-07-01
The construction of a kilometer-baseline far-infrared imaging interferometer is one of the big instrumental challenges for astronomical instrumentation in the coming decades. Recent proposals such as FIRI, SPIRIT, and PFI illustrate both the science cases, from exo-planetary science to the study of interstellar media and cosmology, and ideas for the construction of such instruments, both in space and on the ground. An interesting option for an imaging multi-aperture interferometer with a km baseline is the space-based hypertelescope (HT), where a giant, sparsely populated primary mirror is constituted of several free-flying satellites, each carrying a mirror segment. All the segments point at the same object and direct their part of the pupil towards a common focus, where another satellite, containing the recombiner optics and a detector unit, is located. In Labeyrie's [1] original HT concept, perfect phasing of all the segments was assumed, allowing snap-shot imaging within a reduced field of view and coronagraphic extinction of the star. However, for a general-purpose observatory, a posteriori image reconstruction using closure phases is possible as long as the pupil is fully non-redundant. Such reconstruction allows for much reduced alignment tolerances, since optical path length control is only required to within several tens of wavelengths, rather than within a fraction of a wavelength. In this paper we present preliminary studies for such an instrument and plans for building a miniature version to be flown on a nano-satellite. A design for the recombiner optics, including a scheme for exit-pupil re-organization, is proposed, indicating that the focal-plane satellite of a km-baseline interferometer could be contained within a 1 m³ unit. Different options for the realization of a miniature version are presented, including instruments for solar observations in the visible and the thermal infrared and giant-planet observations in the visible, and an algorithm for the design of an optimal aperture layout based on least-squares minimization is described. A first experimental setup realized by master students is presented, in which a 20 mm baseline interferometer with 1 mm apertures, associated with a thermal infrared camera, pointed at the Sun. The absence of fringes in this setup is discussed in terms of spatial spectrum analysis. Finally, we discuss the satellite pointing requirements for such a miniature interferometer.
NASA Astrophysics Data System (ADS)
Lignell, David O.; Lansinger, Victoria B.; Medina, Juan; Klein, Marten; Kerstein, Alan R.; Schmidt, Heiko; Fistler, Marco; Oevermann, Michael
2018-06-01
The one-dimensional turbulence (ODT) model resolves a full range of time and length scales and is computationally efficient. ODT has been applied to a wide range of complex multi-scale flows, such as turbulent combustion. Previous ODT comparisons to experimental data have focused mainly on planar flows. Applications to cylindrical flows, such as round jets, have been based on rough analogies, e.g., by exploiting the fortuitous consistency of the similarity scalings of temporally developing planar jets and spatially developing round jets. To obtain a more systematic treatment, a new formulation of the ODT model in cylindrical and spherical coordinates is presented here. The model is written in terms of a geometric factor so that planar, cylindrical, and spherical configurations are represented in the same way. Temporal and spatial versions of the model are presented. A Lagrangian finite-volume implementation is used with a dynamically adaptive mesh. The adaptive mesh facilitates the implementation of cylindrical and spherical versions of the triplet map, which is used to model turbulent advection (eddy events) in the one-dimensional flow coordinate. In cylindrical and spherical coordinates, geometric stretching of the three triplet map images occurs due to the radial dependence of volume, with the stretching being strongest near the centerline. Two triplet map variants, TMA and TMB, are presented. In TMA, the three map images have the same volume but different radial segment lengths. In TMB, the three map images have the same radial segment lengths but different segment volumes. Cylindrical results are presented for temporal pipe flow, a spatial nonreacting jet, and a spatial reacting jet flame. These results compare very well to direct numerical simulation for the pipe flow, and to experimental data for the jets. The nonreacting jet treatment overpredicts velocity fluctuations near the centerline, due to the geometric stretching of the triplet maps and its effect on the eddy event rate distribution. TMB performs better than TMA. A hybrid planar-TMB (PTMB) approach is also presented, which further improves the results. TMA, TMB, and PTMB are nearly identical in the pipe flow, where the key dynamics occur near the wall, away from the centerline. The jet flame illustrates effects of variable density and viscosity, including dilatational effects.
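To make the triplet map concrete, the sketch below applies a discrete planar triplet map on a uniform 1D grid; this is an illustrative toy, not the authors' adaptive-mesh implementation, and the cylindrical/spherical variants (TMA/TMB) additionally weight by radial cell volume.

```python
# Illustrative planar triplet map: an "eddy" interval of the profile is replaced by
# three compressed copies of itself, with the middle copy reversed.
import numpy as np

def triplet_map(u, i0, n):
    """Apply a discrete triplet map to profile u over n cells starting at i0 (n % 3 == 0)."""
    seg = u[i0:i0 + n].copy()
    img1 = seg[0::3]          # first compressed image
    img2 = seg[1::3][::-1]    # second compressed image, reversed
    img3 = seg[2::3]          # third compressed image
    u[i0:i0 + n] = np.concatenate([img1, img2, img3])
    return u

u = np.linspace(0.0, 1.0, 12)       # a linear scalar profile
print(triplet_map(u, 0, 12))        # three stretched-and-folded copies of the ramp
```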
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, H.B. III; Rosenkrantz, D.J.; Stearns, R.E.
We study both the complexity and approximability of various graph and combinatorial problems specified using two dimensional narrow periodic specifications (see [CM93, HW92, KMW67, KO91, Or84b, Wa93]). The following two general kinds of results are presented. (1) We prove that a number of natural graph and combinatorial problems are NEXPTIME- or EXPSPACE-complete when instances are so specified; (2) In contrast, we prove that the optimization versions of several of these NEXPTIME- or EXPSPACE-complete problems have polynomial time approximation algorithms with constant performance guarantees. Moreover, some of these problems even have polynomial time approximation schemes. We also sketch how our NEXPTIME-hardness results can be used to prove analogous NEXPTIME-hardness results for problems specified using other kinds of succinct specification languages. Our results provide the first natural problems for which there is a proven exponential (and possibly doubly exponential) gap between the complexities of finding exact and approximate solutions.
NASA Astrophysics Data System (ADS)
Ercikan, Kadriye; Alper, Naim
2009-03-01
This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they may apply to the adaptation of instructional materials. The authors provide a good theoretical analysis of what took place in two translation instances and make an important contribution by taking the first step in providing a systematic discussion of the adaptation of instructional materials. Our discussion proposes procedures for adapting instructional materials and for examining the equivalence of source and target versions of adapted materials. We highlight that many of the procedures and criteria used in examining the comparability of educational tests are missing in this emerging area of research.
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves a dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
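A toy simulation helps illustrate the model class: the sketch below is a CTRW with i.i.d. exponential waiting times and a simple one-step jump memory (each jump reverses the previous one with some probability, a crude bid-ask-bounce surrogate); it is not the authors' exact process, and all parameters are illustrative.

```python
# One-step-memory CTRW toy: i.i.d. waiting times, anticorrelated successive jumps.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctrw(n_jumps=10_000, eps=0.8, tau=1.0, dx=1.0):
    times, positions = [0.0], [0.0]
    prev = rng.choice([-dx, dx])
    for _ in range(n_jumps):
        wait = rng.exponential(tau)                            # i.i.d. waiting time
        jump = -prev if rng.random() < eps else rng.choice([-dx, dx])   # memory of previous jump
        times.append(times[-1] + wait)
        positions.append(positions[-1] + jump)
        prev = jump
    return np.array(times), np.array(positions)

t, x = simulate_ctrw()
jumps = np.diff(x)
print("lag-1 jump autocorrelation:", round(np.corrcoef(jumps[:-1], jumps[1:])[0, 1], 3))
```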
NASA Technical Reports Server (NTRS)
Wynveen, R. A.; Powell, J. D.; Schubert, F. H.
1973-01-01
A successful 30-day test of a prototype Iodine Generating and Dispensing System (IGDS) is described. The IGDS was sized to iodinate the drinking water nominally consumed by six men, 4.5 to 13.6 kg (10 to 30 lb) of water per man-day with a ±10 to 20% variation, at iodine (I2) levels of 0.5 to 20 parts per million (ppm). The I2 treats reclaimed water to prevent or eliminate microorganism contamination, and treatment is maintained with a residual of I2 within the manned spacecraft water supply. A simplified version of the chlorogen water disinfection concept, developed by Life Systems for on-site generation of chlorine (Cl2), was used as a basis for IGDS development. Potable water contaminated with abundant E. coliform group organisms was treated by electrolytically generated I2 at levels of 5 to 10 ppm. In all instances, the E. coli were eliminated.
Optimal Analyses for 3×n AB Games in the Worst Case
NASA Astrophysics Data System (ADS)
Huang, Li-Te; Lin, Shun-Shii
The past decades have witnessed a growing interest in research on deductive games such as Mastermind and the AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. Traditional tree-search approaches, however, are not appropriate for solving this problem since they can only solve instances with small n; for larger values of n, a systematic approach is necessary. Therefore, intensive analyses of playing 3×n AB games optimally in the worst case are conducted, and a sophisticated method, called structural reduction, which aims at explaining the worst situation in this game, is developed in this study. Furthermore, a worthwhile formula for calculating the optimal number of guesses required for arbitrary values of n is derived and proven to be final.
NASA Astrophysics Data System (ADS)
Meringer, Markus; Gretschany, Sergei; Lichtenberg, Gunter; Hilboll, Andreas; Richter, Andreas; Burrows, John P.
2015-11-01
SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) aboard ESA's environmental satellite ENVISAT observed the Earth's atmosphere in limb, nadir, and solar/lunar occultation geometries, covering the UV-visible to NIR spectral range. Limb and nadir geometries were the main operation modes for the retrieval of scientific data. The new version 6 of ESA's level 2 processor now provides for the first time an operational algorithm to combine measurements of these two geometries in order to generate new products. As a first instance, the retrieval of tropospheric NO2 has been implemented, based on IUP-Bremen's reference algorithm. We detail the individual processing steps performed by the operational limb-nadir matching algorithm and report the results of comparisons with the scientific tropospheric NO2 products of IUP and the Tropospheric Emission Monitoring Internet Service (TEMIS).
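The essence of limb-nadir matching is to remove the limb-derived stratospheric column from the nadir total column and convert the residual with a tropospheric air-mass factor. The sketch below is a schematic of that arithmetic only, not the operational ESA level-2 code; all numbers and the function name are hypothetical.

```python
# Schematic limb-nadir matching arithmetic (illustrative values, molecules / cm^2).
def tropospheric_vcd(scd_total_nadir, vcd_strat_limb, amf_strat, amf_trop):
    scd_strat = vcd_strat_limb * amf_strat    # limb stratospheric column expressed as a slant column
    scd_trop = scd_total_nadir - scd_strat    # residual tropospheric slant column
    return scd_trop / amf_trop                # tropospheric vertical column

print(tropospheric_vcd(scd_total_nadir=8.5e15, vcd_strat_limb=3.0e15,
                       amf_strat=2.2, amf_trop=1.4))
```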
What can the programming language Rust do for astrophysics?
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, Sergi; Bolmont, Emeline
2017-06-01
The astrophysics community uses different tools for computational tasks such as complex systems simulations, radiative transfer calculations or big data. Programming languages like Fortran, C or C++ are commonly present in these tools and, generally, the language choice was made based on the need for performance. However, this comes at a cost: safety. For instance, a common source of error is the access to invalid memory regions, which produces random execution behaviors and affects the scientific interpretation of the results. In 2015, Mozilla Research released the first stable version of a new programming language named Rust. Many features make this new language attractive for the scientific community: it is open source and it guarantees memory safety while offering zero-cost abstractions. We explore the advantages and drawbacks of Rust for astrophysics by re-implementing the fundamental parts of Mercury-T, a Fortran code that simulates the dynamical and tidal evolution of multi-planet systems.
Cornelius, Carl-Peter; Giessler, Goetz Andreas; Wilde, Frank; Metzger, Marc Christian; Mast, Gerson; Probst, Florian Andreas
2016-03-01
Computer-assisted planning and intraoperative implementation using templates have become appreciated modalities in craniofacial reconstruction with fibula and DCIA flaps, owing to savings in operation time, improved accuracy of osteotomies and easy insetting. Up to now, a similar development for flaps from the subscapular vascular system, namely the lateral scapular border and tip, has not been addressed in the literature. A cohort of 12 patients who underwent mandibular (n = 10) or maxillary (n = 2) reconstruction with free flaps containing the lateral scapular border and tip, using computer-assisted planning, stereolithography (STL) models and selective laser sintered (SLS) templates for bone contouring and sub-segmentation osteotomies, was reviewed, focusing on iterations in the design of computer-generated tools and templates. The technical evolution migrated from hybrid STL models, over SLS templates for cut-out as well as sub-segmentation with a uniplanar framework, to plug-on tandem template assemblies providing biplanar access for the in toto cut-out from the posterior aspect in succession with contouring into sub-segments from the medial side. The latest design version provides proof of concept that virtual planning of bone flaps from the lateral scapular border can be successfully transferred into surgery by appropriate templates. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Comprehensive analysis of NMR data using advanced line shape fitting.
Niklasson, Markus; Otten, Renee; Ahlner, Alexandra; Andresen, Cecilia; Schlagnitweit, Judith; Petzold, Katja; Lundström, Patrik
2017-10-01
NMR spectroscopy is uniquely suited for atomic-resolution studies of biomolecules such as proteins, nucleic acids and metabolites, since detailed information on structure and dynamics is encoded in the positions and line shapes of peaks in NMR spectra. Unfortunately, accurate determination of these parameters is often complicated and time consuming, in part due to the need for different software at the various analysis steps and for validating the results. Here, we present an integrated, cross-platform and open-source software package that is significantly more versatile than the typical line shape fitting application. The software is a completely redesigned version of PINT ( https://pint-nmr.github.io/PINT/ ). It features a graphical user interface and includes functionality for peak picking, editing of peak lists and line shape fitting. In addition, the obtained peak intensities can be used directly to extract, for instance, relaxation rates, heteronuclear NOE values and exchange parameters. In contrast to most available software, the entire process from spectral visualization to preparation of publication-ready figures is done solely within PINT, and often within minutes, thereby increasing productivity for users of all experience levels. Also unique to the software are the outstanding tools for evaluating the quality of the fitting results and the extensive, but easy-to-use, customization of the fitting protocol and graphical output. In this communication, we describe the features of the new version of PINT and benchmark its performance.
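As a minimal sketch of what "line shape fitting" means here (not PINT's actual implementation), the example below fits a 1D Lorentzian to a synthetic peak to recover its position, width and intensity.

```python
# Fit a Lorentzian line shape to a noisy synthetic peak (stand-in for a spectrum slice).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, amplitude):
    return amplitude * (0.5 * fwhm) ** 2 / ((x - x0) ** 2 + (0.5 * fwhm) ** 2)

x = np.linspace(-5, 5, 400)
y = lorentzian(x, 0.3, 1.2, 10.0) + np.random.default_rng(1).normal(0, 0.2, x.size)

popt, pcov = curve_fit(lorentzian, x, y, p0=[0.0, 1.0, 5.0])
x0, fwhm, amp = popt
print(f"position={x0:.3f}, FWHM={fwhm:.3f}, intensity={amp:.2f}")
```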
Cai, Li
2015-06-01
Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
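For readers unfamiliar with the basic recursion being extended, the sketch below implements the classic Lord-Wingersky algorithm for dichotomous items at a single quadrature point; it is illustrative only, and the paper's contribution (dimension reduction for hierarchical models) is not shown.

```python
# Lord-Wingersky recursion: distribution of the summed score given item probabilities.
import numpy as np

def lord_wingersky(p):
    """p: array of item correct-response probabilities at a fixed theta.
    Returns P(summed score = s | theta) for s = 0..len(p)."""
    lik = np.array([1.0])                     # before any item, score 0 with probability 1
    for p_i in p:
        new = np.zeros(lik.size + 1)
        new[:-1] += lik * (1.0 - p_i)         # incorrect response: score unchanged
        new[1:] += lik * p_i                  # correct response: score + 1
        lik = new
    return lik

p = np.array([0.7, 0.5, 0.9, 0.3])            # hypothetical 4-item test at one theta
dist = lord_wingersky(p)
print(dist, dist.sum())                       # distribution over scores 0..4, sums to 1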
Ramirez-Gonzalez, Ricardo; Caccamo, Mario; MacLean, Daniel
2011-10-01
Scientists now use high-throughput sequencing technologies and short-read assembly methods to create draft genome assemblies in just days. Tools and pipelines like the assembler, and the workflow management environments make it easy for a non-specialist to implement complicated pipelines to produce genome assemblies and annotations very quickly. Such accessibility results in a proliferation of assemblies and associated files, often for many organisms. These assemblies get used as a working reference by lots of different workers, from a bioinformatician doing gene prediction or a bench scientist designing primers for PCR. Here we describe Gee Fu, a database tool for genomic assembly and feature data, including next-generation sequence alignments. Gee Fu is an instance of a Ruby-On-Rails web application on a feature database that provides web and console interfaces for input, visualization of feature data via AnnoJ, access to data through a web-service interface, an API for direct data access by Ruby scripts and access to feature data stored in BAM files. Gee Fu provides a platform for storing and sharing different versions of an assembly and associated features that can be accessed and updated by bench biologists and bioinformaticians in ways that are easy and useful for each. http://tinyurl.com/geefu dan.maclean@tsl.ac.uk.
Shape prior modeling using sparse representation and online dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N
2012-01-01
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the lower the run-time efficiency of SSC. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
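The sketch below conveys the online-dictionary idea using scikit-learn's MiniBatchDictionaryLearning as a stand-in for the paper's K-SVD plus block-coordinate-descent scheme: flattened shape vectors arrive in batches and the dictionary is updated incrementally rather than rebuilt. Shapes and parameters are synthetic and hypothetical.

```python
# Online dictionary update and sparse coding of shape vectors (illustrative stand-in).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_landmarks = 64                                   # hypothetical flattened 2D shape (32 points)
initial_shapes = rng.normal(size=(200, n_landmarks))

dico = MiniBatchDictionaryLearning(n_components=20, batch_size=10,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
dico.fit(initial_shapes)                           # initial dictionary from the first batch

new_shapes = rng.normal(size=(50, n_landmarks))    # new training shapes arrive later
dico.partial_fit(new_shapes)                       # update instead of rebuilding from scratch

code = dico.transform(rng.normal(size=(1, n_landmarks)))   # sparse code of a test shape
print("non-zero coefficients:", np.count_nonzero(code))
```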
Representing sentence information
NASA Astrophysics Data System (ADS)
Perkins, Walton A., III
1991-03-01
This paper describes a computer-oriented representation for sentence information. Whereas many Artificial Intelligence (AI) natural language systems start with a syntactic parse of a sentence into the linguist's components (noun, verb, adjective, preposition, etc.), we argue that it is better to parse the input sentence into 'meaning' components: attribute, attribute value, object class, object instance, and relation. AI systems need a representation that allows rapid storage and retrieval of information and convenient reasoning with that information. The attribute-of-object representation has proven useful for handling information in relational databases (which are well known for their efficiency in storage and retrieval) and for reasoning in knowledge-based systems. On the other hand, the linguist's syntactic representation of the words in sentences has not been shown to be useful for information handling and reasoning; we think it is an unnecessary and misleading intermediate form. Our sentence representation is semantics-based, in terms of attribute, attribute value, object class, object instance, and relation. Every sentence is segmented into one or more components with the form: 'attribute' of 'object' 'relation' 'attribute value'. Using only one format for all information gives the system simplicity and good performance, as a RISC architecture does for hardware. The attribute-of-object representation is not new; it is used extensively in relational databases and knowledge-based systems. However, we show that it can be used as a meaning representation for natural language sentences with minor extensions. In this paper we describe how a computer system can parse English sentences into this representation and generate English sentences from it. Much of this has been tested with a computer implementation.
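A toy data structure makes the described format concrete; the names below are hypothetical and chosen only to show the 'attribute' of 'object' 'relation' 'value' shape.

```python
# Toy attribute-of-object components and a trivial retrieval query.
from dataclasses import dataclass

@dataclass
class Component:
    attribute: str     # e.g. "color"
    obj: str           # object class or instance, e.g. "car-of-John"
    relation: str      # e.g. "=", ">", "member-of"
    value: str         # attribute value, e.g. "red"

# "The color of John's car is red."  ->  one component
facts = [Component(attribute="color", obj="car-of-John", relation="=", value="red")]

# Retrieval reduces to matching on (attribute, object):
print([f.value for f in facts if f.attribute == "color" and f.obj == "car-of-John"])
```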
A cadaveric study of the endoscopic endonasal transclival approach to the basilar artery.
Lai, Leon T; Morgan, Michael K; Chin, David C W; Snidvongs, Kornkiat; Huang, June X Z; Malek, Joanne; Lam, Matthew; McLachlan, Rohan; Harvey, Richard J
2013-04-01
The anterior transclival route to basilar artery aneurysms is not widely performed. The objective of this study was to carry out a feasibility assessment of the transclival approach to basilar aneurysms with advanced endonasal techniques on 11 cadaver heads. Clival dura was exposed from the sella to the foramen magnum between the paraclival segments of the internal carotid arteries (ICA) laterally. An inverted dural "U" flap was reflected inferiorly to expose the basilar artery. The maximal dimensions from operative measurements were recorded. Surgical manoeuvrability of multiple instruments and the proficiency to place proximal and distal vascular clips were evaluated. The mean operative depth (± standard deviation), measured from the anterior choanae to the basilar artery, was 110±6mm. The lateral corridors were limited distally by the medial pterygoids (mean width 21±2mm) and paraclival ICA (mean width 20±2mm). The mean transclival craniectomy dimensions were 19±2mm (width) and 23±4mm (height). Exposure of the basilar-anterior inferior cerebellar artery junction, superior cerebellar artery, and the basilar caput were possible in 100%, 91%, and 64% of instances, respectively. Placements of proximal and distal aneurysm clips were achieved in all instances. Based on our findings, the transclival endoscopic endonasal surgery approach provides excellent visualisation of the basilar artery. Clip application and manoeuvrability of instruments was considered adequate for basilar aneurysm surgery. Surgical skills and instrumentation to control significant haemorrhage can potentially limit the clinical applicability of this technique. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Query-by-example surgical activity detection.
Gao, Yixin; Vedula, S Swaroop; Lee, Gyusung I; Lee, Mija R; Khudanpur, Sanjeev; Hager, Gregory D
2016-06-01
Easy acquisition of surgical data opens many opportunities to automate skill evaluation and teaching. Current technology to search tool-motion data for surgical activity segments of interest is limited by the need for manual pre-processing, which can be prohibitive at scale. We developed a content-based information retrieval method, query-by-example (QBE), to automatically detect activity segments that match a query within long surgical data recordings. The example segment of interest (query) and the surgical data recording (target trial) are time series of kinematics. Our approach includes an unsupervised feature learning module using a stacked denoising autoencoder (SDAE), two scoring modules based on asymmetric subsequence dynamic time warping (AS-DTW) and template matching, respectively, and a detection module. A distance matrix of the query against the trial is computed using the SDAE features, followed by AS-DTW combined with template scoring, to generate a ranked list of candidate subsequences (substrings). To evaluate the quality of the ranked list against the ground truth, thresholding of conventional DTW distances and bipartite matching are applied. We computed the recall, precision, F1-score, and a Jaccard index-based score on three experimental setups. We evaluated our QBE method using a suture throw maneuver as the query, on two tool-motion datasets (JIGSAWS and MISTIC-SL) captured in a training laboratory. We observed recalls of 93, 90 and 87% and precisions of 93, 91, and 88% with same-surgeon-same-trial (SSST), same-surgeon-different-trial (SSDT) and different-surgeon (DS) experiment setups on JIGSAWS, and recalls of 87, 81 and 75% and precisions of 72, 61, and 53% with SSST, SSDT and DS experiment setups on MISTIC-SL, respectively. We developed a novel, content-based information retrieval method to automatically detect multiple instances of an activity within long surgical recordings. Our method demonstrated adequate recall across datasets of different complexity and experimental conditions.
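The core matching step can be illustrated with a compact subsequence-DTW sketch: the query may start and end anywhere inside the long trial, so the first row of the cost matrix is free. This toy uses raw kinematics and plain Euclidean distances; the paper additionally learns SDAE features and combines DTW with template scoring.

```python
# Subsequence DTW: locate the best-matching end position of a query inside a long trial.
import numpy as np

def subsequence_dtw(query, trial):
    """query: (m, d) array; trial: (n, d) array. Returns (best end index, matching cost)."""
    m, n = len(query), len(trial)
    dist = np.linalg.norm(query[:, None, :] - trial[None, :, :], axis=2)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = 0.0                                   # free starting point anywhere in the trial
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = dist[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[m, 1:]))                  # best matching end position in the trial
    return end, D[m, end + 1]

rng = np.random.default_rng(0)
trial = rng.normal(size=(500, 6))                   # long kinematic recording (toy data)
query = trial[200:240] + 0.05 * rng.normal(size=(40, 6))   # noisy copy of one segment
print(subsequence_dtw(query, trial))                # end index should be near 239
```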
Is computer-aided interpretation of 99Tcm-HMPAO leukocyte scans better than the naked eye?
Almer, S; Peters, A M; Ekberg, S; Franzén, L; Granerus, G; Ström, M
1995-04-01
In order to compare visual interpretation of inflammation detected by leukocyte scintigraphy with that of different computer-aided quantification methods, 34 patients (25 with ulcerative colitis and 9 with endoscopically verified non-inflamed colonic mucosa) were investigated using 99Tcm-hexamethylpropyleneamine oxime (99Tcm-HMPAO) leukocyte scintigraphy and colonoscopy with biopsies. Scintigrams were obtained 45 min and 4 h after the injection of labelled cells. Computer-generated grading of seven colon segments using four different methods was performed on each scintigram for each patient. The same segments were graded independently using a 4-point visual scale. Endoscopic and histological inflammation were scored on 4-point scales. At 45 min, a positive correlation was found between endoscopic and scan gradings in individual colon segments when using visual grading and three of the four computer-aided methods (Spearman's rs = 0.30-0.64, P < 0.001). Histological grading correlated with visual grading and with two of the four computer-aided methods at 45 min (rs = 0.42-0.54, P < 0.001). At 4 h, all grading methods correlated positively with both endoscopic and histological assessment. The correlation coefficients were, in all but one instance, highest for the visual grading. As an inter-observer comparison to assess agreement between the visual gradings of two nuclear physicians, 14 additional patients (9 ulcerative colitis, 5 infectious enterocolitis) underwent leukocyte scintigraphy. Agreement assessed using kappa statistics was 0.54 at 45 min (P < 0.001); separate data concerning the presence/absence of active inflammation showed a high kappa value (0.74, P < 0.001). Our results show that a simple scintigraphic scoring system based on assessment by the human eye reflects colonic inflammation at least as well as computer-aided grading, and that highly correlated results can be achieved between different investigators.
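The two agreement statistics used above can be computed in a few lines; the gradings below are made-up 4-point scores, not the study data.

```python
# Spearman rank correlation (scan vs. endoscopy) and Cohen's kappa (two observers).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

endoscopy  = np.array([0, 1, 2, 3, 2, 1, 0, 3, 2, 1])
scan_45min = np.array([0, 1, 1, 3, 2, 2, 0, 3, 1, 1])
rs, p = spearmanr(endoscopy, scan_45min)
print(f"Spearman rs={rs:.2f}, p={p:.3f}")

observer_a = np.array([0, 1, 2, 3, 2, 1, 0, 3, 2, 1])
observer_b = np.array([0, 1, 2, 2, 2, 1, 1, 3, 2, 1])
print("Cohen's kappa:", round(cohen_kappa_score(observer_a, observer_b), 2))
```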
Chang, Minsu; Kim, Yeongmin; Lee, Yoseph; Jeon, Doyoung
2017-07-01
This paper proposes a method of detecting the postural stability of a person wearing a lower-limb exoskeletal robot using the HAT (Head-Arm-Trunk) model. Previous studies have shown that human posture is stable when the CoM (Center of Mass) of the body is placed over the BoS (Base of Support). In the case of a lower-limb exoskeletal robot, the motion data used for the CoM estimation are acquired by sensors in the robot. The upper body, however, does not have sensors in each segment, which may introduce errors into the CoM estimation. In this paper, the HAT model, which combines the head, arms, and torso into a single segment, is considered because the motion of the head and arms is unknown due to the lack of sensors. To verify the feasibility of the HAT model, reflective markers were attached to each segment of the whole human body and exact motion data were acquired with a VICON system, in order to compare the CoM of the full-body model with that of the HAT model. The difference between the two is within 20 mm for various motions of the head and arms. Based on the HAT model, the XCoM (Extrapolated Center of Mass), which includes the velocity of the CoM, is used to predict postural stability. An experiment in which unstable postures were induced shows that the XCoM of the whole body based on the HAT model can detect the instant of postural instability 20-250 ms earlier than the CoM. This result may be used by the lower-limb exoskeletal robot to prepare actions that prevent falling.
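The sketch below shows the basic CoM/XCoM computation with a lumped HAT segment, following the standard extrapolated-center-of-mass idea XCoM = CoM + v_CoM / omega0 with omega0 = sqrt(g / l); segment masses, states, leg length and base-of-support limits are all hypothetical.

```python
# CoM and extrapolated CoM (XCoM) from lumped segment states (hypothetical values).
import numpy as np

g = 9.81
segments = {                       # mass [kg], horizontal CoM position [m], CoM velocity [m/s]
    "HAT":        (45.0, 0.02, 0.10),   # head-arms-trunk lumped into one segment
    "thigh_L":    ( 8.0, 0.00, 0.05),
    "thigh_R":    ( 8.0, 0.03, 0.05),
    "shank_foot": ( 9.0, 0.01, 0.02),
}

masses     = np.array([m for m, _, _ in segments.values()])
positions  = np.array([x for _, x, _ in segments.values()])
velocities = np.array([v for _, _, v in segments.values()])

com   = np.sum(masses * positions) / masses.sum()
v_com = np.sum(masses * velocities) / masses.sum()
xcom  = com + v_com / np.sqrt(g / 0.95)         # 0.95 m: assumed pendulum (leg) length

bos_min, bos_max = -0.05, 0.12                  # anterior-posterior base of support [m]
print(f"CoM={com:.3f} m, XCoM={xcom:.3f} m, stable={bos_min <= xcom <= bos_max}")
```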
Signature detection and matching for document image retrieval.
Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2009-11-01
As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered backgrounds is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation-, scale-, and rotation-invariant non-rigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error, and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as queries in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
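The sketch below illustrates the flavor of a multi-scale curvature cue on a closed contour: the contour is Gaussian-smoothed at several scales and the signed curvature is computed from derivatives of the smoothed coordinates. This is only a generic illustration, not the paper's saliency measure.

```python
# Signed curvature of a closed 2D contour at several smoothing scales.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_over_scales(x, y, scales=(1.0, 2.0, 4.0)):
    out = []
    for s in scales:
        xs = gaussian_filter1d(x, s, mode="wrap")
        ys = gaussian_filter1d(y, s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-12)
        out.append(kappa)
    return np.array(out)                      # shape: (n_scales, n_points)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = np.cos(t), 0.5 * np.sin(t)             # an ellipse as a toy closed contour
print(curvature_over_scales(x, y).shape)
```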
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or several machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
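A toy sketch of this idea: the hard instance 1|r_j|Lmax (release dates) is approximated by the relaxed instance with all release dates removed, which the earliest-due-date (EDD) rule solves optimally; the EDD order is then evaluated on the original instance. This is only a simplified illustration of "solve a nearby easy instance and apply its schedule", not the paper's metric or error bound; the jobs are hypothetical.

```python
# Evaluate the schedule of a relaxed (polynomially solvable) instance on the original one.
def lmax(order, jobs, use_release=True):
    t, worst = 0.0, float("-inf")
    for j in order:
        p, r, d = jobs[j]
        t = max(t, r if use_release else 0.0) + p   # wait for release (if modeled), then process
        worst = max(worst, t - d)                   # lateness of this job
    return worst

jobs = [(3, 0, 5), (2, 4, 6), (4, 1, 12), (1, 6, 8)]        # (processing, release, due date)
edd_order = sorted(range(len(jobs)), key=lambda j: jobs[j][2])   # EDD: optimal for the relaxed instance
print("Lmax on the relaxed instance:", lmax(edd_order, jobs, use_release=False))
print("Lmax when applied to the original instance:", lmax(edd_order, jobs))
```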
Registration of reactor neutrinos with the highly segmented plastic scintillator detector DANSSino
NASA Astrophysics Data System (ADS)
Belov, V.; Brudanin, V.; Danilov, M.; Egorov, V.; Fomina, M.; Kobyakin, A.; Rusinov, V.; Shirchenko, M.; Shitov, Yu; Starostin, A.; Zhitnikov, I.
2013-05-01
DANSSino is a simplified pilot version of a solid-state detector of reactor antineutrinos (it is being created within the DANSS project and will be installed close to an industrial nuclear power reactor). Numerous tests performed under a 3 GWth reactor of the Kalinin NPP, at a distance of 11 m from the core, demonstrate the operability of the chosen design and reveal the main sources of background. In spite of its small size (20 × 20 × 100 cm³), the pilot detector turned out to be quite sensitive to reactor neutrinos, detecting about 70 IBD events per day with a signal-to-background ratio of about unity.
A parallel finite-difference method for computational aerodynamics
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.
Monitoring service for the Gran Telescopio Canarias control system
NASA Astrophysics Data System (ADS)
Huertas, Manuel; Molgo, Jordi; Macías, Rosa; Ramos, Francisco
2016-07-01
The Monitoring Service collects, persists and propagates the telescope and instrument telemetry for the Gran Telescopio CANARIAS (GTC), an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). A new version of the Monitoring Service has been developed in order to improve performance, provide high availability, and guarantee fault tolerance and scalability to cope with a high volume of data. The architecture is based on a distributed in-memory data store with a Producer/Consumer pattern design. The producer generates the data samples. The consumers either persist the samples to a database for further analysis or propagate them to the consoles in the control room to monitor the state of the whole system.
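A minimal producer/consumer sketch conveys the pattern described above; a simple in-process queue stands in for the distributed in-memory data store, and all names and sample fields are made up.

```python
# Toy telemetry pipeline: a producer pushes samples, a consumer archives them.
import queue
import threading
import time

samples = queue.Queue()

def producer(n=5):
    for i in range(n):
        samples.put({"monitor": "M1.temperature", "value": 20.0 + i, "t": time.time()})
    samples.put(None)                               # sentinel: no more samples

def consumer(sink):
    while True:
        s = samples.get()
        if s is None:
            break
        sink.append(s)                              # in reality: persist to a DB or push to a console

archive = []                                        # stand-in for the persistence back end
threads = [threading.Thread(target=producer), threading.Thread(target=consumer, args=(archive,))]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(len(archive), "samples archived")
```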
Gender subordination in the vulnerability of women to domestic violence.
Macedo Piosiadlo, Laura Christina; Godoy Serpa da Fonseca, Rosa Maria
2016-06-01
To create and validate an instrument that identifies women's vulnerability to domestic violence through gender subordination indicators in the family. An instrument consisting of 61 phrases indicating gender subordination in the family was created. After assessment by ten judges, 34 phrases were validated. The approved version was administered to 321 health service users of São José dos Pinhais (State of Paraná, Brazil), along with the validated Portuguese version of the Abuse Assessment Screen (AAS), used to separate the sample into groups: the "YES" group was composed of women who had suffered violence and the "NO" group consisted of women who had not. Data were transferred into the Statistical Package for the Social Sciences (SPSS) software, version 22, and quantitatively analyzed using exploratory and factor analysis and tests of internal consistency. After analysis (Kaiser-Meyer-Olkin (KMO) statistics, Monte Carlo Principal Components Analysis (PCA) and diagram segmentation), two factors were identified: F1, consisting of phrases related to home maintenance and family structure; and F2, phrases intrinsic to the couple's relationship. For the statements that reinforce gender subordination, the means of the factors were higher for the group that answered YES to one of the violence-identifying questions. The created instrument was able to identify women who were vulnerable to domestic violence using gender subordination indicators. It could be an important tool for nurses and other professionals in multidisciplinary teams to organize and plan actions to prevent violence against women.
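As a compact illustration of the psychometric steps mentioned above (exploratory component analysis plus an internal-consistency check), the sketch below runs PCA and computes Cronbach's alpha on hypothetical 0/1 item responses; it is not the study's analysis pipeline.

```python
# Exploratory PCA and Cronbach's alpha on toy item-response data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(100, 10)).astype(float)   # 100 respondents x 10 items (toy)

pca = PCA().fit(items)
print("variance explained by first two components:",
      np.round(pca.explained_variance_ratio_[:2], 2))

def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```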
Global Contrast Based Salient Region Detection.
Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min
2015-03-01
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
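A rough sketch of the saliency-initialized segmentation idea is shown below; it uses a crude global-contrast saliency map (per-pixel color distance to the image mean) to seed a GrabCut mask, refined by OpenCV's iterative grabCut. This is not the paper's regional-contrast or SaliencyCut implementation, and the image path and threshold are placeholders.

```python
# Saliency-seeded GrabCut segmentation (illustrative only).
import cv2
import numpy as np

img = cv2.imread("example.jpg")                              # placeholder path
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
saliency = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
saliency /= saliency.max()

mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)       # probable background everywhere
mask[saliency > 0.6] = cv2.GC_PR_FGD                         # salient pixels: probable foreground

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
segmentation = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("salient_object_mask.png", segmentation)
```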