NASA Astrophysics Data System (ADS)
Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo
2016-03-01
In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone regions, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of sequential registrations and LWV.
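As a rough illustration of the locally-weighted voting step described above, the sketch below fuses already-registered atlas label maps by weighting each atlas vote with a local intensity-similarity term; the function name, patch size and Gaussian weighting are assumptions for illustration, not details taken from the paper.

```python
# Minimal locally-weighted voting (LWV) label fusion sketch, assuming the atlas
# images and label maps have already been registered to the target image.
import numpy as np
from scipy.ndimage import uniform_filter

def lwv_fuse(target, atlas_images, atlas_labels, patch=5, sigma=50.0):
    """Fuse registered atlas label maps by locally-weighted voting (hypothetical helper)."""
    labels = np.unique(np.concatenate([np.unique(l) for l in atlas_labels]))
    votes = np.zeros((len(labels),) + target.shape)
    for img, lab in zip(atlas_images, atlas_labels):
        # local mean squared intensity difference -> per-voxel atlas weight
        mse = uniform_filter((target.astype(float) - img) ** 2, size=patch)
        weight = np.exp(-mse / (2.0 * sigma ** 2))
        for k, value in enumerate(labels):
            votes[k] += weight * (lab == value)
    # each voxel takes the label with the largest accumulated weighted vote
    return labels[np.argmax(votes, axis=0)]
```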
Sequential cloning of chromosomes
Lacks, Sanford A.
1995-07-18
A method for sequential cloning of chromosomal DNA of a target organism is disclosed. A first DNA segment homologous to the chromosomal DNA to be sequentially cloned is isolated. The first segment has a first restriction enzyme site on either side. A first vector product is formed by ligating the homologous segment into a suitably designed vector. The first vector product is circularly integrated into the target organism's chromosomal DNA. The resulting integrated chromosomal DNA segment includes the homologous DNA segment at either end of the integrated vector segment. The integrated chromosomal DNA is cleaved with a second restriction enzyme and ligated to form a vector-containing plasmid, which is replicated in a host organism. The replicated plasmid is then cleaved with the first restriction enzyme. Next, a DNA segment containing the vector and a segment of DNA homologous to a distal portion of the previously isolated DNA segment is isolated. This segment is then ligated to form a plasmid which is replicated within a suitable host. This plasmid is then circularly integrated into the target chromosomal DNA. The chromosomal DNA containing the circularly integrated vector is treated with a third, retrorestriction (class IIS) enzyme. The cleaved DNA is ligated to give a plasmid that is used to transform a host permissive for replication of its vector. The sequential cloning process continues by repeated cycles of circular integration and excision. The excision is carried out alternately with the second and third enzymes.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
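The limit described here is the one Amdahl's law formalizes; the short sketch below is not code from the CSTEM/METCAN project, and the sequential fractions and processor count are made-up example values, but it shows how quickly a sequential fraction caps the achievable speedup.

```python
# Amdahl's law: upper bound on speedup when a fraction of the work is sequential.
def amdahl_speedup(f_sequential, p):
    """Bound on speedup with sequential fraction f_sequential on p processors."""
    return 1.0 / (f_sequential + (1.0 - f_sequential) / p)

for f in (0.05, 0.20, 0.50):
    print(f"f={f:.2f}: speedup on 32 processors <= {amdahl_speedup(f, 32):.2f}")
# Even with 32 processors, a 50% sequential fraction caps speedup below 2x.
```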
Zhu, Jundong; Jiang, Fan; Li, Pu; Shao, Pengfei; Liang, Chao; Xu, Aiming; Miao, Chenkui; Qin, Chao; Wang, Zengjun; Yin, Changjun
2017-09-11
To explore the feasibility and safety of retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping for patients with multiple renal tumors who have a solitary kidney or contralateral kidney insufficiency. Nine patients who underwent retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping between October 2010 and January 2017 were retrospectively analyzed. Clinical data and intraoperative and postoperative parameters were summarized. Nineteen tumors were resected in the nine patients and all operations were successful. The operation time ranged from 100 to 180 min (125 min); clamping time of the segmental renal arteries was 10-30 min (23 min); intraoperative blood loss was 120-330 ml (190 ml); postoperative hospital stay was 3-6 days (5 days). There were no complications during the perioperative period, and postoperative pathology of the 19 tumors showed 13 renal clear cell carcinomas, two papillary carcinomas and four perivascular epithelioid cell tumors, all with negative margins. All patients were followed up for 3-60 months, and no local recurrence or metastasis was detected. At the 3-month post-operation follow-up, the mean serum creatinine was 148.6 ± 28.1 μmol/L (p = 0.107), an increase of 3.0 μmol/L from the preoperative baseline. For patients with multiple renal tumors and a solitary kidney or contralateral kidney insufficiency, retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping is feasible and safe; it minimizes warm ischemia injury to the kidney and effectively preserves renal function.
Sequential cloning of chromosomes
Lacks, S.A.
1995-07-18
A method for sequential cloning of chromosomal DNA of a target organism is disclosed. A first DNA segment homologous to the chromosomal DNA to be sequentially cloned is isolated. The first segment has a first restriction enzyme site on either side. A first vector product is formed by ligating the homologous segment into a suitably designed vector. The first vector product is circularly integrated into the target organism's chromosomal DNA. The resulting integrated chromosomal DNA segment includes the homologous DNA segment at either end of the integrated vector segment. The integrated chromosomal DNA is cleaved with a second restriction enzyme and ligated to form a vector-containing plasmid, which is replicated in a host organism. The replicated plasmid is then cleaved with the first restriction enzyme. Next, a DNA segment containing the vector and a segment of DNA homologous to a distal portion of the previously isolated DNA segment is isolated. This segment is then ligated to form a plasmid which is replicated within a suitable host. This plasmid is then circularly integrated into the target chromosomal DNA. The chromosomal DNA containing the circularly integrated vector is treated with a third, retrorestriction (class IIS) enzyme. The cleaved DNA is ligated to give a plasmid that is used to transform a host permissive for replication of its vector. The sequential cloning process continues by repeated cycles of circular integration and excision. The excision is carried out alternately with the second and third enzymes. 9 figs.
NASA Astrophysics Data System (ADS)
Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng
2015-07-01
Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an effective algorithm to segment aligned free induction decay sequential myocardium images based on morphological operations and the geodesic active contour (GAC). Nine patients with thalassemia major were recruited (10 male and 16 female) to undergo a thoracic MRI scan in the short axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our proposed automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefited from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrated that the proposed method is feasible for myocardium segmentation and clinically applicable to measuring myocardial iron loading.
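For readers unfamiliar with T2* mapping, the hedged sketch below fits a generic monoexponential decay per pixel to multi-echo magnitude images; the echo times, array shapes and fitting routine are illustrative assumptions rather than the authors' implementation.

```python
# Per-pixel T2* mapping sketch assuming a monoexponential decay
# S(TE) = S0 * exp(-TE / T2*); arrays and echo times are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def decay(te, s0, t2star):
    return s0 * np.exp(-te / t2star)

def t2star_map(images, echo_times):
    """images: array of shape (n_echoes, H, W); echo_times in ms."""
    n, h, w = images.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            sig = images[:, i, j].astype(float)
            try:
                popt, _ = curve_fit(decay, echo_times, sig,
                                    p0=(max(sig[0], 1.0), 20.0), maxfev=2000)
                out[i, j] = popt[1]
            except RuntimeError:
                out[i, j] = np.nan  # fit did not converge
    return out
```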
Radio Frequency Ablation Registration, Segmentation, and Fusion Tool
McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.
2008-01-01
The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
NASA Astrophysics Data System (ADS)
Wang, Yu; Zhao, Yan-Jiao; Huang, Ji-Ping
2012-07-01
The detection of macromolecular conformation is particularly important in many physical and biological applications. Here we theoretically explore a method for achieving this detection by probing the electricity of sequential charged segments of macromolecules. Our analysis is based on molecular dynamics simulations, and we investigate a single file of water molecules confined in a half-capped single-walled carbon nanotube (SWCNT) with an external electric charge of +e or -e (e is the elementary charge). The charge is located in the vicinity of the cap of the SWCNT and along the centerline of the SWCNT. We reveal the picosecond timescale for the re-orientation (namely, from one direction to the other) of the water molecules in response to a switch in the charge signal, -e → +e or +e → -e. Our results are well understood by taking into account the electrical interactions between the water molecules and between the water molecules and the external charge. Because such signals of re-orientation can be magnified and transported according to Tu et al. [2009 Proc. Natl. Acad. Sci. USA 106 18120], it becomes possible to record fingerprints of electric signals arising from sequential charged segments of a macromolecule, which are expected to be useful for recognizing the conformations of some particular macromolecules.
ERIC Educational Resources Information Center
Mori, Junko
2004-01-01
Using the methodological framework of conversation analysis (CA) as a central tool for analysis, this study examines a peer interactive task that occurred in a Japanese as a foreign language classroom. During the short segment of interaction, the students shifted back and forth between the development of an assigned task and the management of…
Hemodynamic analysis of sequential graft from right coronary system to left coronary system.
Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun
2016-12-28
Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the possibility of anastomosing the right coronary artery system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary arteries and grafts were obtained. Hemodynamic parameters were also evaluated, including wall shear stress (WSS) and oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much smaller than that in Model 2. Model 1 shows optimistic hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.
Jacob, Soosan; Agarwal, Amar; Mazzotta, Cosimo; Agarwal, Athiya; Raj, John Michael
2017-04-01
Small-incision lenticule extraction may be associated with complications such as partial lenticular dissection, torn lenticule, lenticular adherence to cap, torn cap, and sub-cap epithelial ingrowth, some of which are more likely to occur during low-myopia corrections. We describe sequential segmental terminal lenticular side-cut dissection to facilitate minimally traumatic and smooth lenticular extraction. Anterior lamellar dissection is followed by central posterior lamellar dissection, leaving a thin peripheral rim and avoiding the lenticular side cut. This is followed by sequential segmental dissection of the lenticular side cut in a manner that fixates the lenticule and provides sufficient resistance for smooth and complete dissection of the posterior lamellar cut without undesired movements of the lenticule. The technique is advantageous in thin lenticules, where the risk for complications is high, but can also be used in thick lenticular dissection using wider sweeps to separate the lenticular side cut sequentially.
Structural constraints in the packaging of bluetongue virus genomic segments
Burkhardt, Christiane; Sung, Po-Yu; Celma, Cristina C.
2014-01-01
The mechanism used by bluetongue virus (BTV) to ensure the sorting and packaging of its 10 genomic segments is still poorly understood. In this study, we investigated the packaging constraints for two BTV genomic segments from two different serotypes. Segment 4 (S4) of BTV serotype 9 was mutated sequentially and packaging of mutant ssRNAs was investigated by two newly developed RNA packaging assay systems, one in vivo and the other in vitro. Modelling of the mutated ssRNA followed by biochemical data analysis suggested that a conformational motif formed by interaction of the 5′ and 3′ ends of the molecule was necessary and sufficient for packaging. A similar structural signal was also identified in S8 of BTV serotype 1. Furthermore, the same conformational analysis of secondary structures for positive-sense ssRNAs was used to generate a chimeric segment that maintained the putative packaging motif but contained unrelated internal sequences. This chimeric segment was packaged successfully, confirming that the motif identified directs the correct packaging of the segment. PMID:24980574
Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming
2013-01-01
Objective To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using a prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with an ECG-gated spiral scan mode in a population with atrial fibrillation. Methods Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Neither the mean HR (t=1.91, P=0.243) nor the HR variation range (t=0.950, P=0.350) differed significantly between the two groups. In per-segment analysis, IQ of the sequential group vs. the spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader 1 and 197/245 (80%) vs. 174/214 (81%) by reader 2, and as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader 1 and 6/245 (2%) vs. 4/214 (2%) by reader 2. Overall averaged per-patient IQ was equally good in the sequential and spiral groups (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was significantly reduced compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion Compared with a retrospectively ECG-gated spiral scan, prospectively ECG-triggered sequential DSCT coronary angiography provides similarly diagnostically valuable images in patients with atrial fibrillation and significantly reduces radiation dose.
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
Sequential pattern formation governed by signaling gradients
NASA Astrophysics Data System (ADS)
Jörg, David J.; Oates, Andrew C.; Jülicher, Frank
2016-10-01
Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients, is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments and the dynamics of segmentation and the total number of segments formed depend strongly on kinetic parameters describing tissue elongation and signaling molecules. The model accounts for existing experimental perturbations to signaling gradients, and makes testable predictions about novel perturbations. The variety of different patterns formed in our model can account for the variability of segmentation between different animal species.
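A toy, clock-and-wavefront-style sketch of the idea (not the authors' model) is given below: a 1D lattice of coupled phase oscillators with a frequency profile that vanishes behind a moving arrest front, so that stripes of the oscillating variable freeze out sequentially. All parameters, and the periodic boundary handling, are illustrative simplifications.

```python
# Toy sequential stripe formation from coupled phase oscillators and a moving
# arrest front; frequencies, coupling and front speed are made-up values.
import numpy as np

def simulate(n=200, steps=3000, dt=0.01, k=0.5, omega_max=2 * np.pi, v=0.05):
    x = np.arange(n, dtype=float)
    phase = np.zeros(n)
    history = []
    for step in range(steps):
        front = v * step                               # arrest front position
        # frequency rises from 0 at the front over a gradient of length n/4
        omega = omega_max * np.clip((x - front) / (n / 4), 0.0, 1.0)
        coupling = k * (np.roll(phase, 1) + np.roll(phase, -1) - 2 * phase)
        phase += dt * (omega + coupling)
        history.append(np.cos(phase).copy())           # proxy for oscillating gene expression
    return np.array(history)                           # stripes freeze behind the front
```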
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
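A generic bootstrap particle filter skeleton, sketching the predict/weight/resample loop used for such sequential Bayesian estimation, is shown below; the state representation, transition model and likelihood are placeholder callables, not the vessel model of the paper.

```python
# Generic bootstrap particle filter (predict / weight / resample) skeleton.
import numpy as np

def particle_filter(n_particles, n_steps, init, transition, likelihood, observe):
    """All model callables are hypothetical placeholders supplied by the caller."""
    particles = init(n_particles)                      # (n_particles, state_dim)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for t in range(n_steps):
        particles = transition(particles)              # propagate each hypothesis
        weights *= likelihood(particles, observe(t))   # re-weight by fit to the data
        weights /= weights.sum()
        # systematic resampling keeps several hypotheses (e.g. branches) alive
        positions = (np.arange(n_particles) + np.random.rand()) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                         n_particles - 1)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
        estimates.append(particles.mean(axis=0))       # point estimate per step
    return np.array(estimates)
```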
2018-01-01
ABSTRACT Long-germ insects, such as the fruit fly Drosophila melanogaster, pattern their segments simultaneously, whereas short-germ insects, such as the beetle Tribolium castaneum, pattern their segments sequentially, from anterior to posterior. Although the two modes of segmentation at first appear quite distinct, much of this difference might simply reflect developmental heterochrony. We now show here that, in both Drosophila and Tribolium, segment patterning occurs within a common framework of sequential Caudal, Dichaete and Odd-paired expression. In Drosophila, these transcription factors are expressed like simple timers within the blastoderm, whereas in Tribolium they form wavefronts that sweep from anterior to posterior across the germband. In Drosophila, all three are known to regulate pair-rule gene expression and influence the temporal progression of segmentation. We propose that these regulatory roles are conserved in short-germ embryos, and that therefore the changing expression profiles of these genes across insects provide a mechanistic explanation for observed differences in the timing of segmentation. In support of this hypothesis, we demonstrate that Odd-paired is essential for segmentation in Tribolium, contrary to previous reports. PMID:29724758
Kinematics of the field hockey penalty corner push-in.
Kerr, Rebecca; Ness, Kevin
2006-01-01
The aims of the study were to determine those variables that significantly affect push-in execution and thereby formulate coaching recommendations specific to the push-in. Two 50 Hz video cameras recorded transverse and longitudinal views of push-in trials performed by eight experienced and nine inexperienced male push-in performers. Video footage was digitized for data analysis of ball speed, stance width, drag distance, drag time, drag speed, centre of mass displacement and segment and stick displacements and velocities. Experienced push-in performers demonstrated a significantly greater (p < 0.05) stance width, a significantly greater distance between the ball and the front foot at the start of the push-in and a significantly faster ball speed than inexperienced performers. In addition, the experienced performers showed a significant positive correlation between ball speed and playing experience and tended to adopt a combination of simultaneous and sequential segment rotation to achieve accuracy and fast ball speed. The study yielded the following coaching recommendations for enhanced push-in performance: maximize drag distance by maximizing front foot-ball distance at the start of the push-in; use a combination of simultaneous and sequential segment rotations to optimise both accuracy and ball speed and maximize drag speed.
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool and a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on a survey of the approaches for parallel implementation of sequential watershed algorithms on multicore general purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
Sequential segmental classification of feline congenital heart disease.
Scansen, Brian A; Schneider, Matthias; Bonagura, John D
2015-12-01
Feline congenital heart disease is less commonly encountered in veterinary medicine than acquired feline heart diseases such as cardiomyopathy. Understanding the wide spectrum of congenital cardiovascular disease demands a familiarity with a variety of lesions, occurring both in isolation and in combination, along with an appreciation of complex nomenclature and variable classification schemes. This review begins with an overview of congenital heart disease in the cat, including proposed etiologies and prevalence, examination approaches, and principles of therapy. Specific congenital defects are presented and organized by a sequential segmental classification with respect to their morphologic lesions. Highlights of diagnosis, treatment options, and prognosis are offered. It is hoped that this review will provide a framework for approaching congenital heart disease in the cat, and more broadly in other animal species based on the sequential segmental approach, which represents an adaptation of the common methodology used in children and adults with congenital heart disease.
An analog scrambler for speech based on sequential permutations in time and frequency
NASA Astrophysics Data System (ADS)
Cox, R. V.; Jayant, N. S.; McDermott, B. J.
Permutation of speech segments is an operation that is frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two dimensional permutation of time segments and frequency bands. By subjective testing it is shown that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent). The complexity of implementing such a system is considered and the issues of synchronization and channel equalization are addressed. The computer simulation results for the system using both real and simulated channels are examined.
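As a simplified illustration of time-segment permutation (a plain block permutation, not the sequential procedure or the combined time-frequency permutation studied in the paper), the sketch below scrambles fixed-length frames of a signal with a key-seeded permutation and inverts it at the receiver; frame length and key are made-up values.

```python
# Time-segment permutation scrambling sketch with a key-seeded permutation.
import numpy as np

def scramble(signal, frame_len, key):
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    perm = np.random.default_rng(key).permutation(n_frames)
    return frames[perm].ravel(), perm       # scrambled signal and permutation

def descramble(scrambled, frame_len, perm):
    frames = scrambled.reshape(len(perm), frame_len)
    return frames[np.argsort(perm)].ravel() # apply the inverse permutation
```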
Sequential segmental neuromuscular stimulation: an effective approach to enhance fatigue resistance.
Zonnevijlle, E D; Somia, N N; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H
2000-02-01
Electrical stimulation of skeletal muscle flaps is used clinically in applications that require contraction of muscle and force generation at the recipient site, for example, to assist a failing myocardium (cardiomyoplasty) or to reestablish urinary or fecal continence as a neo-sphincter (dynamic graciloplasty). A major problem in these applications (muscle fatigue) results from the nonphysiologic manner in which most of the fibers within the muscle are recruited in a single burst-like contraction. To circumvent this problem, current protocols call for the muscle to be put through a rigorous training regimen to transform it from a fatigue-prone to a fatigue-resistant state. This process takes several weeks during which, aside from becoming fatigue-resistant, the muscle loses power and contraction speed. This study tested the feasibility of electrically stimulating a muscle flap in a more physiologic way; namely, by stimulating different anatomical parts of the muscle sequentially rather than the entire muscle all at once. Sequential segmental neuromuscular stimulation (SSNS) allows parts of the muscle to rest while other parts are contracting. In a paired designed study in dogs (n = 7), the effects of SSNS on muscle fatigability and muscle blood perfusion in gracilis muscles were compared with conventional stimulation: SSNS on one side and whole muscle stimulation on the other. In SSNS, electrodes were implanted in the muscles in such a way that four separate segments of each muscle could be stimulated separately. Then, each segment was stimulated so that part of the muscle was always contracted while part was always resting. This type of stimulation permitted sequential yet continuous force generation. Muscles in both groups maintained an equal amount of continuous force. In SSNS muscles, separate segments were stimulated so that the duty cycle for any one segment was 25, 50, 75, or 100 percent, thus varying the amount of work and rest that any segment experienced at any one time. With duty cycles of 25, 50, and 75 percent, SSNS produced significantly (p < 0.01) enhanced resistance to fatigue. In addition, muscle perfusion was significantly (p < 0.01) increased in these sequentially stimulated muscles compared with the controls receiving whole muscle stimulation. It was concluded that SSNS reduces muscle fatigue and enhances muscle blood flow during stimulation. These findings suggest that using SSNS in clinical myoplasty procedures could obviate the need for prolonged training protocols and minimize problems associated with muscle training.
Three parameters optimizing closed-loop control in sequential segmental neuromuscular stimulation.
Zonnevijlle, E D; Somia, N N; Perez Abadia, G; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H
1999-05-01
In conventional dynamic myoplasties, the force generation is poorly controlled. This causes unnecessary fatigue of the transposed/transplanted electrically stimulated muscles and causes damage to the involved tissues. We introduced sequential segmental neuromuscular stimulation (SSNS) to reduce muscle fatigue by allowing part of the muscle to rest periodically while the other parts work. Despite this improvement, we hypothesize that fatigue could be further reduced in some applications of dynamic myoplasty if the muscles were made to contract according to need. The first necessary step is to gain appropriate control over the contractile activity of the dynamic myoplasty. Therefore, closed-loop control was tested on a sequentially stimulated neosphincter to strive for the best possible control over the amount of generated pressure. A selection of parameters was validated for optimizing control. We concluded that the frequency of corrections, the threshold for corrections, and the transition time are meaningful parameters in the controlling algorithm of the closed-loop control in a sequentially stimulated myoplasty.
Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh
2012-01-01
There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on the patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models were estimated to be 35.1 N/m(2) and 36.5 N/m(2), respectively. Compared to the individual bypass method, the sequential graft has shown a higher velocity at the proximal segment and lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. Simulated results combined with its surgical benefits including the requirement of shorter vein length and fewer anastomoses advocate the sequential method as a more favorable CABG method.
Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.
Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan
2017-01-01
Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.
Parry, Gareth; Malbut, Katie; Dark, John H; Bexton, Rodney S
1992-01-01
Objective—To investigate the response of the transplanted heart to different pacing modes and to synchronisation of the recipient and donor atria in terms of cardiac output at rest. Design—Doppler derived cardiac output measurements at three pacing rates (90/min, 110/min and 130/min) in five pacing modes: right ventricular pacing, donor atrial pacing, recipient-donor synchronous pacing, donor atrial-ventricular sequential pacing, and synchronous recipient-donor atrial-ventricular sequential pacing. Patients—11 healthy cardiac transplant recipients with three pairs of epicardial leads inserted at transplantation. Results—Donor atrial pacing (+11% overall) and donor atrial-ventricular sequential pacing (+8% overall) were significantly better than right ventricular pacing (p < 0·001) at all pacing rates. Synchronised pacing of recipient and donor atrial segments did not confer additional benefit in either atrial or atrial-ventricular sequential modes of pacing in terms of cardiac output at rest at these fixed rates. Conclusions—Atrial pacing or atrial-ventricular sequential pacing appear to be appropriate modes in cardiac transplant recipients. Synchronisation of recipient and donor atrial segments in this study produced no additional benefit. Chronotropic competence in these patients may, however, result in improved exercise capacity and deserves further investigation. PMID:1389737
Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.
Pehkonen, Petri; Wong, Garry; Törönen, Petri
2010-01-01
Segmentation aims to separate homogeneous areas from the sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation in locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requirement for user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most of the heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods like the Bayesian information criterion. Such methods are based on some simplification, which can limit their usage. In this paper, we propose a Bayesian model selection to choose the most proper result from heuristic segmentation. Our Bayesian model presents a simple prior for the segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method in yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
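For orientation, the sketch below segments a 1D sequence by greedy binary splitting scored with a BIC-style criterion, i.e. the kind of asymptotic model selection the paper argues can be limiting; the authors' own Bayesian criterion is not reproduced here, and the Gaussian segment model and parameter counts are illustrative assumptions.

```python
# Greedy heuristic segmentation of 1D data with BIC-style model selection.
import numpy as np

def gaussian_cost(x):
    """Negative log-likelihood (up to constants) of one Gaussian segment."""
    return 0.5 * len(x) * np.log(max(np.var(x), 1e-12))

def best_split(x):
    costs = [gaussian_cost(x[:i]) + gaussian_cost(x[i:]) for i in range(2, len(x) - 1)]
    i = int(np.argmin(costs)) + 2
    return i, costs[i - 2]

def segment(x, max_segments=10):
    """Greedy binary splitting; returns the (BIC, boundaries) with the best score."""
    boundaries, best = [0, len(x)], None
    for k in range(1, max_segments + 1):
        total = sum(gaussian_cost(x[a:b]) for a, b in zip(boundaries, boundaries[1:]))
        bic = 2.0 * total + 2.0 * k * np.log(len(x))       # 2 parameters per segment
        if best is None or bic < best[0]:
            best = (bic, list(boundaries))
        gains = []
        for a, b in zip(boundaries, boundaries[1:]):
            if b - a > 4:                                   # only split long-enough segments
                i, cost = best_split(x[a:b])
                gains.append((gaussian_cost(x[a:b]) - cost, a + i))
        if not gains:
            break
        boundaries = sorted(boundaries + [max(gains)[1]])   # add the most helpful boundary
    return best

# toy usage: three blocks with different means
bic, cuts = segment(np.concatenate([np.random.normal(m, 1.0, 60) for m in (0, 4, -3)]))
```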
S V, Mahesh Kumar; R, Gunasundari
2018-06-02
Eye disease is a major health problem among elderly people. Cataract and corneal arcus are the major abnormalities that exist in the anterior segment eye region of aged people. Hence, computer-aided diagnosis of anterior segment eye abnormalities will be helpful for mass screening and grading in ophthalmology. In this paper, we propose a multiclass computer-aided diagnosis (CAD) system using visible wavelength (VW) eye images to diagnose anterior segment eye abnormalities. In the proposed method, the input VW eye images are pre-processed for specular reflection removal and the iris circle region is segmented using a circular Hough Transform (CHT)-based approach. First-order statistical features and wavelet-based features are extracted from the segmented iris circle and used for classification. A Support Vector Machine (SVM) trained with the Sequential Minimal Optimization (SMO) algorithm was used for classification. In experiments, we used 228 VW eye images belonging to three different classes of anterior segment eye abnormalities. The proposed method achieved a predictive accuracy of 96.96% with 97% sensitivity and 99% specificity. The experimental results show that the proposed method has significant potential for use in clinical applications.
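A hedged sketch of the classification stage is given below, assuming the iris regions have already been segmented: first-order statistics plus wavelet-band energies are fed to an SVM (scikit-learn's SVC, whose libsvm backend uses an SMO-type solver). The feature choices, wavelet family and class labels are illustrative, not the paper's exact configuration.

```python
# Feature extraction + SVM classification sketch for segmented iris regions.
import numpy as np
import pywt                      # PyWavelets, for 2D wavelet decomposition
from sklearn.svm import SVC

def features(region):
    stats = [region.mean(), region.std(), region.min(), region.max()]
    coeffs = pywt.wavedec2(region, "db2", level=2)
    # energy of each detail band as a simple wavelet-based feature
    energies = [np.mean(np.square(c)) for band in coeffs[1:] for c in band]
    return np.array(stats + energies)

def train_and_predict(X_train, y_train, X_test):
    """X_*: lists of 2D iris-region arrays; y_train: class labels (e.g. normal/cataract/arcus)."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(np.stack([features(r) for r in X_train]), y_train)
    return clf.predict(np.stack([features(r) for r in X_test]))
```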
Transportable Maps Software. Volume I.
1982-07-01
being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the... flow is either simple sequential, simple conditional (the equivalent of 'if-then-else'), simple iteration ('DO-loop'), or the non-linear recursion... input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records
The Evolution of Gene Regulatory Networks that Define Arthropod Body Plans.
Auman, Tzach; Chipman, Ariel D
2017-09-01
Our understanding of the genetics of arthropod body plan development originally stems from work on Drosophila melanogaster from the late 1970s and onward. In Drosophila, there is a relatively detailed model for the network of gene interactions that proceeds in a sequential-hierarchical fashion to define the main features of the body plan. Over the years, we have gained a growing understanding of the networks involved in defining the body plan in an increasing number of arthropod species. It is now becoming possible to tease out the conserved aspects of these networks and to try to reconstruct their evolution. In this contribution, we focus on several key nodes of these networks, starting from early patterning in which the main axes are determined and the broad morphological domains of the embryo are defined, and on to a later stage wherein the growth zone network is active in the sequential addition of posterior segments. The pattern of conservation of networks is very patchy, with some key aspects being highly conserved in all arthropods and others being very labile. Many aspects of early axis patterning are highly conserved, as are some aspects of sequential segment generation. In contrast, regional patterning varies among different taxa, and some networks, such as the terminal patterning network, are only found in a limited range of taxa. The growth zone segmentation network is ancient and is probably plesiomorphic to all arthropods. In some insects, it has undergone significant modification to give rise to a more hardwired network that generates individual segments separately. In other insects and in most arthropods, the sequential segmentation network has undergone a significant amount of systems drift, wherein many of the genes have changed. However, it maintains a conserved underlying logic and function.
NASA Astrophysics Data System (ADS)
Gorpas, D.; Yova, D.
2009-07-01
One of the major challenges in biomedical imaging is the extraction of quantified information from the acquired images. Light and tissue interaction leads to the acquisition of images that present inconsistent intensity profiles, and thus the accurate identification of the regions of interest is a rather complicated process. On the other hand, the complex geometries and tangent objects that are very often present in the acquired images lead to either false detections or to the merging, shrinkage or expansion of the regions of interest. In this paper an algorithm based on alternating sequential filtering and watershed transformation is proposed for the segmentation of biomedical images. This algorithm has been tested on two applications, each one based on a different acquisition system, and the results illustrate its accuracy in segmenting the regions of interest.
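A minimal sketch of the general idea, alternating sequential filtering followed by a marker-based watershed using scikit-image, appears below; the structuring-element sizes and the regional-minima marker strategy are assumptions for illustration, not the authors' exact pipeline.

```python
# Alternating sequential filtering (ASF) + marker-based watershed sketch.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

def asf_watershed(image, max_radius=3):
    # ASF: openings and closings with structuring elements of increasing size
    smoothed = image.astype(float)
    for r in range(1, max_radius + 1):
        selem = morphology.disk(r)
        smoothed = morphology.closing(morphology.opening(smoothed, selem), selem)
    gradient = filters.sobel(smoothed)
    # markers from regional minima of the smoothed image
    markers, _ = ndi.label(morphology.local_minima(smoothed))
    return segmentation.watershed(gradient, markers)
```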
USDA-ARS?s Scientific Manuscript database
Effective Salmonella control in broilers is important from the standpoint of both consumer protection and industry viability. We investigated associations between Salmonella recovery from different sample types collected at sequential stages of one grow-out from the broiler flock and production env...
Continuum theory of gene expression waves during vertebrate segmentation.
Jörg, David J; Morelli, Luis G; Soroldoni, Daniele; Oates, Andrew C; Jülicher, Frank
2015-09-01
The segmentation of the vertebrate body plan during embryonic development is a rhythmic and sequential process governed by genetic oscillations. These genetic oscillations give rise to traveling waves of gene expression in the segmenting tissue. Here we present a minimal continuum theory of vertebrate segmentation that captures the key principles governing the dynamic patterns of gene expression including the effects of shortening of the oscillating tissue. We show that our theory can quantitatively account for the key features of segmentation observed in zebrafish, in particular the shape of the wave patterns, the period of segmentation and the segment length as a function of time.
Continuum theory of gene expression waves during vertebrate segmentation
Jörg, David J; Morelli, Luis G; Soroldoni, Daniele; Oates, Andrew C; Jülicher, Frank
2015-01-01
Abstract The segmentation of the vertebrate body plan during embryonic development is a rhythmic and sequential process governed by genetic oscillations. These genetic oscillations give rise to traveling waves of gene expression in the segmenting tissue. Here we present a minimal continuum theory of vertebrate segmentation that captures the key principles governing the dynamic patterns of gene expression including the effects of shortening of the oscillating tissue. We show that our theory can quantitatively account for the key features of segmentation observed in zebrafish, in particular the shape of the wave patterns, the period of segmentation and the segment length as a function of time. PMID:28725158
The open for business model of the bithorax complex in Drosophila.
Maeda, Robert K; Karch, François
2015-09-01
After nearly 30 years of effort, Ed Lewis published his 1978 landmark paper in which he described the analysis of a series of mutations that affect the identity of the segments that form along the anterior-posterior (AP) axis of the fly (Lewis 1978). The mutations behaved in a non-canonical fashion in complementation tests, forming what Ed Lewis called a "pseudo-allelic" series. Because of this, he never thought that the mutations represented segment-specific genes. As all of these mutations were grouped to a particular area of the Drosophila third chromosome, the locus became known of as the bithorax complex (BX-C). One of the key findings of Lewis' article was that it revealed for the first time, to a wide scientific audience, that there was a remarkable correlation between the order of the segment-specific mutations along the chromosome and the order of the segments they affected along the AP axis. In Ed Lewis' eyes, the mutants he discovered affected "segment-specific functions" that were sequentially activated along the chromosome as one moves from anterior to posterior along the body axis (the colinearity concept now cited in elementary biology textbooks). The nature of the "segment-specific functions" started to become clear when the BX-C was cloned through the pioneering chromosomal walk initiated in the mid 1980s by the Hogness and Bender laboratories (Bender et al. 1983a; Karch et al. 1985). Through this molecular biology effort, and along with genetic characterizations performed by Gines Morata's group in Madrid (Sanchez-Herrero et al. 1985) and Robert Whittle's in Sussex (Tiong et al. 1985), it soon became clear that the whole BX-C encoded only three protein-coding genes (Ubx, abd-A, and Abd-B). Later, immunostaining against the Ubx protein hinted that the segment-specific functions could, in fact, be cis-regulatory elements regulating the expression of the three protein-coding genes. In 1987, Peifer, Karch, and Bender proposed a comprehensive model of the functioning of the BX-C, in which the "segment-specific functions" appear as segment-specific enhancers regulating, Ubx, abd-A, or Abd-B (Peifer et al. 1987). Key to their model was that the segmental address of these enhancers was not an inherent ability of the enhancers themselves, but was determined by the chromosomal location in which they lay. In their view, the sequential activation of the segment-specific functions resulted from the sequential opening of chromatin domains along the chromosome as one moves from anterior to posterior. This model soon became known of as the open for business model. While the open for business model is quite easy to visualize at a conceptual level, molecular evidence to validate this model has been missing for almost 30 years. The recent publication describing the outstanding, joint effort from the Bender and Kingston laboratories now provides the missing proof to support this model (Bowman et al. 2014). The purpose of this article is to review the open for business model and take the reader through the genetic arguments that led to its elaboration.
A damped oscillator imposes temporal order on posterior gap gene expression in Drosophila.
Verd, Berta; Clark, Erik; Wotton, Karl R; Janssens, Hilde; Jiménez-Guri, Eva; Crombach, Anton; Jaeger, Johannes
2018-02-01
Insects determine their body segments in two different ways. Short-germband insects, such as the flour beetle Tribolium castaneum, use a molecular clock to establish segments sequentially. In contrast, long-germband insects, such as the vinegar fly Drosophila melanogaster, determine all segments simultaneously through a hierarchical cascade of gene regulation. Gap genes constitute the first layer of the Drosophila segmentation gene hierarchy, downstream of maternal gradients such as that of Caudal (Cad). We use data-driven mathematical modelling and phase space analysis to show that shifting gap domains in the posterior half of the Drosophila embryo are an emergent property of a robust damped oscillator mechanism, suggesting that the regulatory dynamics underlying long- and short-germband segmentation are much more similar than previously thought. In Tribolium, Cad has been proposed to modulate the frequency of the segmentation oscillator. Surprisingly, our simulations and experiments show that the shift rate of posterior gap domains is independent of maternal Cad levels in Drosophila. Our results suggest a novel evolutionary scenario for the short- to long-germband transition and help explain why this transition occurred convergently multiple times during the radiation of the holometabolan insects.
A damped oscillator imposes temporal order on posterior gap gene expression in Drosophila
Verd, Berta; Clark, Erik; Wotton, Karl R.; Janssens, Hilde; Jiménez-Guri, Eva; Crombach, Anton
2018-01-01
Insects determine their body segments in two different ways. Short-germband insects, such as the flour beetle Tribolium castaneum, use a molecular clock to establish segments sequentially. In contrast, long-germband insects, such as the vinegar fly Drosophila melanogaster, determine all segments simultaneously through a hierarchical cascade of gene regulation. Gap genes constitute the first layer of the Drosophila segmentation gene hierarchy, downstream of maternal gradients such as that of Caudal (Cad). We use data-driven mathematical modelling and phase space analysis to show that shifting gap domains in the posterior half of the Drosophila embryo are an emergent property of a robust damped oscillator mechanism, suggesting that the regulatory dynamics underlying long- and short-germband segmentation are much more similar than previously thought. In Tribolium, Cad has been proposed to modulate the frequency of the segmentation oscillator. Surprisingly, our simulations and experiments show that the shift rate of posterior gap domains is independent of maternal Cad levels in Drosophila. Our results suggest a novel evolutionary scenario for the short- to long-germband transition and help explain why this transition occurred convergently multiple times during the radiation of the holometabolan insects. PMID:29451884
Clinical application of a light-pen computer system for quantitative angiography
NASA Technical Reports Server (NTRS)
Alderman, E. L.
1975-01-01
The paper describes an angiographic analysis system which uses a video disk for recording and playback, a light-pen for data input, minicomputer processing, and an electrostatic printer/plotter for hardcopy output. The method is applied to quantitative analysis of ventricular volumes, sequential ventriculography for assessment of physiologic and pharmacologic interventions, analysis of instantaneous time sequence of ventricular systolic and diastolic events, and quantitation of segmental abnormalities. The system is shown to provide the capability for computation of ventricular volumes and other measurements from operator-defined margins by greatly reducing the tedium and errors associated with manual planimetry.
Gaudrain, Etienne; Carlyon, Robert P
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
Gaudrain, Etienne; Carlyon, Robert P.
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
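The sketch below computes the core lag-1 quantities mentioned above (observed and expected transitional frequencies and Allison-Liker-style adjusted residuals) in Python rather than SAS or SPSS; it is a generic illustration, not the programs described in the paper, and the code stream used as input is toy data.

```python
# Lag-1 sequential analysis: transition counts, expected counts, adjusted residuals.
import numpy as np

def lag1_stats(codes):
    states = sorted(set(codes))
    idx = {s: i for i, s in enumerate(states)}
    k, n = len(states), len(codes) - 1
    obs = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):
        obs[idx[a], idx[b]] += 1                       # observed transitional frequencies
    row, col = obs.sum(1, keepdims=True), obs.sum(0, keepdims=True)
    expected = row * col / n                           # expected under independence
    p_row, p_col = row / n, col / n
    with np.errstate(divide="ignore", invalid="ignore"):
        adj = (obs - expected) / np.sqrt(expected * (1 - p_row) * (1 - p_col))
    return states, obs, expected, adj

states, obs, exp_, adj = lag1_stats(list("ABABBACABBA"))  # toy code stream
```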
Segmentation of remotely sensed data using parallel region growing
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Cox, S. C.
1983-01-01
The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
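A minimal sequential, pixel-based region-growing sketch is given below for contrast with the parallel implementations discussed above; the similarity test uses only the running region mean (variance could be added the same way), and the threshold and 4-connectivity are illustrative choices, not the paper's criterion.

```python
# Sequential pixel-based region growing from a seed with a running-mean test.
import numpy as np
from collections import deque

def grow_region(image, seed, threshold=10.0):
    h, w = image.shape
    labels = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    labels[seed] = True
    total, count = float(image[seed]), 1
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not labels[ny, nx]:
                if abs(image[ny, nx] - total / count) <= threshold:
                    labels[ny, nx] = True
                    total, count = total + image[ny, nx], count + 1
                    queue.append((ny, nx))
    return labels
```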
Nguyen, Nam-Trung; Huang, Xiaoyang
2006-06-01
Effective and fast mixing is important for many microfluidic applications. In many cases, mixing is limited by molecular diffusion due to the constraints of laminar flow in the microscale regime. According to scaling laws, decreasing the mixing path shortens the mixing time and enhances mixing quality. One of the techniques for reducing the mixing path is sequential segmentation. This technique divides solvent and solute into segments in the axial direction. The so-called Taylor-Aris dispersion can improve axial transport by three orders of magnitude. The mixing path can be controlled by the switching frequency and the mean velocity of the flow. The mixing ratio can be controlled by pulse width modulation of the switching signal. This paper first presents a simple time-dependent one-dimensional analytical model for sequential segmentation. The model considers an arbitrary mixing ratio between solute and solvent as well as axial Taylor-Aris dispersion. Next, a micromixer was designed and fabricated based on polymeric micromachining. The micromixer was formed by laminating four polymer layers. The layers were micromachined with a CO(2) laser. Switching of the fluid flows was realized by two piezoelectric valves. The mixing experiments were evaluated optically. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. Furthermore, mixing results at different switching frequencies were investigated. Due to the dynamic behavior of the valves and the fluidic system, mixing quality decreases with increasing switching frequency.
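For orientation (a textbook result, not taken from the paper's one-dimensional model), the classical Taylor-Aris expression for the effective axial dispersion coefficient in a circular capillary of radius $a$, mean velocity $U$ and molecular diffusivity $D$ is

$$
D_{\mathrm{eff}} = D\left(1 + \frac{\mathrm{Pe}^{2}}{48}\right), \qquad \mathrm{Pe} = \frac{aU}{D},
$$

which illustrates how shear-induced dispersion can raise effective axial transport far above molecular diffusion; the paper's model additionally accounts for the segment switching frequency and the solute/solvent mixing ratio.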
ERIC Educational Resources Information Center
Abla, Dilshat; Okanoya, Kazuo
2008-01-01
Word segmentation, that is, discovering the boundaries between words that are embedded in a continuous speech stream, is an important faculty for language learners; humans solve this task partly by calculating transitional probabilities between sounds. Behavioral and ERP studies suggest that detection of sequential probabilities (statistical…
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, the structural analysis model, and the material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large errors if the model cannot accurately represent the actual physics, and that these errors will propagate to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by adaptively integrating Bayesian calibration with model validation. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
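A minimal numerical sketch of the segment-wise idea (not the paper's Bayesian-network framework): the hypothetical model g, the scalar parameter grid, the noise level, and the RMSE validation gate are all assumptions for illustration. The observation domain is discretized into segments, each segment is validated against the current point estimate, and only segments that pass the gate contribute to a sequential grid-based Bayesian update.

```python
import numpy as np

def g(x, theta):                           # hypothetical computational model (assumption)
    return theta * np.sin(x) + 0.1 * x

rng = np.random.default_rng(0)
sigma = 0.2                                # assumed observation noise level
x_obs = np.linspace(0.0, 10.0, 200)
y_obs = g(x_obs, 2.0) + rng.normal(0.0, sigma, x_obs.size)   # synthetic "experiments"

theta_grid = np.linspace(0.0, 4.0, 401)                      # discretized parameter space
log_post = -0.5 * ((theta_grid - 2.5) / 1.0) ** 2            # Gaussian prior (mean 2.5, sd 1)

# discretize the observation domain into segments and process them sequentially
for lo, hi in zip(np.arange(0.0, 10.0, 2.0), np.arange(2.0, 12.0, 2.0)):
    sel = (x_obs >= lo) & (x_obs < hi)
    theta_hat = theta_grid[np.argmax(log_post)]              # current point estimate
    seg_rmse = np.sqrt(np.mean((y_obs[sel] - g(x_obs[sel], theta_hat)) ** 2))
    if seg_rmse > 3.0 * sigma:                               # crude validation gate:
        continue                                             # skip unreliable segments
    # Bayesian update with this segment's data (Gaussian likelihood on the grid)
    resid = y_obs[sel, None] - g(x_obs[sel, None], theta_grid[None, :])
    log_post = log_post - 0.5 * np.sum(resid ** 2, axis=0) / sigma ** 2
    log_post -= log_post.max()                               # numerical stabilization

post = np.exp(log_post)
post /= post.sum()                                           # normalize on the uniform grid
print("posterior mean of theta:", round(float((theta_grid * post).sum()), 3))
```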
Burnik Šturm, Martina; Pukazhenthi, Budhan; Reed, Dolores; Ganbaatar, Oyunsaikhan; Sušnik, Stane; Haymerle, Agnes; Voigt, Christian C; Kaczensky, Petra
2015-06-15
In recent years, segmental stable isotope analysis of hair has been a focus of research in animal dietary ecology and migration. To correctly assign tail hair segments to seasons or even Julian dates, information on tail hair growth rates is a key parameter, but is lacking for most species. We (a) reviewed the literature on tail hair growth rates in mammals; (b) made our own measurements on three captive equid species; (c) measured δ(2)H, δ(13)C and δ(15)N values in sequentially cut tail hairs of three sympatric, free-ranging equids from the Mongolian Gobi, using isotope ratio mass spectrometry (IRMS); and (d) collected environmental background data on seasonal variation by measuring δ(2)H values in precipitation by IRMS and by compiling pasture productivity measured by remote sensing via the normalized difference vegetation index (NDVI). Tail hair growth rates showed significant inter- and intra-specific variation, making temporal alignment problematic. In the Mongolian Gobi, high seasonal variation of δ(2)H values in precipitation results in winter lows and summer highs of δ(2)H values of available water sources. In water-dependent equids, this seasonality is reflected in the isotope signatures of sequentially cut tail hairs. In regions which are subject to strong seasonal patterns we suggest identifying key isotopes which show strong seasonal variation in the environment and can be expected to be reflected in the animal tissue. The known interval between the maxima and minima of these isotope values can then be used to correctly temporally align the segmental stable isotope signature for each individual animal. © 2015 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons Ltd.
Burnik Šturm, Martina; Pukazhenthi, Budhan; Reed, Dolores; Ganbaatar, Oyunsaikhan; Sušnik, Stane; Haymerle, Agnes; Voigt, Christian C; Kaczensky, Petra
2015-01-01
Rationale In recent years, segmental stable isotope analysis of hair has been a focus of research in animal dietary ecology and migration. To correctly assign tail hair segments to seasons or even Julian dates, information on tail hair growth rates is a key parameter, but is lacking for most species. Methods We (a) reviewed the literature on tail hair growth rates in mammals; (b) made our own measurements on three captive equid species; (c) measured δ2H, δ13C and δ15N values in sequentially cut tail hairs of three sympatric, free-ranging equids from the Mongolian Gobi, using isotope ratio mass spectrometry (IRMS); and (d) collected environmental background data on seasonal variation by measuring δ2H values in precipitation by IRMS and by compiling pasture productivity measured by remote sensing via the normalized difference vegetation index (NDVI). Results Tail hair growth rates showed significant inter- and intra-specific variation, making temporal alignment problematic. In the Mongolian Gobi, high seasonal variation of δ2H values in precipitation results in winter lows and summer highs of δ2H values of available water sources. In water-dependent equids, this seasonality is reflected in the isotope signatures of sequentially cut tail hairs. Conclusions In regions which are subject to strong seasonal patterns we suggest identifying key isotopes which show strong seasonal variation in the environment and can be expected to be reflected in the animal tissue. The known interval between the maxima and minima of these isotope values can then be used to correctly temporally align the segmental stable isotope signature for each individual animal. © 2015 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons Ltd. PMID:26044272
Zonnevijlle, E D; Somia, N N; Abadia, G P; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H
2000-09-01
Dynamic graciloplasty is used as a treatment modality for total urinary incontinence caused by a paralyzed sphincter. A problem with this application is undesirable fatigue of the muscle caused by continuous electrical stimulation. Therefore, the neosphincter must be trained via a rigorous regimen to transform it from a fatigue-prone state to a fatigue-resistant state. To avoid or shorten this training period, the application of sequential segmental neuromuscular stimulation (SSNS) was examined. This form of stimulation previously proved highly effective in acutely reducing fatigue caused by electrical stimulation. The contractile function and perfusion of gracilis muscles employed as neosphincters were compared between conventional single-channel continuous stimulation and multichannel sequential stimulation in 8 dogs. The sequentially stimulated neosphincter proved to have an endurance 2.9 times longer (as measured by half-time to fatigue) than with continuous stimulation, and better blood perfusion during stimulation (both significant changes, p < 0.05). Clinically, this will not eliminate the need to train the muscle, but SSNS could reduce the need for long and rigorous training protocols, making dynamic graciloplasty more attractive as a method of treating urinary or fecal incontinence.
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Summary Background Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in the digitized tissue microarray (TMA) is often the prerequisite for quantitative analysis. However, overlapping cells usually pose significant challenges for traditional segmentation algorithms. Objectives In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature and measured the pixel-wise agreement between human experts' annotations and the results generated by the automatic segmentation algorithm. Results The method was tested with 100 image patches which contain more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU. The parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is proven to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
A class of temporal boundaries derived by quantifying the sense of separation.
Paine, Llewyn Elise; Gilden, David L
2013-12-01
The perception of moment-to-moment environmental flux as being composed of meaningful events requires that memory processes coordinate with cues that signify beginnings and endings. We have constructed a technique that allows this coordination to be monitored indirectly. This technique works by embedding a sequential priming task into the event under study. Memory and perception must be coordinated to resolve temporal flux into scenes. The implicit memory processes inherent in sequential priming are able to effectively shadow then mirror scene-forming processes. Certain temporal boundaries are found to weaken the strength of irrelevant feature priming, a signal which can then be used in more ambiguous cases to infer how people segment time. Over the course of 13 independent studies, we were able to calibrate the technique and then use it to measure the strength of event segmentation in several instructive contexts that involved both visual and auditory modalities. The signal generated by sequential priming may permit the sense of separation between events to be measured as an extensive psychophysical quantity.
Mainela-Arnold, Elina; Evans, Julia L.
2014-01-01
This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included 40 children (ages 8;5–12;3), 20 children with SLI and 20 with typical development. Children completed Saffran’s statistical word segmentation task, a lexical-phonological access task (gating task), and a word definition task. Poor statistical learners were also poor at managing lexical-phonological competition during the gating task. However, statistical learning was not a significant predictor of semantic richness in word definitions. The ability to track statistical sequential regularities may be important for learning the inherently sequential structure of lexical-phonology, but not as important for learning lexical-semantic knowledge. Consistent with the procedural/declarative memory distinction, the brain networks associated with the two types of lexical learning are likely to have different learning properties. PMID:23425593
Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William
2016-02-01
Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
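To make the agreement metric concrete, a short Python sketch (illustrative, not the study's code) computes the Dice coefficient between binary tumor masks and the volume spread across several segmentations of the same scan; the toy box-shaped masks stand in for the outputs of the four segmentation approaches.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks of identical shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def box(lo, hi, shape=(50, 50, 50)):
    m = np.zeros(shape, dtype=bool)
    m[lo:hi, lo:hi, lo:hi] = True
    return m

ref = box(20, 35)                                             # reference standard
segs = [box(20, 35), box(19, 35), box(20, 36), box(21, 34)]   # stand-ins for 4 algorithms

dices = [dice(ref, s) for s in segs]
vols = np.array([s.sum() for s in segs], dtype=float)
print("mean Dice vs reference:", round(float(np.mean(dices)), 3))
print("volume spread (max-min) as % of mean volume:",
      round(100 * (vols.max() - vols.min()) / vols.mean(), 1))
```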
Estevan, Isaac; Falco, Coral; Silvernail, Julia Freedman; Jandacka, Daniel
2015-01-01
In taekwondo, there is a lack of consensus about how the kick sequence occurs. The aim of this study was to analyse the peak velocity (resultant and value in each plane) of lower limb segments (thigh, shank and foot), and the time to reach this peak velocity in the kicking lower limb during the execution of the roundhouse kick technique. Ten experienced taekwondo athletes (five males and five females; mean age of 25.3 ±5.1 years; mean experience of 12.9 ±5.3 years) participated voluntarily in this study performing consecutive kicking trials to a target located at their sternum height. Measurements for the kinematic analysis were performed using two 3D force plates and an eight camera motion capture system. The results showed that the proximal segment reached a lower peak velocity (resultant and in each plane) than distal segments (except the peak velocity in the frontal plane where the thigh and shank presented similar values), with the distal segment taking the longest to reach this peak velocity (p < 0.01). Also, at the instant every segment reached the peak velocity, the velocity of the distal segment was higher than the proximal one (p < 0.01). It provides evidence about the sequential movement of the kicking lower limb segments. In conclusion, during the roundhouse kick in taekwondo inter-segment motion seems to be based on a proximo-distal pattern. PMID:26557189
Analysis on the use of Multi-Sequence MRI Series for Segmentation of Abdominal Organs
NASA Astrophysics Data System (ADS)
Selver, M. A.; Selvi, E.; Kavur, E.; Dicle, O.
2015-01-01
Segmentation of abdominal organs from MRI data sets is a challenging task due to various limitations and artefacts. During routine clinical practice, radiologists use multiple MR sequences in order to analyze different anatomical properties. These sequences have different characteristics in terms of acquisition parameters (such as contrast mechanisms and pulse sequence designs) and image properties (such as pixel spacing, slice thickness and dynamic range). For a complete understanding of the data, computational techniques should combine the information coming from these various MRI sequences. These sequences are not acquired in parallel but in a sequential manner (one after another). Therefore, patient movements and respiratory motion change the position and shape of the abdominal organs. In this study, the amount of these effects is measured using three different symmetric surface distance metrics applied to three-dimensional data acquired from various MRI sequences. The results are compared with intra- and inter-observer differences, and the use of multiple MRI sequences for segmentation and the need for registration are discussed.
Faster embryonic segmentation through elevated Delta-Notch signalling
Liao, Bo-Kai; Jörg, David J.; Oates, Andrew C.
2016-01-01
An important step in understanding biological rhythms is the control of period. A multicellular, rhythmic patterning system termed the segmentation clock is thought to govern the sequential production of the vertebrate embryo's body segments, the somites. Several genetic loss-of-function conditions, including the Delta-Notch intercellular signalling mutants, result in slower segmentation. Here, we generate DeltaD transgenic zebrafish lines with a range of copy numbers and correspondingly increased signalling levels, and observe faster segmentation. The highest-expressing line shows an altered oscillating gene expression wave pattern and shortened segmentation period, producing embryos with more, shorter body segments. Our results reveal surprising differences in how Notch signalling strength is quantitatively interpreted in different organ systems, and suggest a role for intercellular communication in regulating the output period of the segmentation clock by altering its spatial pattern. PMID:27302627
Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.
Sharma, Shefali; Calixte, Rose; Finamore, Peter S
2016-01-01
To determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. A retrospective cohort study to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy (Canadian Task Force II-2). A university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from a patient database that has been maintained since the inception of the robotics program at Winthrop University Hospital and electronic medical records. Based on additional procedures performed, 4 groups of patients were created (A-D). Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve that revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. When the data for operative time was fit to an inverse curve, a learning rate of 11 cases was appreciated. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which is reflective of anesthesia and nursing staff, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
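A toy sketch of the feedback idea (not the authors' framework): a single segmentation parameter, here a global threshold, is adapted iteratively until a feedback measure computed from the result, here the number of detected objects versus an expected count, is satisfied. The expected count, the step rule, and the synthetic image are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def segment(image, threshold, min_size=5):
    """Threshold, label connected components, and drop tiny regions."""
    labels, n = ndimage.label(image > threshold)
    sizes = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    keep = 1 + np.flatnonzero(np.asarray(sizes) >= min_size)
    return np.isin(labels, keep), len(keep)

def adapt_threshold(image, expected_count, t0=0.5, step=0.05, max_iter=50):
    """Feedback loop: raise/lower the threshold until the object count matches."""
    t = t0
    for _ in range(max_iter):
        mask, n = segment(image, t)
        if n == expected_count:
            return t, mask
        t += step if n > expected_count else -step   # too many objects -> raise threshold
    return t, mask

# synthetic image with 3 bright blobs plus noise
rng = np.random.default_rng(0)
img = rng.normal(0, 0.1, (100, 100))
for r, c in [(20, 20), (50, 70), (80, 30)]:
    img[r-3:r+3, c-3:c+3] += 1.0
t_final, mask = adapt_threshold(img, expected_count=3)
print("adapted threshold:", round(t_final, 2))
```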
Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.
2014-01-01
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
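The sampling and accuracy ideas above can be illustrated with a small sketch under stated assumptions (a synthetic temperature map, a probabilistic variable-density mask, and a naive zero-filled reconstruction standing in for the temporally constrained reconstruction): the mask keeps the k-space center fully sampled and thins out toward high spatial frequencies, and the RMSE against the fully sampled map is reported.

```python
import numpy as np

def variable_density_mask(ny, nx, center_frac=0.1, outer_prob=0.25, seed=0):
    """Keep a fully sampled central block; sample outer k-space with probability
    decreasing with distance from the center (a simple 2D variable-density scheme)."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    r = np.sqrt(ky**2 + kx**2)
    prob = np.clip(1.0 - r, outer_prob, 1.0)                 # denser near the center
    mask = rng.random((ny, nx)) < prob
    cy, cx = ny // 2, nx // 2
    hy, hx = int(center_frac * ny / 2), int(center_frac * nx / 2)
    mask[cy - hy:cy + hy + 1, cx - hx:cx + hx + 1] = True
    return mask

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# toy "temperature" map with a hot focal spot
y, x = np.mgrid[-1:1:128j, -1:1:128j]
temp = 10.0 * np.exp(-(x**2 + y**2) / 0.02)

mask = variable_density_mask(128, 128)
k = np.fft.fftshift(np.fft.fft2(temp))
recon = np.real(np.fft.ifft2(np.fft.ifftshift(k * mask)))    # naive zero-filled reconstruction
print("sampling fraction:", round(float(mask.mean()), 2),
      " RMSE (a.u.):", round(rmse(temp, recon), 3))
```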
Programing Procedures Manual (PPM).
1981-12-15
terms ’reel’, ’unit’, and ’volume’ are synonymous and completely interchangeable in the CLOSE statement. Treatment of sequential mass storage files is...logically equivalent to the treatment of a file on tape or analogous sequential media. * For the purpose of showing the effect of various types of CLOSE...Overlay Area CA6 Address of Abend Relative to beginning of overlay segment The programmer can now refer to the compile source listing for the overlay
GPU accelerated fuzzy connected image segmentation by using CUDA.
Zhuge, Ying; Cao, Yong; Miller, Robert W
2009-01-01
Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem with these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential implementation of the fuzzy connected image segmentation algorithm on the CPU.
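For orientation, a compact sequential (CPU) Python sketch of fuzzy connectedness on a 2D image, under simplifying assumptions: the affinity between 4-adjacent pixels is a Gaussian of their intensity difference, and the connectedness of each pixel to the seed is the strength of the best min-affinity path, computed with a Dijkstra-style max-min propagation. The CUDA parallelization described above is not shown.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0):
    """Connectedness map: strength of the best path (max over paths of the minimum
    pairwise affinity along the path) from `seed` to every pixel."""
    h, w = image.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                                # max-heap via negated strengths
    while heap:
        neg_s, (r, c) = heapq.heappop(heap)
        s = -neg_s
        if s < conn[r, c]:                               # stale heap entry
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                diff = float(image[r, c]) - float(image[rr, cc])
                affinity = np.exp(-diff * diff / (2.0 * sigma * sigma))
                cand = min(s, affinity)                  # path strength = weakest link
                if cand > conn[rr, cc]:
                    conn[rr, cc] = cand
                    heapq.heappush(heap, (-cand, (rr, cc)))
    return conn

img = np.zeros((60, 60)); img[10:30, 10:30] = 100.0      # bright object on dark background
conn = fuzzy_connectedness(img, seed=(20, 20), sigma=10.0)
segmented = conn > 0.5                                   # threshold the connectedness scene
print(segmented.sum())                                   # -> the 400 bright-object pixels
```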
Sequential addition of short DNA oligos in DNA-polymerase-based synthesis reactions
Gardner, Shea N; Mariella, Jr., Raymond P; Christian, Allen T; Young, Jennifer A; Clague, David S
2013-06-25
A method of preselecting a multiplicity of DNA sequence segments that will comprise the DNA molecule of user-defined sequence, separating the DNA sequence segments temporally, and combining the multiplicity of DNA sequence segments with at least one polymerase enzyme wherein the multiplicity of DNA sequence segments join to produce the DNA molecule of user-defined sequence. Sequence segments may be of length n, where n is an odd integer. In one embodiment the length of desired hybridizing overlap is specified by the user and the sequences and the protocol for combining them are guided by computational (bioinformatics) predictions. In one embodiment sequence segments are combined from multiple reading frames to span the same region of a sequence, so that multiple desired hybridizations may occur with different overlap lengths.
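As a simplified illustration only (not the patented protocol), the segment-selection step can be pictured as slicing a target sequence into odd-length oligos whose neighbours share a user-defined hybridizing overlap; melting temperature, reading frames, and the bioinformatic predictions mentioned above are ignored, and the example sequence is arbitrary.

```python
def split_into_oligos(target, n=21, overlap=7):
    """Split a target sequence into segments of odd length n whose neighbours
    share `overlap` bases for hybridization (simplified sketch)."""
    assert n % 2 == 1, "segment length is assumed odd, as described above"
    step = n - overlap
    return [target[i:i + n] for i in range(0, max(1, len(target) - overlap), step)]

target = "ATGGCTAGCTAGGATCCTTACGGCTTAAGGCCTTAGGATCGATCGGATCCAGT"
for oligo in split_into_oligos(target, n=21, overlap=7):
    print(oligo)
```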
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
Portnoy, Orith; Guranda, Larisa; Apter, Sara; Eiss, David; Amitai, Marianne Michal; Konen, Eli
2011-11-01
The purpose of this study was to compare opacification of the urinary collecting system and radiation dose associated with three-phase 64-MDCT urographic protocols and those associated with a split-bolus dual-phase protocol including furosemide. Images from 150 CT urographic examinations performed with three scanning protocols were retrospectively evaluated. Group A consisted of 50 sequentially registered patients who underwent a three-phase protocol with saline infusion. Group B consisted of 50 sequentially registered patients who underwent a reduced-radiation three-phase protocol with saline. Group C consisted of 50 sequentially registered patients who underwent a dual-phase split-bolus protocol that included a low-dose furosemide injection. Opacification of the urinary collecting system was evaluated with segmental binary scoring. Contrast artifacts were evaluated, and radiation doses were recorded. Results were compared by analysis of variance. A significant reduction in mean effective radiation dose was found between groups A and B (p < 0.001) and between groups B and C (p < 0.001), resulting in 65% reduction between groups A and C (p < 0.001). This reduction did not significantly affect opacification score in any of the 12 urinary segments (p = 0.079). In addition, dense contrast artifacts overlying the renal parenchyma observed with the three-phase protocols (groups A and B) were avoided with the dual-phase protocol (group C) (p < 0.001). A dual-phase protocol with furosemide injection is the preferable technique for CT urography. In comparison with commonly used three-phase protocols, the dual-phase protocol significantly reduces radiation exposure dose without reduction in image quality.
An eye movement based reading intervention in lexical and segmental readers with acquired dyslexia.
Ablinger, Irene; von Heyden, Kerstin; Vorstius, Christian; Halm, Katja; Huber, Walter; Radach, Ralph
2014-01-01
Due to their brain damage, aphasic patients with acquired dyslexia often rely to a greater extent on lexical or segmental reading procedures. Thus, therapy intervention is mostly targeted at the more impaired reading strategy. In the present work we introduce a novel therapy approach based on real-time measurement of patients' eye movements as they attempt to read words. More specifically, an eye-movement-contingent technique of stepwise letter de-masking was used to support sequential reading, whereas fixation-dependent initial masking of non-central letters stimulated a lexical (parallel) reading strategy. Four lexical and four segmental readers with acquired central dyslexia received our intensive reading intervention. All participants showed remarkable improvements as evident in reduced total reading time, a reduced number of fixations per word and improved reading accuracy. Both types of intervention led to item-specific training effects in all subjects. A generalisation to untrained items was only found in segmental readers after the lexical training. Eye movement analyses were also used to compare word processing before and after therapy, indicating that all patients, with one exception, maintained their preferred reading strategy. However, in several cases the balance between sequential and lexical processing became less extreme, indicating a more effective individual interplay of both word processing routes.
Missing observations in multiyear rotation sampling designs
NASA Technical Reports Server (NTRS)
Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)
1982-01-01
Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.
Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation
NASA Astrophysics Data System (ADS)
Tangaro, Sabina; Amoroso, Nicola; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Paolo, Inglese; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto
2015-01-01
Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and therefore can be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. However, this requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods usually take a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper, we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) sequential forward selection and (iii) sequential backward elimination; and (iv) an embedded method based on the random forest classifier. The methods were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. By using only 23 features for each voxel (sequential backward elimination) we obtained performances comparable to the state of the art with respect to the standard tool FreeSurfer.
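A condensed sketch of sequential backward elimination with a random-forest scorer (illustrative only; the study's 315 voxel features, data, and settings are not reproduced): starting from all features, the feature whose removal hurts cross-validated accuracy the least is dropped, repeatedly, until the desired number remains.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def backward_elimination(X, y, n_keep, cv=3):
    """Greedy sequential backward elimination of feature columns."""
    selected = list(range(X.shape[1]))
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    while len(selected) > n_keep:
        scores = []
        for f in selected:                                    # try removing each feature
            trial = [g for g in selected if g != f]
            s = cross_val_score(clf, X[:, trial], y, cv=cv).mean()
            scores.append((s, f))
        _, worst_feature = max(scores)                        # removal that hurts the least
        selected.remove(worst_feature)
    return selected

X, y = make_classification(n_samples=300, n_features=15, n_informative=5, random_state=0)
print("kept features:", backward_elimination(X, y, n_keep=5))
```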
Re-animation of muscle flaps for improved function in dynamic myoplasty.
Stremel, R W; Zonnevijlle, E D
2001-01-01
The authors report on a series of experiments designed to produce a skeletal muscle contraction functional for dynamic myoplasties. Conventional stimulation techniques recruit all or most of the muscle fibers simultaneously and with maximal strength. This approach has limitations in free dynamic muscle flap transfers that require the muscle to contract immediately after transfer and before re-innervation. Sequential stimulation of segments of the transferred muscle provides a means of producing non-fatiguing contractions of the muscle in the presence or absence of innervation. The muscles studied were the canine gracilis, and all experiments were acute studies in anesthetized animals. Comparison of conventional and sequential segmental neuromuscular stimulation revealed an increase in muscle fatigue resistance and muscle blood flow with the new approach. This approach offers the opportunity for development of physiologically animated tissue and broadening the abilities of reconstructive surgeons in the repair of functional defects. Copyright 2001 Wiley-Liss, Inc.
Spine Patterning Is Guided by Segmentation of the Notochord Sheath.
Wopat, Susan; Bagwell, Jennifer; Sumigray, Kaelyn D; Dickson, Amy L; Huitema, Leonie F A; Poss, Kenneth D; Schulte-Merker, Stefan; Bagnat, Michel
2018-02-20
The spine is a segmented axial structure made of alternating vertebral bodies (centra) and intervertebral discs (IVDs) assembled around the notochord. Here, we show that, prior to centra formation, the outer epithelial cell layer of the zebrafish notochord, the sheath, segments into alternating domains corresponding to the prospective centra and IVD areas. This process occurs sequentially in an anteroposterior direction via the activation of Notch signaling in alternating segments of the sheath, which transition from cartilaginous to mineralizing domains. Subsequently, osteoblasts are recruited to the mineralized domains of the notochord sheath to form mature centra. Tissue-specific manipulation of Notch signaling in sheath cells produces notochord segmentation defects that are mirrored in the spine. Together, our findings demonstrate that notochord sheath segmentation provides a template for vertebral patterning in the zebrafish spine. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Update on Bayesian Blocks: Segmented Models for Sequential Data
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2017-01-01
The Bayesian Blocks algorithm, in wide use in astronomy and other areas, has been improved in several ways. The model for block shape has been generalized to include models other than a constant signal rate, e.g., linear, exponential, or other parametric models. In addition, the computational efficiency has been improved, so that the basic algorithm is O(N) in most cases instead of O(N^2). Other improvements in the theory and application of segmented representations will be described.
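For context, a compact Python sketch of the basic O(N^2) dynamic program for the piecewise-constant "events" model (the earlier published form; the linear/exponential block shapes and O(N) speed-ups mentioned above are not included). The prior penalty ncp_prior is treated here as a user-supplied constant, and event times are assumed distinct.

```python
import numpy as np

def bayesian_blocks_events(t, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of event times t (assumed distinct)."""
    t = np.sort(np.asarray(t, dtype=float))
    n = t.size
    # candidate change points: midpoints between events, plus the data boundaries
    edges = np.concatenate(([t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]))
    best = np.zeros(n)
    last = np.zeros(n, dtype=int)
    for r in range(n):
        width = edges[r + 1] - edges[:r + 1]          # widths of candidate last blocks
        count = np.arange(r + 1, 0, -1.0)             # event counts in those last blocks
        fit = count * (np.log(count) - np.log(width)) - ncp_prior
        fit[1:] += best[:r]                           # add best fitness of what precedes
        last[r] = int(np.argmax(fit))
        best[r] = fit[last[r]]
    # backtrack the optimal change points
    cps, ind = [], n
    while ind > 0:
        cps.append(ind)
        ind = last[ind - 1]
    cps.append(0)
    return edges[np.array(sorted(cps))]

rng = np.random.default_rng(0)
t = np.concatenate([rng.uniform(0, 5, 50), rng.uniform(5, 6, 80)])   # rate change at t = 5
print(np.round(bayesian_blocks_events(t), 2))
```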
PYROTRON WITH TRANSLATIONAL CLOSURE FIELDS
Hartwig, E.C.; Cummings, D.B.; Post, R.F.
1962-01-01
Circuit means is described for effecting inward translatory motion of the intensified terminal reflector field regions of a magnetic mirror plasma containment field with a simultaneous intensification of the over-all field configuration. The circuit includes a segmented magnetic field generating solenoid and sequentially actuated switch means to consecutively short-circuit the solenoid segments and place charged capacitor banks in shunt with the segments in an appropriate correlated sequence such that electrical energy is transferred inwardly between adjacent segments from the opposite ends of the solenoid. The resulting magnetic field is effective in both radially and axially adiabatically compressing a plasma in a reaction chamber disposed concentrically within the solenoid. In addition, one half of the circuit may be employed to unidirectionally accelerate plasma. (AEC)
Timing Embryo Segmentation: Dynamics and Regulatory Mechanisms of the Vertebrate Segmentation Clock
Resende, Tatiana P.; Andrade, Raquel P.; Palmeirim, Isabel
2014-01-01
All vertebrate species present a segmented body, easily observed in the vertebrate column and its associated components, which provides a high degree of motility to the adult body and efficient protection of the internal organs. The sequential formation of the segmented precursors of the vertebral column during embryonic development, the somites, is governed by an oscillating genetic network, the somitogenesis molecular clock. Herein, we provide an overview of the molecular clock operating during somite formation and its underlying molecular regulatory mechanisms. Human congenital vertebral malformations have been associated with perturbations in these oscillatory mechanisms. Thus, a better comprehension of the molecular mechanisms regulating somite formation is required in order to fully understand the origin of human skeletal malformations. PMID:24895605
Research into automatic recognition of joints in human symmetrical movements
NASA Astrophysics Data System (ADS)
Fan, Yifang; Li, Zhiyu
2008-03-01
High-speed photography is a major means of collecting data on human body movement. It enables the automatic identification of joints, which is of great significance for the research, treatment and rehabilitation of injuries, for the analysis and diagnosis of sport techniques, and for ergonomics. Based on the fact that the distance between adjacent joints remains constant as the corresponding body segments rotate, and on the laws of human joint movement (such as the constraints of articular anatomy and kinematic features), a new approach is introduced to threshold the joint images filmed by the high-speed camera, to automatically identify the joints, and to automatically trace the joint points (by labeling markers at the joints). Based on the closure of the marker points, automatic identification can be achieved through thresholding. Given the frame rate and the laws of human segment movement, once the marker points have been initialized, they can be tracked automatically through the successive images. The test results, the data from a three-dimensional force platform, and the kinematic characteristic that a body segment rotates only about its adjacent joint in the absence of external binding forces together indicate, after kinematic analysis, that the approach is valid.
Amorim, C M P G; Albert-García, J R; Montenegro, M C B S; Araújo, A N; Calatayud, J Martínez
2007-01-17
The present paper deals with the chemiluminescent determination of the herbicide Karbutilate on the basis of its prior photodegradation using a low-pressure Hg lamp as the UV source in a continuous-flow multicommutation assembly (a set of solenoid valves). The pesticide solution was segmented by a solenoid valve and sequentially alternated with segments of 0.001 mol l(-1) NaOH solution, the suitable medium for the formation of photo-fragments; it then passed through the photo-reactor and was led to the flow-cell after being divided into small segments which were sequentially alternated with the oxidizing system, 2 x 10(-5) mol l(-1) potassium permanganate in 0.2% pyrophosphoric acid. The studied calibration range, from 0.1 microg l(-1) to 65 mg l(-1), showed linear behaviour over the range 20 microg l(-1)-20 mg l(-1), fitting the linear equation I=(1180+/-30)C+(15+/-5) with a correlation coefficient of 0.9998. The limit of detection was 10 microg l(-1) and the sample throughput 17 h(-1). After testing the influence of a large series of potentially interfering species, the method was applied to water and human urine samples.
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and its reconstruction from the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
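A software analogue of the pipeline (the FPGA/hardware stages are of course not reproduced): a PCA linear subspace is fit to background frames via SVD, each new frame is projected onto it, and the reconstruction residual is dynamically thresholded to flag moving pixels. Frame sizes, the number of components, and the threshold rule are illustrative assumptions.

```python
import numpy as np

def fit_pca_background(frames, n_components=8):
    """frames: (n_frames, h, w) stack of background images -> mean + principal axes."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]                       # top principal directions

def motion_mask(frame, mean, components, k=4.0):
    """Project a frame onto the background subspace and threshold the residual."""
    x = frame.reshape(-1).astype(float) - mean
    recon = components.T @ (components @ x)              # reconstruction in the subspace
    resid = np.abs(x - recon)
    thresh = resid.mean() + k * resid.std()              # simple dynamic threshold
    return (resid > thresh).reshape(frame.shape)

rng = np.random.default_rng(0)
bg = rng.normal(100, 2, (30, 48, 64))                    # 30 noisy background frames
mean, comps = fit_pca_background(bg)
test = bg[0].copy(); test[20:30, 30:40] += 60            # inject a moving object
print(motion_mask(test, mean, comps).sum())              # count of flagged pixels
```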
Automated podosome identification and characterization in fluorescence microscopy images.
Meddens, Marjolein B M; Rieger, Bernd; Figdor, Carl G; Cambi, Alessandra; van den Dries, Koen
2013-02-01
Podosomes are cellular adhesion structures involved in matrix degradation and invasion that comprise an actin core and a ring of cytoskeletal adaptor proteins. They are most often identified by staining with phalloidin, which binds F-actin and therefore visualizes the core. However, not only podosomes but also many other cytoskeletal structures contain actin, which makes podosome segmentation by automated image processing difficult. Here, we have developed a quantitative image analysis algorithm that is optimized to identify podosome cores within a typical sample stained with phalloidin. By sequential local and global thresholding, our analysis identifies up to 76% of podosome cores while excluding other F-actin-based structures. Based on the overlap in podosome identifications and the quantification of podosome numbers, our algorithm performs on par with three experts. Using our algorithm we show effects of actin polymerization and myosin II inhibition on the actin intensity in both the podosome core and the associated actin network. Furthermore, by expanding the core segmentations, we reveal a previously unappreciated differential distribution of cytoskeletal adaptor proteins within the podosome ring. These applications illustrate that our algorithm is a valuable tool for rapid and accurate large-scale analysis of podosomes to increase our understanding of these characteristic adhesion structures.
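A simplified stand-in for the sequential local-plus-global thresholding step (not the published algorithm; all parameters and the synthetic frame are illustrative): a global Otsu threshold keeps bright F-actin, a local mean-based threshold keeps pixels that also stand out from their neighbourhood, and connected components above a minimum area are reported as candidate cores.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def detect_cores(image, local_size=15, local_offset=0.1, min_area=9):
    """Sequential global + local thresholding followed by connected-component labelling."""
    img = image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # normalize to [0, 1]
    global_mask = img > threshold_otsu(img)                     # keep bright F-actin
    local_mean = ndimage.uniform_filter(img, size=local_size)
    local_mask = img > local_mean + local_offset                # keep locally prominent pixels
    cores = global_mask & local_mask
    labels, n = ndimage.label(cores)
    sizes = ndimage.sum(cores, labels, index=np.arange(1, n + 1))
    keep = 1 + np.flatnonzero(np.asarray(sizes) >= min_area)
    return np.isin(labels, keep), len(keep)

# synthetic frame: dim actin background with a few bright punctate cores
rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, (128, 128))
for r, c in [(30, 30), (60, 90), (100, 50)]:
    img[r-2:r+3, c-2:c+3] += 0.8
mask, n_cores = detect_cores(img)
print("detected cores:", n_cores)
```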
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsui, O.; Takashima, T.; Kadoya, M.
Peripheral obstruction of intrahepatic portal vein branches was detected by dynamic sequential computed tomography during arterial portography and subsequently confirmed surgically in 9 patients with hepatic neoplasm (7 hepatocellular carcinomas, 1 cholangiocarcinoma, and 1 metastatic lymphadenopathy from gastric carcinoma). Eight of these 10 segments showed more dense staining than other regions of the liver during infusion hepatic angiography. This pattern was attributed to trans-sinusoidal or peripheral arterioportal shunting. In 5 cases, the segmental staining obscured the tumor stain, making the tumor appear larger than it actually was or causing it to be missed altogether.
Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M
2015-01-01
Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at the positions of the initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of the major parameters. This brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended to nuclei volume segmentation. The same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performance of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from one embryo reveals that the proposed method can improve nuclei detection accuracy by about 9% over the previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (about 98% mean F-measure) irrespective of large variations in filter parameters and noise levels. PMID:25020042
NASA Astrophysics Data System (ADS)
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey
2012-12-01
This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
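For the edge-enhancement step, a difference-of-Gaussians response is straightforward to compute; the sketch below (with arbitrary, assumed sigmas) shows the kind of band-pass map that could feed the contour delineation, without claiming the authors' parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_map(slice_2d, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians band-pass filter emphasising myocardial
    boundaries; sigma values are illustrative placeholders."""
    img = np.asarray(slice_2d, dtype=float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)
```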
[Ebstein's "like" anomaly ventricular double inlet. A rare association].
Muñoz Castellanos, Luis; Kuri Nivon, Magdalena
The association of univentricular heart with double inlet and Ebstein's "like" anomaly of the common atrioventricular valve is extremely rare. Two hearts with this association are described with the segmental sequential system, which determines the atrial situs, the types of atrioventricular and ventriculoarterial connections, and associated anomalies. Both hearts had atrial situs solitus and a univentricular heart with common atrioventricular valve, a foramen primum, and double outlet ventricle with normally crossed great arteries. In the first heart, the four leaflets of the atrioventricular valve were displaced and fused to the ventricular walls, from the atrioventricular union toward the apex, with atrialization of the inlet and trabecular zones, and there was stenosis in the infundibulum and in the pulmonary valve. In the second heart, the proximal segment of the atrioventricular valve was displaced and fused to the ventricular wall with short atrialization, and the distal segment was dysplastic with fibromyxoid nodules and short, thick tendinous cords; the pulmonary artery was dilated. Both hearts are grouped under univentricular atrioventricular connection in the segmental sequential system. The application of this method in the diagnosis of congenital heart disease demonstrates its usefulness. The associations of complex anomalies in these hearts show us the infinite spectrum of presentation of congenital heart disease, which expands our knowledge of pediatric cardiology. Copyright © 2016 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.
Albert-García, J R; Calatayud, J Martínez
2008-05-15
The present paper deals with an analytical strategy based on coupling photo-induced chemiluminescence with a multicommutation continuous-flow methodology for the determination of the herbicide benfuresate. Solenoid valves sequentially inserted small segments of the analyte solution alternated with segments of NaOH solution to adjust the medium for photodegradation. Both flow rates (sample and medium) were adjusted to the time required for photodegradation, 90 s; the resulting solution was then also sequentially inserted as segments alternated with segments of the oxidizing solution, hexacyanoferrate (III) in alkaline medium. The calibration range, from 1 microg L(-1) to 95 mg L(-1), resulted in linear behaviour over the range 1 microg L(-1) to 4 mg L(-1), fitting the linear equation I=4555.7x+284.2 with a correlation coefficient of 0.9999. The limit of detection was 0.1 microg L(-1) (n=5, 3 sigma criterion) and the sample throughput was 22 h(-1). The consumption of solutions was very small; per peak, 0.66 mL of sample, 0.16 mL of medium, and 0.32 mL of oxidant were consumed. Inter- and intra-day reproducibility resulted in R.S.D. values of 3.9% and 3.4%, respectively. After testing the influence of a large series of potential interferents, the method was applied to water samples obtained from different places, to human urine, and to one formulation.
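Within the reported linear range, converting a measured chemiluminescence intensity back to a benfuresate concentration is a direct inversion of the stated calibration line; the snippet below is only a convenience sketch and assumes the concentration unit implied by that calibration.

```python
def benfuresate_concentration(intensity, slope=4555.7, intercept=284.2):
    """Invert the reported calibration line I = 4555.7*x + 284.2.
    Valid only within the linear range given by the authors; the unit of x
    is assumed to match the calibration range used to fit the line."""
    return (intensity - intercept) / slope

# usage sketch: benfuresate_concentration(1200.0) returns the corresponding x
```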
Sequential addition of short DNA oligos in DNA-polymerase-based synthesis reactions
Gardner, Shea N [San Leandro, CA; Mariella, Jr., Raymond P.; Christian, Allen T [Tracy, CA; Young, Jennifer A [Berkeley, CA; Clague, David S [Livermore, CA
2011-01-18
A method of fabricating a DNA molecule of user-defined sequence. The method comprises the steps of preselecting a multiplicity of DNA sequence segments that will comprise the DNA molecule of user-defined sequence, separating the DNA sequence segments temporally, and combining the multiplicity of DNA sequence segments with at least one polymerase enzyme wherein the multiplicity of DNA sequence segments join to produce the DNA molecule of user-defined sequence. Sequence segments may be of length n, where n is an even or odd integer. In one embodiment the length of desired hybridizing overlap is specified by the user and the sequences and the protocol for combining them are guided by computational (bioinformatics) predictions. In one embodiment sequence segments are combined from multiple reading frames to span the same region of a sequence, so that multiple desired hybridizations may occur with different overlap lengths. In one embodiment starting sequence fragments are of different lengths, n, n+1, n+2, etc.
Magnetization Ratchet in Cylindrical Nanowires.
Bran, Cristina; Berganza, Eider; Fernandez-Roldan, Jose A; Palmero, Ester M; Meier, Jessica; Calle, Esther; Jaafar, Miriam; Foerster, Michael; Aballe, Lucia; Fraile Rodriguez, Arantxa; P Del Real, Rafael; Asenjo, Agustina; Chubykalo-Fesenko, Oksana; Vazquez, Manuel
2018-05-31
The unidirectional motion of information carriers such as domain walls in magnetic nanostrips is a key feature for many future spintronic applications based on shift registers. This magnetic ratchet effect has so far been achieved in a limited number of complex nanomagnetic structures, for example, by lithographically engineered pinning sites. Here we report on a simple remagnetization ratchet originating in the asymmetric potential created by the designed increasing lengths of magnetostatically coupled ferromagnetic segments in FeCo/Cu cylindrical nanowires. The magnetization reversal in neighboring segments propagates sequentially in steps starting from the shorter segments, irrespective of the applied field direction. This natural and efficient ratchet offers alternatives for the design of three-dimensional advanced storage and logic devices.
Temporally consistent probabilistic detection of new multiple sclerosis lesions in brain MRI.
Elliott, Colm; Arnold, Douglas L; Collins, D Louis; Arbel, Tal
2013-08-01
Detection of new Multiple Sclerosis (MS) lesions on magnetic resonance imaging (MRI) is important as a marker of disease activity and as a potential surrogate for relapses. We propose an approach where sequential scans are jointly segmented, to provide a temporally consistent tissue segmentation while remaining sensitive to newly appearing lesions. The method uses a two-stage classification process: 1) a Bayesian classifier provides a probabilistic brain tissue classification at each voxel of reference and follow-up scans, and 2) a random-forest based lesion-level classification provides a final identification of new lesions. Generative models are learned based on 364 scans from 95 subjects from a multi-center clinical trial. The method is evaluated on sequential brain MRI of 160 subjects from a separate multi-center clinical trial, and is compared to 1) semi-automatically generated ground truth segmentations and 2) fully manual identification of new lesions generated independently by nine expert raters on a subset of 60 subjects. For new lesions greater than 0.15 cc in size, the classifier has near perfect performance (99% sensitivity, 2% false detection rate), as compared to ground truth. The proposed method was also shown to exceed the performance of any one of the nine expert manual identifications.
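At a high level, the two-stage design separates voxel-wise evidence from lesion-level decisions. The sketch below assumes the voxel probabilities come from an external Bayesian classifier and re-classifies each candidate region with a random forest; the feature set and the interface are illustrative assumptions, not the paper's.

```python
import numpy as np

def confirm_new_lesions(voxel_probs, candidate_masks, extra_features, forest):
    """Stage 2 of a two-stage scheme: accept or reject candidate lesions.

    voxel_probs     : 3-D array of stage-1 lesion probabilities per voxel
    candidate_masks : list of boolean 3-D masks, one per candidate lesion
    extra_features  : list of 1-D arrays of lesion-level features (assumed)
    forest          : fitted classifier (e.g. sklearn RandomForestClassifier),
                      class 1 meaning "new lesion"
    """
    accepted = []
    for mask, feats in zip(candidate_masks, extra_features):
        summary = [float(voxel_probs[mask].mean()),   # mean stage-1 evidence
                   float(voxel_probs[mask].max()),    # peak stage-1 evidence
                   float(mask.sum())]                 # candidate size in voxels
        x = np.concatenate([summary, feats]).reshape(1, -1)
        if forest.predict(x)[0] == 1:
            accepted.append(mask)
    return accepted
```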
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Modified SSCP method using sequential electrophoresis of multiple nucleic acid segments
Gatti, Richard A.
2002-10-01
The present invention is directed to a method of screening large, complex, polyexonic eukaryotic genes such as the ATM gene for mutations and polymorphisms by an improved version of single strand conformation polymorphism (SSCP) electrophoresis that allows electrophoresis of two or three amplified segments in a single lane. The present invention also is directed to new mutations and polymorphisms in the ATM gene that are useful in performing more accurate screening of human DNA samples for mutations and in distinguishing mutations from polymorphisms, thereby improving the efficiency of automated screening methods.
Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR
NASA Astrophysics Data System (ADS)
Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.
2016-05-01
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
Uncertainty Analysis for Angle Calibrations Using Circle Closure
Estler, W. Tyler
1998-01-01
We analyze two types of full-circle angle calibrations: a simple closure in which a single set of unknown angular segments is sequentially compared with an unknown reference angle, and a dual closure in which two divided circles are simultaneously calibrated by intercomparison. In each case, the constraint of circle closure provides auxiliary information that (1) enables a complete calibration process without reference to separately calibrated reference artifacts, and (2) serves to reduce measurement uncertainty. We derive closed-form expressions for the combined standard uncertainties of angle calibrations, following guidelines published by the International Organization for Standardization (ISO) and NIST. The analysis includes methods for the quantitative evaluation of the standard uncertainty of small angle measurement using electronic autocollimators, including the effects of calibration uncertainty and air turbulence. PMID:28009359
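The underlying principle can be stated compactly. As a hedged restatement in generic notation (not the paper's), the closure constraint and its effect on an equally weighted least-squares adjustment are:

```latex
% n unknown segments alpha_i measured with uncorrelated errors of standard
% uncertainty u_0 must satisfy the closure condition
\sum_{i=1}^{n} \alpha_i \;=\; 360^\circ .
% Distributing the misclosure equally (the least-squares solution for equal
% weights) gives adjusted segments with
u(\hat{\alpha}_i) \;=\; u_0 \sqrt{1 - \tfrac{1}{n}} \;<\; u_0 ,
% i.e. the closure constraint alone reduces each segment's standard uncertainty.
```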
AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.
Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo
2017-09-21
Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In this paper, we introduce an adaptive ''learning rate'' that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.
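A minimal sketch of the adaptive "learning rate" idea, reduced to an exponential-forgetting update of per-class statistics, is given below; the rate schedule and update rule are assumptions meant only to convey the retention-versus-adaptation trade-off, not the paper's HDP-HMM inference.

```python
def adaptive_rate(n_obs, base=0.5, floor=0.01):
    """Learning rate that decays as a class accumulates observations, so
    established classes change slowly while newly created ones adapt fast."""
    return max(floor, base / (1.0 + n_obs))

def update_class_mean(mean, n_obs, x):
    """Blend the stored class mean with a new observation x using the
    adaptive rate (exponential forgetting)."""
    lam = adaptive_rate(n_obs)
    return (1.0 - lam) * mean + lam * x, n_obs + 1
```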
Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...
A Computational Model of Event Segmentation from Perceptual Prediction
ERIC Educational Resources Information Center
Reynolds, Jeremy R.; Zacks, Jeffrey M.; Braver, Todd S.
2007-01-01
People tend to perceive ongoing continuous activity as series of discrete events. This partitioning of continuous activity may occur, in part, because events correspond to dynamic patterns that have recurred across different contexts. Recurring patterns may lead to reliable sequential dependencies in observers' experiences, which then can be used…
Computer vision for driver assistance systems
NASA Astrophysics Data System (ADS)
Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner
1998-07-01
Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks namely the initial segmentation (object detection), the object tracking and the object classification are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.
Chamoli, Uphar; Korkusuz, Mert H; Sabnis, Ashutosh B; Manolescu, Andrei R; Tsafnat, Naomi; Diwan, Ashish D
2015-11-01
Lumbar spinal surgeries may compromise the integrity of posterior osteoligamentous structures implicating mechanical stability. Circumstances necessitating a concomitant surgery to achieve restabilisation are not well understood. The main objective of this in vitro study was to quantify global and segmental (index and adjacent levels) kinematic changes in the lumbar spine following sequential resection of the posterior osteoligamentous structures using pure moment testing protocols. Six fresh frozen cadaveric kangaroo lumbar spines (T12-S1) were tested under a bending moment in flexion-extension, bilateral bending, and axial torsion in a 6-degree-of-freedom Kinematic Spine Simulator. Specimens were tested in the following order: intact state (D0), after interspinous and supraspinous ligaments transection between L4 and L5 (D1), further after a total bilateral facetectomy between L4 and L5 (D2). Segmental motions at the cephalad, damaged, and caudal levels were recorded using an infrared-based motion tracking device. Following D1, no significant change in the global range of motion was observed in any of the bending planes. Following D2, a significant increase in the global range of motion from the baseline (D0) was observed in axial torsion (median normalised change +20%). At the damaged level, D2 resulted in a significant increase in the segmental range of motion in flexion-extension (+77%) and axial torsion (+492%). Additionally, a significant decrease in the segmental range of motion in axial torsion (-35%) was observed at the caudal level following D2. These results suggest that a multi-segment lumbar spine acts as a mechanism for transmitting motions, and that a compromised joint may significantly alter motion transfer to adjacent segments. We conclude that the interspinous and supraspinous ligaments play a modest role in restricting global spinal motions within physiologic limits. Following interspinous and supraspinous ligaments transection, a total bilateral facetectomy resulted in a significant increase in axial torsion motion, both at global and damaged levels, accompanied with a compensatory decrease in motion at the caudal level. © IMechE 2015.
Webb, Alexis B; Lengyel, Iván M; Jörg, David J; Valentin, Guillaume; Jülicher, Frank; Morelli, Luis G; Oates, Andrew C
2016-01-01
In vertebrate development, the sequential and rhythmic segmentation of the body axis is regulated by a “segmentation clock”. This clock is comprised of a population of coordinated oscillating cells that together produce rhythmic gene expression patterns in the embryo. Whether individual cells autonomously maintain oscillations, or whether oscillations depend on signals from neighboring cells is unknown. Using a transgenic zebrafish reporter line for the cyclic transcription factor Her1, we recorded single tailbud cells in vitro. We demonstrate that individual cells can behave as autonomous cellular oscillators. We described the observed variability in cell behavior using a theory of generic oscillators with correlated noise. Single cells have longer periods and lower precision than the tissue, highlighting the role of collective processes in the segmentation clock. Our work reveals a population of cells from the zebrafish segmentation clock that behave as self-sustained, autonomous oscillators with distinctive noisy dynamics. DOI: http://dx.doi.org/10.7554/eLife.08438.001 PMID:26880542
Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation
Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.
2013-01-01
This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities regarding the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points and attains a high accuracy segmentation. PMID:23983809
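The optimisation backbone is an ordinary differential-evolution loop over the contour's control-point radii in a polar parameterisation. The sketch below is a generic DE/rand/1/bin implementation, not the MACDE code; the energy function, bounds, and hyperparameters are assumptions.

```python
import numpy as np

def evolve_contour(fitness, n_points, bounds, pop_size=30, n_gen=100, F=0.7, CR=0.9):
    """Plain DE/rand/1/bin search over radial control-point positions.
    `fitness` maps a vector of radii (one per contour control point) to a
    scalar energy to be minimised."""
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, (pop_size, n_points))
    cost = np.array([fitness(ind) for ind in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # mutation: combine three distinct population members
            a, b, c = pop[np.random.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, keeping at least one mutated gene
            cross = np.random.rand(n_points) < CR
            cross[np.random.randint(n_points)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            c_trial = fitness(trial)
            if c_trial < cost[i]:
                pop[i], cost[i] = trial, c_trial
    return pop[np.argmin(cost)]
```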
Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).
Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad
2018-04-01
A tumor could be found in any area of the brain and could be of any size, shape, and contrast. There may exist multiple tumors of different types in a human brain at the same time. Accurate tumor area segmentation is considered a primary step for the treatment of brain tumors. Deep learning is a set of promising techniques that could provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order with the feeding of convolutional feature maps at the peer level. Experimental results on the BRATS 2015 benchmark data thus show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
Probing the transition state for nucleic acid hybridization using phi-value analysis.
Kim, Jandi; Shin, Jong-Shik
2010-04-27
Genetic regulation by noncoding RNA elements such as microRNA and small interfering RNA (siRNA) involves hybridization of a short single-stranded RNA with a complementary segment in a target mRNA. The physical basis of the hybridization process between the structured nucleic acids is not well understood, primarily because of the lack of information about the transition-state structure. Here we use transition-state theory, inspired by phi-value analysis in protein folding studies, to provide quantitative analysis of the relationship between changes in the secondary structure stability and the activation free energy. Time course monitoring of the hybridization reaction was performed under pseudo-steady-state conditions using a single fluorophore. The phi-value analysis indicates that the native secondary structure remains intact in the transition state. The nativelike transition state was confirmed via examination of the salt dependence of the hybridization kinetics, indicating that the number of sodium ions associated with the transition state was not substantially affected by changes in the native secondary structure. These results suggest that hybridization between structured nucleic acids proceeds through a transition state leading to formation of a nucleation complex, followed by sequential displacement of preexisting base pairings over successive small energy barriers. The proposed mechanism might provide new insight into physical processes during small RNA-mediated gene silencing, which is essential to selection of a target mRNA segment for siRNA design.
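For reference, the phi-value used in this kind of analysis is defined (in its standard protein-folding formulation, applied here by analogy to perturbations of nucleic-acid secondary structure) as the ratio of a perturbation's effect on the activation free energy to its effect on the equilibrium stability:

```latex
\phi \;=\; \frac{\Delta\Delta G^{\ddagger}}{\Delta\Delta G_{\mathrm{eq}}}
      \;=\; \frac{-RT\,\ln\!\left(k_{\mathrm{mut}}/k_{\mathrm{wt}}\right)}
                 {\Delta\Delta G_{\mathrm{eq}}}
% phi ~ 1 : the perturbed structure is native-like in the transition state;
% phi ~ 0 : the perturbed structure is not yet formed in the transition state.
```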
Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
Hanging-wall deformation above a normal fault: sequential limit analyses
NASA Astrophysics Data System (ADS)
Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand
2015-04-01
The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines some predictions on the dip and position of the active fault and axial surface, with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and the sedimentation. This sedimentation starts from the lowest point of the topography and acts at the rate rs relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction value set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero friction case, a hanging wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dip increasing during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sandbox experiments (Egholm et al., 2007, referring to Dahl, 1987). References: Egholm, D. L., M. Sandiford, O. R. Clausen, and S. B. Nielsen (2007), A new strategy for discrete element numerical models: 2. sandbox applications, Journal of Geophysical Research, 112 (B05204), doi:10.1029/2006JB004558. Groshong, R. H. (1989), Half-graben structures: Balanced models of extensional fault-bend folds, Geological Society of America Bulletin, 101 (1), 96-105. Patton, T. L. (2005), Sandbox models of downward-steepening normal faults, AAPG Bulletin, 89 (6), 781-797. Xiao, H.-B., and J. Suppe (1992), Origin of rollover, AAPG Bulletin, 76 (4), 509-529.
Considering User's Access Pattern in Multimedia File Systems
NASA Astrophysics Data System (ADS)
Cho, KyoungWoon; Ryu, YeonSeung; Won, Youjip; Koh, Kern
2002-12-01
Legacy buffer cache management schemes for multimedia servers are grounded in the assumption that the application sequentially accesses the multimedia file. However, the user access pattern may not be sequential in some circumstances, for example in distance learning applications, where the user may exploit the VCR-like functions (rewind and play) of the system and access particular segments of video repeatedly in the middle of sequential playback. Such a looping reference can cause a significant performance degradation of interval-based caching algorithms, and thus an appropriate buffer cache management scheme is required in order to deliver desirable performance even under a workload that exhibits looping reference behavior. We propose the Adaptive Buffer cache Management (ABM) scheme, which intelligently adapts to the file access characteristics. For each opened file, ABM applies either LRU replacement or interval-based caching depending on the Looping Reference Indicator, which indicates how strong the temporally localized access pattern is. According to our experiments, ABM exhibits a better buffer cache miss ratio than interval-based caching or LRU, especially when the workload exhibits not only sequential but also looping reference properties.
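In the spirit of ABM, the per-file decision reduces to computing a looping indicator from recent access offsets and switching policies accordingly; the indicator definition and threshold below are assumptions used only to make the idea concrete.

```python
def looping_reference_indicator(offsets):
    """Crude indicator: fraction of consecutive accesses that jump backwards,
    which signals rewind-and-replay (looping) behaviour."""
    if len(offsets) < 2:
        return 0.0
    backward = sum(1 for prev, cur in zip(offsets, offsets[1:]) if cur < prev)
    return backward / (len(offsets) - 1)

def choose_policy(offsets, threshold=0.2):
    """Use LRU when looping references dominate (repeated segments are
    re-referenced soon); otherwise use interval-based caching for the
    sequential stream."""
    return "LRU" if looping_reference_indicator(offsets) > threshold else "INTERVAL_CACHING"
```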
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
Dmitriev, Anton E; Kuklo, Timothy R; Lehman, Ronald A; Rosner, Michael K
2007-03-15
This is an in vitro biomechanical study. The current investigation was performed to evaluate the stabilizing potential of anterior, posterior, and circumferential cervical fixation on operative and adjacent segment motion following 2 and 3-level reconstructions. Previous studies reported increases in adjacent level range of motion (ROM) and intradiscal pressure following single-level cervical arthrodesis; however, no studies have compared adjacent level effects following multilevel anterior versus posterior reconstructions. Ten human cadaveric cervical spines were biomechanically tested using an unconstrained spine simulator under axial rotation, flexion-extension, and lateral bending loading. After intact analysis, all specimens were sequentially instrumented from C3 to C5 with: (1) lateral mass fixation, (2) anterior cervical plate with interbody cages, and (3) combined anterior and posterior fixation. Following biomechanical analysis of 2-level constructs, fixation was extended to C6 and testing repeated. Full ROM was monitored at the operative and adjacent levels, and data normalized to the intact (100%). All reconstructive methods reduced operative level ROM relative to intact specimens under all loading methods (P < 0.05). However, circumferential fixation provided the greatest segmental stability among 2 and 3-level constructs (P < 0.05). Moreover, anterior cervical plate fixation was least efficient at stabilizing operative segments following C3-C6 arthrodesis (P < 0.05). Supradjacent ROM was increased for all treatment groups compared to normal data during flexion-extension testing (P < 0.05). Similar trends were observed under axial rotation and lateral bending loading. At the distal level, flexion-extension and axial rotation testing revealed comparable intergroup differences (P < 0.05), while lateral bending loading indicated greater ROM following 2-level circumferential fixation (P < 0.05). Results from our study revealed greater adjacent level motion following all 3 fixation types. No consistent significant intergroup differences in neighboring segment kinematics were detected among reconstructions. Circumferential fixation provided the greatest level of segmental stability without additional significant increase in adjacent level ROM.
Simultaneous determination of rutin and ascorbic acid in a sequential injection lab-at-valve system.
Al-Shwaiyat, Mohammed Khair E A; Miekh, Yuliia V; Denisenko, Tatyana A; Vishnikin, Andriy B; Andruch, Vasil; Bazel, Yaroslav R
2018-02-05
A green, simple, accurate and highly sensitive sequential injection lab-at-valve procedure has been developed for the simultaneous determination of ascorbic acid (Asc) and rutin using 18-molybdo-2-phosphate Wells-Dawson heteropoly anion (18-MPA). The method is based on the dependence of the reaction rate between 18-MPA and reducing agents on the solution pH. Only Asc is capable of interacting with 18-MPA at pH 4.7, while at pH 7.4 the reaction with both Asc and rutin proceeds simultaneously. In order to improve the precision and sensitivity of the analysis, to minimize reagent consumption and to remove the Schlieren effect, the manifold for the sequential injection analysis was supplemented with an external reaction chamber, and the reaction mixture was segmented. Reduction of 18-MPA by reducing agents forms one- and two-electron heteropoly blues; the fraction of the one-electron heteropoly blue increases at low concentrations of the reducer. Measurement of the absorbance at a wavelength corresponding to the isosbestic point allows strictly linear calibration graphs to be obtained. The calibration curves were linear in the concentration ranges of 0.3-24 mg L(-1) and 0.2-14 mg L(-1), with detection limits of 0.13 mg L(-1) and 0.09 mg L(-1) for rutin and Asc, respectively. The determination of rutin was possible in the presence of up to a 20-fold molar excess of Asc. The method was applied to the determination of Asc and rutin in ascorutin tablets with acceptable accuracy and precision (1-2%). Copyright © 2017 Elsevier B.V. All rights reserved.
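Conceptually, the pH-differential readout resolves the two analytes with one subtraction: the pH 4.7 signal quantifies Asc alone, and the Asc contribution is then removed from the pH 7.4 signal to obtain rutin. The sketch below assumes blank-corrected, linear calibration slopes at the isosbestic-point wavelength; it is not the authors' data-evaluation code.

```python
def rutin_and_ascorbic_acid(a_ph47, a_ph74, k_asc47, k_asc74, k_rutin74):
    """Two-point, pH-differential evaluation.

    a_ph47, a_ph74   : blank-corrected signals at pH 4.7 and 7.4
    k_asc47, k_asc74 : assumed calibration slopes for Asc at each pH
    k_rutin74        : assumed calibration slope for rutin at pH 7.4
    """
    c_asc = a_ph47 / k_asc47                          # only Asc reacts at pH 4.7
    c_rutin = (a_ph74 - k_asc74 * c_asc) / k_rutin74  # remove the Asc share at pH 7.4
    return c_asc, c_rutin
```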
Amaris, M A; Rashev, P Z; Mintchev, M P; Bowes, K L
2002-01-01
Background and aims: Invoked peristaltic contractions and movement of solid content have not been attempted in normal canine colon. The purpose of this study was to determine if movement of solid content through the colon could be produced by microprocessor controlled sequential stimulation. Methods: The study was performed on six anaesthetised dogs. At laparotomy, a 15 cm segment of descending colon was selected, the proximal end closed with a purse string suture, and the distal end opened into a collecting container. Four sets of subserosal stimulating electrodes were implanted at 3 cm intervals. The segment of bowel was filled with a mixture of dog food and 50 plastic pellets before each of 2–5 random sessions of non-stimulated or stimulated emptying. Propagated contractions were generated using microprocessor controlled bipolar trains of 50 Hz rectangular voltage having 20 V (peak to peak) amplitude, 18 second stimulus duration, and a nine second phase lag between stimulation trains in sequential electrode sets. Results: Electrical stimulation using the above mentioned parameters resulted in powerful phasic contractions that closed the lumen. By phase locking the stimulation voltage between adjacent sets of electrodes, propagated contractions could be produced in an aboral or orad direction. The number of evacuated pellets during the stimulation sessions was significantly higher than during the non-stimulated sessions (p<0.01). Conclusions: Microprocessor controlled electrical stimulation accelerated movement of colonic content suggesting the possibility of future implantable colonic stimulators. PMID:11889065
Rysava, K; McGill, R A R; Matthiopoulos, J; Hopcraft, J G C
2016-07-15
Nutritional bottlenecks often limit the abundance of animal populations and alter individual behaviours; however, establishing animal condition over extended periods of time using non-invasive techniques has been a major limitation in population ecology. We test if the sequential measurement of δ(15) N values in a continually growing tissue, such as hair, can be used as a natural bio-logger akin to tree rings or ice cores to provide insights into nutritional stress. Nitrogen stable isotope ratios were measured by continuous-flow isotope-ratio mass spectrometry (IRMS) from 20 sequential segments along the tail hairs of 15 migratory wildebeest. Generalized Linear Models were used to test for variation between concurrent segments of hair from the same individual, and to compare the δ(15) N values of starved and non-starved animals. Correlations between δ(15) N values in the hair and periods of above-average energy demand during the annual cycle were tested using Generalized Additive Mixed Models. The time series of nitrogen isotope ratios in the tail hair are comparable between strands from the same individual. The most likely explanation for the pattern of (15) N enrichment between individuals is determined by life phase, and especially the energetic demands associated with reproduction. The mean δ(15) N value of starved animals was greater than that of non-starved animals, suggesting that higher δ(15) N values correlate with periods of nutritional stress. High δ(15) N values in the tail hair of wildebeest are correlated with periods of negative energy balance, suggesting they may be used as a reliable indicator of the animal's nutritional history. This technique might be applicable to other obligate grazers. Most importantly, the sequential isotopic analysis of hair offers a continuous record of the chronic condition of wildebeest (effectively converting point data into time series) and allows researchers to establish the animal's nutritional diary. © 2016 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd.
High-contrast imaging with an arbitrary aperture: active correction of aperture discontinuities
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; Norman, Colin; Soummer, Rémi; Perrin, Marshall; N'Diaye, Mamadou; Choquet, Elodie
2013-09-01
We present a new method to achieve high-contrast images using segmented and/or on-axis telescopes. Our approach relies on using two sequential Deformable Mirrors to compensate for the large amplitude excursions in the telescope aperture due to secondary support structures and/or segment gaps. In this configuration the parameter landscape of Deformable Mirror surfaces that yield high-contrast Point Spread Functions is not linear, and non-linear methods are needed to find the true minimum in the optimization topology. We solve the highly non-linear Monge-Ampère equation that is the fundamental equation describing the physics of phase-induced amplitude modulation. We determine the optimum configuration for our two sequential Deformable Mirror system and show that high-throughput and high-contrast solutions can be achieved using realistic surface deformations that are accessible using existing technologies. We name this process Active Compensation of Aperture Discontinuities (ACAD). We show that for geometries similar to JWST, ACAD can attain at least 10^-7 in contrast, and an order of magnitude higher for future Extremely Large Telescopes, even when the pupil features a missing segment. We show that the converging non-linear mappings resulting from our Deformable Mirror shapes actually damp near-field diffraction artifacts in the vicinity of the discontinuities. Thus ACAD actually lowers the chromatic ringing due to diffraction by segment gaps and struts while not amplifying the diffraction at the aperture edges beyond the Fresnel regime. We illustrate the broadband properties of ACAD in the case of the pupil configuration corresponding to the Astrophysics Focused Telescope Assets. Since details about these telescopes are not yet available to the broader astronomical community, our test case is based on a geometry mimicking the actual one, to the best of our knowledge.
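For context, the remapping problem behind this approach can be written in the generic Monge-Ampère form used in two-surface beam shaping; this is a textbook statement of the intensity-remapping equation, not necessarily the exact formulation solved by the authors:

```latex
% Find a potential u whose gradient map y = \nabla u(x) carries the input
% pupil intensity I_in onto the desired output intensity I_out:
\det\!\big(\nabla^{2} u(\mathbf{x})\big)
  \;=\; \frac{I_{\mathrm{in}}(\mathbf{x})}
             {I_{\mathrm{out}}\!\big(\nabla u(\mathbf{x})\big)}
% The two deformable-mirror surfaces realise (an approximation of) this mapping.
```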
Fully automated chest wall line segmentation in breast MRI by using context information
NASA Astrophysics Data System (ADS)
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Localio, A. Russell; Schnall, Mitchell D.; Kontos, Despina
2012-03-01
Breast MRI has emerged as an effective modality for the clinical management of breast cancer. Evidence suggests that computer-aided applications can further improve the diagnostic accuracy of breast MRI. A critical and challenging first step for automated breast MRI analysis, is to separate the breast as an organ from the chest wall. Manual segmentation or user-assisted interactive tools are inefficient, tedious, and error-prone, which is prohibitively impractical for processing large amounts of data from clinical trials. To address this challenge, we developed a fully automated and robust computerized segmentation method that intensively utilizes context information of breast MR imaging and the breast tissue's morphological characteristics to accurately delineate the breast and chest wall boundary. A critical component is the joint application of anisotropic diffusion and bilateral image filtering to enhance the edge that corresponds to the chest wall line (CWL) and to reduce the effect of adjacent non-CWL tissues. A CWL voting algorithm is proposed based on CWL candidates yielded from multiple sequential MRI slices, in which a CWL representative is generated and used through a dynamic time warping (DTW) algorithm to filter out inferior candidates, leaving the optimal one. Our method is validated by a representative dataset of 20 3D unilateral breast MRI scans that span the full range of the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) fibroglandular density categorization. A promising performance (average overlay percentage of 89.33%) is observed when the automated segmentation is compared to manually segmented ground truth obtained by an experienced breast imaging radiologist. The automated method runs time-efficiently at ~3 minutes for each breast MR image set (28 slices).
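The candidate-filtering step relies on dynamic time warping between a candidate chest-wall-line profile and the voted representative. A plain DTW distance such as the sketch below (generic cost, no band constraint; both are assumptions) conveys the matching idea without reproducing the authors' exact criteria.

```python
import numpy as np

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D
    profiles, e.g. a candidate chest-wall-line profile and the voted
    representative."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```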
ERIC Educational Resources Information Center
Mainela-Arnold, Elina; Evans, Julia L.
2014-01-01
This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included forty children (ages 8;5-12;3), twenty…
NASA Astrophysics Data System (ADS)
Selvaraj, A.; Nambi, I. M.
2014-12-01
In this study, an innovative technique of ZVI-mediated 'coupling of Fenton-like oxidation of phenol and Cr(VI) reduction' was attempted. The hypothesis is that Fe3+ generated from the Cr(VI) reduction process acts as electron acceptor and catalyst for the Fenton phenol oxidation process. The Fe2+ formed from the Fenton reactions can be reused for Cr(VI) reduction. Thus iron can be made to recycle between the two reactions, changing back and forth between the Fe2+ and Fe3+ forms, which makes the treatment sustainable (Fig 1). This approach advances the current Fenton-like oxidation process by (i) single-system removal of heavy metal and organic matter, (ii) recycling of iron species, hence no additional iron required, (iii) a higher contaminant removal to ZVI ratio, and (iv) eliminating sludge-related issues. Preliminary batch studies were conducted in different modes: (i) concurrent removal and (ii) sequential removal. Sequential removal was found better for in-situ PRB applications. The PRB was designed based on the kinetic rate slope and half-life time obtained from a primary column study. This PRB has two segments: (i) a ZVI segment [Cr(VI)] and (ii) an iron species segment [phenol]. This makes the treatment sustainable by (i) leaving no iron ions in the outlet stream and (ii) meeting the hypothesis, and it elongates the life span of the PRB. Sequential removal of contaminants was tested in a pilot scale PRB (Fig 2) and its life span was calculated based on the exhaustion of the filling material. Aqueous, sand and iron aliquots were collected at various segments of the PRB and analyzed thoroughly for precipitation and chemical speciation (UV spectrometry, XRD, FTIR, electron microscopy). The chemical speciation profile eliminates the uncertainties over the in-situ PRB's long-term performance. Based on the pilot scale PRB study, a field-level PRB wall construction was suggested to remove heavy metal and organic compounds from the Pallikaranai marshland (Fig 3), which is contaminated with leachate coming from the nearby Perungudi dumpsite. This research provides (i) deeper insight into an environmentally friendly, accelerated, sustainable technique for combined removal of organic matter and heavy metal, (ii) evaluation of the novel technique in a PRB, which resulted in the PRB's increased life span, and (iii) a design of the PRB to remediate the marshland and its ecosystem, thus saving the habitats related to it.
Content-based management service for medical videos.
Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre
2013-01-01
Development of health information technology has had a dramatic impact to improve the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing the relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for the purpose of efficient medical video content access.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemmer, D.E.; Kumar, N.V.; Metrione, R.M.
Toxin II from Radianthus paumotensis (RpII) has been investigated by high-resolution NMR and chemical sequencing methods. Resonance assignments have been obtained for this protein by the sequential approach. NMR assignments could not be made consistent with the previously reported primary sequence for this protein, and chemical methods have been used to determine a sequence with which the NMR data are consistent. Analysis of the 2D NOE spectra shows that the protein secondary structure is comprised of two sequences of β-sheet, probably joined into a distorted continuous sheet, connected by turns and extended loops, without any regular α-helical segments. The residues previously implicated in activity in this class of proteins, D8 and R13, occur in a loop region.
2017-01-01
Drosophila segmentation is a well-established paradigm for developmental pattern formation. However, the later stages of segment patterning, regulated by the “pair-rule” genes, are still not well understood at the system level. Building on established genetic interactions, I construct a logical model of the Drosophila pair-rule system that takes into account the demonstrated stage-specific architecture of the pair-rule gene network. Simulation of this model can accurately recapitulate the observed spatiotemporal expression of the pair-rule genes, but only when the system is provided with dynamic “gap” inputs. This result suggests that dynamic shifts of pair-rule stripes are essential for segment patterning in the trunk and provides a functional role for observed posterior-to-anterior gap domain shifts that occur during cellularisation. The model also suggests revised patterning mechanisms for the parasegment boundaries and explains the aetiology of the even-skipped null mutant phenotype. Strikingly, a slightly modified version of the model is able to pattern segments in either simultaneous or sequential modes, depending only on initial conditions. This suggests that fundamentally similar mechanisms may underlie segmentation in short-germ and long-germ arthropods. PMID:28953896
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
Shot noise enhancement from non-equilibrium plasmons in Luttinger liquid junctions.
Kim, Jaeuk U; Kinaret, Jari M; Choi, Mahn-Soo
2005-06-29
We consider a quantum wire double junction system with each wire segment described by a spinless Luttinger model, and study theoretically shot noise in this system in the sequential tunnelling regime. We find that the non-equilibrium plasmonic excitations in the central wire segment give rise to qualitatively different behaviour compared to the case with equilibrium plasmons. In particular, shot noise is greatly enhanced by them, and exceeds the Poisson limit. We show that the enhancement can be explained by the emergence of several current-carrying processes, and that the effect disappears if the channels effectively collapse to one because of fast plasmon relaxation processes, for example.
Neuromuscular disease classification system
NASA Astrophysics Data System (ADS)
Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen
2013-06-01
Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, obtained by treating the biopsy as a graph, where the nodes represent the fibers and two nodes are connected if the corresponding fibers are adjacent. Feature selection using sequential forward selection and sequential backward selection methods, classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 for the test. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.
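The wrapper feature-selection step can be illustrated with a plain sequential forward selection loop. Since Fuzzy ARTMAP has no standard scikit-learn implementation, the sketch below accepts any sklearn-style classifier as a stand-in; the cross-validation setup is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, clf, k):
    """Greedy sequential forward selection: repeatedly add the feature that
    maximises cross-validated accuracy until k features are chosen."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scores = []
        for f in remaining:
            cols = selected + [f]
            score = cross_val_score(clf, X[:, cols], y, cv=5).mean()
            scores.append((score, f))
        _, best_f = max(scores)      # best candidate this round
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```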
Walker, Ray A.; Reich, Fred R.; Russell, James T.
1978-01-01
An optical extensometer is described using sequentially pulsed light beams for measuring the dimensions of objects by detecting two opposite edges of the object without contacting the object. The light beams may be of different distinguishable light characteristics, such as polarization or wave length, and are time modulated in an alternating manner at a reference frequency. The light characteristics are of substantially the same total light energy and are distributed symmetrically. In the preferred embodiment two light beam segments of one characteristic are on opposite sides of a middle segment of another characteristic. As a result, when the beam segments are scanned sequentially across two opposite edges of the object, they produce a readout signal at the output of a photoelectric detector that is compared with the reference signal by a phase comparator to produce a measurement signal with a binary level transition when the light beams cross an edge. The light beams may be of different cross sectional geometries, including two superimposed and concentric circular beam cross sections of different diameter, or two rectangular cross sections which intersect with each other substantially perpendicular so only their central portions are superimposed. Alternately, a row of three light beams can be used including two outer beams on opposite sides and separate from a middle beam. The three beams may all be of the same light characteristic. However it is preferable that the middle beam be of a different characteristic but of the same total energy as the two outer beams.
Lou, Jigang; Li, Yuanchao; Wang, Beiyu; Meng, Yang; Wu, Tingkui; Liu, Hao
2017-01-01
In vitro biomechanical analysis after cervical disc replacement (CDR) with a novel artificial disc prosthesis (mobile core) was conducted and compared with the intact model, simulated fusion, and CDR with a fixed-core prosthesis. The purpose of this experimental study was to analyze the biomechanical changes after CDR with a novel prosthesis and the differences between fixed- and mobile-core prostheses. Six human cadaveric C2–C7 specimens were biomechanically tested sequentially in 4 different spinal models: intact specimens, simulated fusion, CDR with a fixed-core prosthesis (Discover, DePuy), and CDR with a mobile-core prosthesis (Pretic-I, Trauson). Moments up to 2 Nm with a 75 N follower load were applied in flexion–extension, left and right lateral bending, and left and right axial rotation. The total range of motion (ROM), segmental ROM, and adjacent intradiscal pressure (IDP) were calculated and analyzed in the 4 spinal models, as were the differences between the 2 disc prostheses. Compared with the intact specimens, the total ROM, segmental ROM, and IDP at the adjacent segments showed no significant difference after arthroplasty. Moreover, CDR with a mobile-core prosthesis showed slightly higher target-segment (C5/6) and total ROM than CDR with a fixed-core prosthesis (P > .05). In addition, the difference in IDP at C4/5 between the 2 prostheses was not statistically significant in any direction of motion. However, the IDP at C6/7 after CDR with the mobile-core prosthesis was significantly lower than with the fixed-core prosthesis in flexion, extension, and lateral bending (P < .05), but not in axial rotation. CDR with the novel prosthesis effectively maintained the ROM at the target segment and did not affect the ROM and IDP at the adjacent segments. Moreover, the mobile-core prosthesis yielded slightly higher target-segment and total ROM, but lower IDP at the inferior adjacent segment, than the fixed-core prosthesis. PMID:29019902
Zikmund, T; Kvasnica, L; Týč, M; Křížová, A; Colláková, J; Chmelík, R
2014-11-01
Transmitted light holographic microscopy is particularly used for quantitative phase imaging of transparent microscopic objects such as living cells. The study of the cell is based on extraction of the dynamic data on cell behaviour from the time-lapse sequence of the phase images. However, the phase images are affected by phase aberrations that make the analysis particularly difficult. This is because the phase deformation is prone to change during long-term experiments. Here, we present a novel algorithm for sequential processing of living-cell phase images in a time-lapse sequence. The algorithm compensates for the deformation of a phase image using weighted least-squares surface fitting. Moreover, it identifies and segments the individual cells in the phase image. All these procedures are performed automatically and applied immediately after obtaining every single phase image. This property of the algorithm is important for real-time cell quantitative phase imaging and for instantaneous control of the course of the experiment by playback of the recorded sequence up to the current time. Such operator intervention is a forerunner of process automation derived from image analysis. The efficiency of the proposed algorithm is demonstrated on images of rat fibrosarcoma cells using an off-axis holographic microscope. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
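The aberration-compensation step lends itself to a compact illustration. The sketch below fits a low-order 2D polynomial background to a phase image by weighted least squares; it is an assumed, simplified version of the idea, and the weight map and polynomial order are not specified in the abstract.

```python
# Illustrative sketch (not the authors' exact algorithm): compensate a tilted/curved
# phase background by weighted least-squares fitting of a low-order 2D polynomial,
# where cell regions receive low weight so the fit follows the background.
import numpy as np

def fit_phase_background(phase, weights, order=2):
    ny, nx = phase.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx.ravel() / nx, yy.ravel() / ny          # normalized coordinates
    # Design matrix with all monomials x^i * y^j, i + j <= order
    cols = [(x ** i) * (y ** j) for i in range(order + 1)
                                for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    w = np.sqrt(weights.ravel())
    coef, *_ = np.linalg.lstsq(A * w[:, None], phase.ravel() * w, rcond=None)
    return (A @ coef).reshape(phase.shape)

# Usage sketch: weights ~ 1 for background pixels, ~ 0 for pixels inside cells.
# corrected = phase - fit_phase_background(phase, weights)
```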
Fizeau interferometric cophasing of segmented mirrors: experimental validation.
Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter
2014-06-02
We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9% that served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing of an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would cophase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.
β-Helical architecture of cytoskeletal bactofilin filaments revealed by solid-state NMR
Vasa, Suresh; Lin, Lin; Shi, Chaowei; Habenstein, Birgit; Riedel, Dietmar; Kühn, Juliane; Thanbichler, Martin; Lange, Adam
2015-01-01
Bactofilins are a widespread class of bacterial filament-forming proteins, which serve as cytoskeletal scaffolds in various cellular pathways. They are characterized by a conserved architecture, featuring a central conserved domain (DUF583) that is flanked by variable terminal regions. Here, we present a detailed investigation of bactofilin filaments from Caulobacter crescentus by high-resolution solid-state NMR spectroscopy. De novo sequential resonance assignments were obtained for residues Ala39 to Phe137, spanning the conserved DUF583 domain. Analysis of the secondary chemical shifts shows that this core region adopts predominantly β-sheet secondary structure. Mutational studies of conserved hydrophobic residues located in the identified β-strand segments suggest that bactofilin folding and polymerization is mediated by an extensive and redundant network of hydrophobic interactions, consistent with the high intrinsic stability of bactofilin polymers. Transmission electron microscopy revealed a propensity of bactofilin to form filament bundles as well as sheet-like, 2D crystalline assemblies, which may represent the supramolecular arrangement of bactofilin in the native context. Based on the diffraction pattern of these 2D crystalline assemblies, scanning transmission electron microscopy measurements of the mass per length of BacA filaments, and the distribution of β-strand segments identified by solid-state NMR, we propose that the DUF583 domain adopts a β-helical architecture, in which 18 β-strand segments are arranged in six consecutive windings of a β-helix. PMID:25550503
Targeted Segment Transfer from Rye Chromosome 2R to Wheat Chromosomes 2A, 2B, and 7B.
Ren, Tianheng; Li, Zhi; Yan, Benju; Tan, Feiquan; Tang, Zongxiang; Fu, Shulan; Yang, Manyu; Ren, Zhenglong
2017-01-01
Increased chromosome instability was induced by introducing a rye (Secale cereale L.) monosomic 2R chromosome into wheat (Triticum aestivum L.). Centromere breakage and telomere dysfunction resulted in high rates of chromosome aberrations, including breakages, fissions, fusions, deletions, and translocations. Plants with target traits were sequentially selected to produce a breeding population, from which 3 translocation lines with the target traits were selected. In these lines, wheat chromosomes 2A, 2B, and 7B recombined with segments of the rye chromosome arm 2RL. This was detected by FISH analysis using the repeat sequences pSc119.2 and pAs1 together with rye genomic DNA as probes. The translocation chromosomes in these lines were named 2ASMR, 2BSMR, and 7BSMR. The small segments transferred into wheat consisted of pSc119.2 repeats and other chromatin regions that conferred resistance to stripe rust and expressed target traits. These translocation lines were highly resistant to stripe rust and expressed several typical traits associated with chromosome arm 2RL, with agronomic characteristics superior to those of their wheat parent and the corresponding disomic addition and substitution lines. The integration of molecular methods and conventional techniques to improve wheat breeding schemes is discussed. © 2017 S. Karger AG, Basel.
Students' conceptual performance on synthesis physics problems with varying mathematical complexity
NASA Astrophysics Data System (ADS)
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-06-01
A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems, sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N = 179) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalbitzer, H.R.; Neidig, K.P.; Hengstenberg, W.
1991-11-19
Complete sequence-specific assignments of the ¹H NMR spectrum of HPr protein from Staphylococcus aureus were obtained by two-dimensional NMR methods. Important secondary structure elements that can be derived from the observed nuclear Overhauser effects are a large antiparallel β-pleated sheet consisting of four strands, A, B, C, D; a segment S_AB consisting of an extended region around the active-center histidine (His-15) and an α-helix; a half-turn between strands B and C; a segment S_CD which shows no typical secondary structure; and the α-helical, C-terminal segment S_term. These general structural features are similar to those found earlier in HPr proteins from different microorganisms such as Escherichia coli, Bacillus subtilis, and Streptococcus faecalis.
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1974-01-01
An approach to simultaneous interpretation of objects in complex structures so as to maximize a combined utility function is presented. Results of the application of a computer software system to assign meaning to regions in a segmented image based on the principles described in this paper and on a special interactive sequential classification learning system, which is referenced, are demonstrated.
Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian
2017-01-01
The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodules positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
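For readers unfamiliar with the clustering stage, here is a minimal sketch of DBSCAN applied to per-superpixel feature vectors. The feature choice and parameter values are illustrative assumptions, not the adaptive distances and thresholds proposed in the paper.

```python
# Minimal sketch of the clustering stage only: DBSCAN over superpixel feature
# vectors (e.g., mean intensity and centroid position). Parameter values and the
# feature definition are illustrative, not those of the proposed HMSLIC pipeline.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_superpixels(features, eps=0.5, min_samples=5):
    """features: (n_superpixels, n_features) array; returns a label per superpixel,
    with -1 marking noise (superpixels not assigned to any nodule cluster)."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

# Hypothetical usage on one slice (mean_intensity, cx, cy assumed precomputed):
# feats = np.column_stack([mean_intensity, cx, cy])
# labels = cluster_superpixels(feats, eps=0.3, min_samples=4)
```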
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin
2017-03-01
Abdominal obesity is strongly associated with a number of diseases, and accurate assessment of subtypes of adipose tissue volume plays a significant role in predicting disease risk, diagnosis and prognosis. The objective of this study is to develop and evaluate a new computer-aided detection (CAD) scheme based on deep learning models to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on CT images. A dataset involving CT images from 40 patients was retrospectively collected and equally divided into two independent groups (i.e. training and testing groups). The new CAD scheme consisted of two sequential convolutional neural networks (CNNs), namely Selection-CNN and Segmentation-CNN. Selection-CNN was trained using 2,240 CT slices to automatically select CT slices belonging to abdomen areas, and Segmentation-CNN was trained using 84,000 fat-pixel patches to classify fat pixels as belonging to SFA or VFA. Then, data from the testing group were used to evaluate the performance of the optimized CAD scheme. Compared with manually labelled results, the classification accuracy of CT slice selection generated by Selection-CNN was 95.8%, while the accuracy of fat pixel segmentation using Segmentation-CNN was 96.8%. Therefore, this study demonstrated the feasibility of using a deep learning based CAD scheme to recognize the human abdominal section from CT scans and segment SFA and VFA from CT slices with high agreement with subjective segmentation results.
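A schematic of such a two-stage pipeline, written as a small PyTorch sketch, may help make the architecture concrete. The layer sizes, input resolutions, and the helper `classify_volume` are assumptions; the paper does not specify them.

```python
# Minimal PyTorch sketch of a two-stage pipeline in the spirit of the described CAD
# scheme: a slice-level "Selection" CNN followed by a patch-level "Segmentation" CNN
# that labels fat pixels as SFA vs. VFA. Architectures and sizes are assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_ch, n_classes, in_size):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (in_size // 4) ** 2, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

selection_cnn = SmallCNN(in_ch=1, n_classes=2, in_size=64)      # abdomen slice: yes/no
segmentation_cnn = SmallCNN(in_ch=1, n_classes=2, in_size=32)   # fat patch: SFA vs. VFA

def classify_volume(slices, patches_per_slice):
    """slices: (N,1,64,64) tensor; patches_per_slice: list of (M,1,32,32) tensors."""
    keep = selection_cnn(slices).argmax(1) == 1                  # stage 1: slice selection
    results = []
    for is_abdomen, patches in zip(keep, patches_per_slice):
        # stage 2: classify fat-pixel patches only on selected abdomen slices
        results.append(segmentation_cnn(patches).argmax(1) if is_abdomen else None)
    return results
```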
Characterization of the L4-L5-S1 motion segment using the stepwise reduction method.
Jaramillo, Héctor Enrique; Puttlitz, Christian M; McGilvray, Kirk; García, José J
2016-05-03
The two aims of this study were to generate data for a more accurate calibration of finite element models including the L5-S1 segment, and to find mechanical differences between the L4-L5 and L5-S1 segments. Accordingly, the range of motion (ROM) and facet forces for the L4-S1 segment were measured using the stepwise reduction method. This consists of sequentially testing and reducing each segment in nine stages by cutting the ligaments and facet capsules and removing the nucleus. Five L4-S1 human segments (median: 65 years, range: 53-84 years, SD=11.0 years) were loaded under a maximum pure moment of 8 Nm. The ROM was measured using stereo-photogrammetry via tracking of three markers, and the facet contact forces (CF) were measured using a Tekscan system. The ROM for the L4-L5 segment and all stages showed good agreement with published data. The major differences in ROM between the L4-L5 and L5-S1 segments were found for lateral bending and all stages, for which the L4-L5 ROM was about 1.5-3 times higher than that of the L5-S1 segment, consistent with L5-S1 facet CF about 1.3 to 4 times higher than those measured for the L4-L5 segment. For the other movements and a few stages, the L4-L5 ROM was significantly lower than that of the L5-S1 segment. ROM and CF provide important baseline data for more accurate calibration of FE models and for understanding the role that these structures play in lower lumbar spine mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Flow analysis techniques for phosphorus: an overview.
Estela, José Manuel; Cerdà, Víctor
2005-04-15
A bibliographical review on the implementation and the results obtained in the use of different flow analytical techniques for the determination of phosphorus is carried out. The sources, occurrence and importance of phosphorus, together with several aspects regarding the analysis and terminology used in the determination of this element, are briefly described. A classification as well as a brief description of the basis, advantages and disadvantages of the different existing flow techniques, namely segmented flow analysis (SFA), flow injection analysis (FIA), sequential injection analysis (SIA), all injection analysis (AIA), batch injection analysis (BIA), multicommutated FIA (MCFIA), multisyringe FIA (MSFIA) and multipumped FIA (MPFIA), is also carried out. The most relevant manuscripts regarding the analysis of phosphorus by means of flow techniques are herein classified according to the instrumental detection technique used, with the aim of facilitating their study and providing an overall view. Finally, the analytical characteristics of numerous flow methods reported in the literature are provided in the form of a table, and their applicability to samples with different matrixes, namely water samples (marine, river, estuarine, waste, industrial, drinking, etc.), soil leachates, plant leaves, toothpaste, detergents, foodstuffs (wine, orange juice, milk), biological samples, sugars, fertilizer, hydroponic solutions, soil extracts and cyanobacterial biofilms, is tabulated.
Drawing the line between constituent structure and coherence relations in visual narratives
Cohn, Neil; Bender, Patrick
2016-01-01
Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of Visual Narrative Grammar posits that hierarchic “grammatical” structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a “segmentation task” where participants drew lines between images in order to divide them into sub-episodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants’ divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. PMID:27709982
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei
2017-09-01
This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from a landslide training data. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
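The fuzzy-operator comparison can be illustrated with a few lines of code. The sketch below aggregates per-criterion membership values with AND (minimum), OR (maximum), arithmetic MEAN, and product operators; the input membership matrix and the 0.5 threshold are assumptions, not the paper's membership functions.

```python
# Sketch of combining fuzzy membership values with different operators, as in the
# comparison of AND, OR, and arithmetic-MEAN aggregation for landslide objects.
# The membership values themselves (from the 11 membership functions) are assumed inputs.
import numpy as np

def combine_memberships(mu, operator="AND"):
    """mu: (n_objects, n_criteria) array of fuzzy membership values in [0, 1]."""
    if operator == "AND":          # minimum operator
        return mu.min(axis=1)
    if operator == "OR":           # maximum operator
        return mu.max(axis=1)
    if operator == "MEAN":         # arithmetic mean
        return mu.mean(axis=1)
    if operator == "AND(*)":       # algebraic product (one common reading of AND (*))
        return mu.prod(axis=1)
    raise ValueError(operator)

# Hypothetical thresholding into a landslide map:
# landslide = combine_memberships(mu, "AND") > 0.5
```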
McCracken, D Jay; Higginbotham, Raymond A; Boulter, Jason H; Liu, Yuan; Wells, John A; Halani, Sameer H; Saindane, Amit M; Oyesiku, Nelson M; Barrow, Daniel L; Olson, Jeffrey J
2017-06-01
Sphenoid wing meningiomas (SWMs) can encase arteries of the circle of Willis, increasing their susceptibility to intraoperative vascular injury and severe ischemic complications. To demonstrate the effect of circumferential vascular encasement in SWM on postoperative ischemia. A retrospective review of 75 patients surgically treated for SWM from 2009 to 2015 was undertaken to determine the degree of circumferential vascular encasement (0°-360°) as assessed by preoperative magnetic resonance imaging (MRI). A novel grading system describing "maximum" and "total" arterial encasement scores was created. Postoperative MRIs were reviewed for total ischemia volume measured on sequential diffusion-weighted images. Of the 75 patients, 89.3% had some degree of vascular involvement with a median maximum encasement score of 3.0 (2.0-3.0) in the internal carotid artery (ICA), M1, M2, and A1 segments; 76% of patients had some degree of ischemia with median infarct volume of 3.75 cm³ (0.81-9.3 cm³). Univariate analysis determined risk factors associated with larger infarction volume, which were encasement of the supraclinoid ICA (P < .001), M1 segment (P < .001), A1 segment (P = .015), and diabetes (P = .019). As the maximum encasement score increased from 1 to 5 in each of the significant arterial segments, so did mean and median infarction volume (P < .001). Risk for devastating ischemic injury >62 cm³ was found when the ICA, M1, and A1 vessels all had ≥360° involvement (P = .001). Residual tumor was associated with smaller infarct volumes (P = .022). As infarction volume increased, so did modified Rankin Score at discharge (P = .025). Subtotal resection should be considered in SWM with significant vascular encasement of proximal arteries to limit postoperative ischemic complications. Copyright © 2017 by the Congress of Neurological Surgeons
Martin, Mario; Béjar, Javier; Esposito, Gennaro; Chávez, Diógenes; Contreras-Hernández, Enrique; Glusman, Silvio; Cortés, Ulises; Rudomín, Pablo
2017-01-01
In a previous study we developed a machine learning procedure for the automatic identification and classification of spontaneous cord dorsum potentials (CDPs). That study further supported the proposal that in the anesthetized cat, the spontaneous CDPs recorded from different lumbar spinal segments are generated by a distributed network of dorsal horn neurons with structured (non-random) patterns of functional connectivity, and that these configurations can be changed to other non-random and stable configurations after the nociceptive stimulation produced by the intradermic injection of capsaicin. Here we present a study showing that the sequence of identified forms of the spontaneous CDPs follows a Markov chain of at least order one. That is, the system has memory in the sense that the spontaneous activation of the dorsal horn neuronal ensembles producing the CDPs is not independent of the most recent activity. We used this Markovian property to build a procedure to identify portions of signals as belonging to a specific functional state of connectivity among the neuronal networks involved in the generation of the CDPs. We have tested this procedure during acute nociceptive stimulation produced by the intradermic injection of capsaicin in intact as well as spinalized preparations. Altogether, our results indicate that CDP sequences cannot be generated by a renewal stochastic process. Moreover, it is possible to describe some functional features of activity in the cord dorsum by modeling the CDP sequences as generated by a Markov order-one stochastic process. Finally, these Markov models make it possible to determine the functional state that produced a CDP sequence. The proposed identification procedures appear to be useful for the analysis of the sequential behavior of the ongoing CDPs recorded from different spinal segments in response to a variety of experimental procedures, including the changes produced by acute nociceptive stimulation. They are envisaged as a useful tool to examine alterations of the patterns of functional connectivity between dorsal horn neurons under normal and different pathological conditions, an issue of potential clinical concern.
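A minimal sketch of the order-one Markov idea follows: estimate a transition matrix per functional state and assign a new CDP sequence to the state whose model gives it the highest likelihood. Class counts, smoothing, and the state names are illustrative assumptions, not the recorded data or the authors' exact procedure.

```python
# Minimal sketch of the Markov-order-one idea: estimate a transition matrix over the
# identified CDP classes for each functional state, then assign a new CDP sequence to
# the state whose transition model gives it the highest log-likelihood.
import numpy as np

def transition_matrix(seq, n_classes, alpha=1.0):
    """Maximum-likelihood first-order transitions with add-alpha smoothing."""
    counts = np.full((n_classes, n_classes), alpha)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def identify_state(seq, models):
    """models: dict mapping state name -> transition matrix fitted on that state."""
    return max(models, key=lambda s: log_likelihood(seq, models[s]))

# Hypothetical usage with 4 CDP classes:
# P_control   = transition_matrix(control_seq, 4)
# P_capsaicin = transition_matrix(capsaicin_seq, 4)
# print(identify_state(test_seq, {"control": P_control, "capsaicin": P_capsaicin}))
```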
Classical and sequential limit analysis revisited
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
ERIC Educational Resources Information Center
Sharp, Lanette
Developed specifically for classroom teachers with a limited background in music, oral music lessons are designed to be taught in short, daily instruction segments to help students gain the most from music and transfer that knowledge to other parts of the curriculum. The lessons, a master degree project, were developed to support the Utah music…
Hidden Markov model approach for identifying the modular framework of the protein backbone.
Camproux, A C; Tuffery, P; Chevrolat, J P; Boisvieux, J F; Hazout, S
1999-12-01
The hidden Markov model (HMM) was used to identify recurrent short 3D structural building blocks (SBBs) describing protein backbones, independently of any a priori knowledge. Polypeptide chains are decomposed into a series of short segments defined by their inter-alpha-carbon distances. Basically, the model takes into account the sequentiality of the observed segments and assumes that each one corresponds to one of several possible SBBs. Fitting the model to a database of non-redundant proteins allowed us to decode proteins in terms of 12 distinct SBBs with different roles in protein structure. Some SBBs correspond to classical regular secondary structures. Others correspond to a significant subdivision of their bounding regions previously considered to be a single pattern. The major contribution of the HMM is that this model implicitly takes into account the sequential connections between SBBs and thus describes the most probable pathways by which the blocks are connected to form the framework of the protein structures. Validation of the SBBs code was performed by extracting SBB series repeated in recoding proteins and examining their structural similarities. Preliminary results on the sequence specificity of SBBs suggest promising perspectives for the prediction of SBBs or series of SBBs from the protein sequences.
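The decoding step of such an HMM can be illustrated with a small Viterbi routine. The sketch below assumes the initial, transition, and emission probabilities have already been estimated; it is not the fitting procedure used in the paper, and all probability values are placeholders.

```python
# Small numpy sketch of the HMM decoding step: given transition probabilities between
# the 12 structural building blocks (SBBs) and per-segment emission likelihoods, the
# Viterbi algorithm recovers the most probable SBB series along the backbone.
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """log_pi: (S,) initial, log_A: (S,S) transitions, log_B: (T,S) emission log-probs
    for T observed backbone segments; returns the most probable state path."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # (previous state, current state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack the optimal path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical usage with 12 SBB states (pi, A, B assumed estimated elsewhere):
# path = viterbi(np.log(pi), np.log(A), np.log(B))
```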
Todoroki, Shin-ichi
2008-01-01
Background Fiber fuse is a process of optical fiber destruction under the action of laser radiation, discovered 20 years ago. Once initiated, an optical discharge runs along the fiber core region toward the light source and leaves periodic voids whose shape looks like a bullet pointing in the direction of the laser beam. The relation between the damage pattern and the propagation mode of the optical discharge is still unclear even after the first in situ observation three years ago. Methodology/Principal Findings Fiber fuse propagation over a hetero-core splice point (Corning SMF-28e and HI 1060) was observed in situ. Sequential photographs obtained at intervals of 2.78 µs recorded a periodic emission at the tail of an optical discharge pumped by 1070 nm, 9 W light. The signal stopped when the discharge ran over the splice point. The corresponding damage pattern left in the fiber core region included a segment free of periodicity. Conclusions The spatial modulation pattern of the light emission agreed with the void train formed over the hetero-core splice point. Some segments included a bullet-shaped void pointing in the opposite direction to the laser beam propagation, although the sequential photographs did not reveal any directional change in the optical discharge propagation. PMID:18815621
Performance review using sequential sampling and a practice computer.
Difford, F
1988-06-01
The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.
Analysis of filter tuning techniques for sequential orbit determination
NASA Technical Reports Server (NTRS)
Lee, T.; Yee, C.; Oza, D.
1995-01-01
This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in realtime using a sequential filter. As the result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a realtime automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used. This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.
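The role of the process-noise level as a tuning knob can be seen in a toy sequential filter. The sketch below is a generic 1D constant-velocity Kalman filter, not the TONS or ODEAS filter; q and r are the assumed tuning parameters.

```python
# Toy sketch of how the process-noise level acts as a filter tuning knob in a
# sequential estimator. Q scales the state process noise; R is the measurement noise.
import numpy as np

def kalman_1d(measurements, dt=1.0, q=1e-3, r=1.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                     # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],      # process noise (the tuning parameter)
                      [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Increasing q makes the filter follow measurement noise more closely; decreasing it
# smooths more but risks lagging real dynamics, which is the trade-off a tuning study examines.
```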
Sequential analysis applied to clinical trials in dentistry: a systematic review.
Bogowicz, P; Flores-Mir, C; Major, P W; Heo, G
2008-01-01
Clinical trials employ sequential analysis for the ethical and economic benefits it brings. In dentistry, as in other fields, resources are scarce and efforts are made to ensure that patients are treated ethically. The objective of this systematic review was to characterise the use of sequential analysis for clinical trials in dentistry. We searched various databases from 1900 through to January 2008. Articles were selected for review if they were clinical trials in the field of dentistry that had applied some form of sequential analysis. Selection was carried out independently by two of the authors. We included 18 trials from various specialties, which involved many different interventions. We conclude that sequential analysis seems to be underused in this field but that there are sufficient methodological resources in place for future applications. Evidence-Based Dentistry (2008) 9, 55-62. doi:10.1038/sj.ebd.6400587.
Stochastic Control of Multi-Scale Networks: Modeling, Analysis and Algorithms
2014-10-20
B. T. Swapna, Atilla Eryilmaz, Ness B. Shroff. Throughput-Delay Analysis of Random Linear Network Coding for Wireless ...
John S. Baras, Shanshan Zheng. Sequential Anomaly Detection in Wireless Sensor Networks and Effects of Long-Range Dependent Data. Sequential Analysis (10 2012). doi: 10.1080/07474946.2012.719435
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Tadeusiewicz, Ryszard
2000-04-01
This paper presents and discusses possibilities of applying selected algorithms from the group of syntactic methods of pattern recognition to analyze and extract features of shapes and to diagnose morphological lesions seen on selected medical images. This method is particularly useful for specialist morphological analysis of the shapes of selected organs of the abdominal cavity, conducted to diagnose disease symptoms occurring in the main pancreatic ducts, upper segments of ureters and renal pelvis. Analysis of the correct morphology of these organs is possible with the application of the sequential and tree methods belonging to the group of syntactic methods of pattern recognition. The objective of this analysis is to support early diagnosis of disease lesions, mainly characteristic of carcinoma and pancreatitis, based on examinations of ERCP images, and diagnosis of morphological lesions in ureters and renal pelvis based on an analysis of urograms. In the analysis of ERCP images the main objective is to recognize morphological lesions in pancreatic ducts characteristic of carcinoma and chronic pancreatitis, while in the case of kidney radiogram analysis the aim is to diagnose local irregularities of the ureter lumen and to examine the morphology of the renal pelvis and renal calyxes. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages of description of features of shapes and context-free sequential attributed grammars. These methods allow the aforementioned lesions to be recognized and described very efficiently on the width diagrams of the examined structures obtained as a result of initial image processing. Additionally, in order to support the analysis of the correct structure of the renal pelvis, a method using a tree grammar for syntactic pattern recognition to define its correct morphological shapes has been presented.
Random Amplification and Pyrosequencing for Identification of Novel Viral Genome Sequences
Hang, Jun; Forshey, Brett M.; Kochel, Tadeusz J.; Li, Tao; Solórzano, Víctor Fiestas; Halsey, Eric S.; Kuschner, Robert A.
2012-01-01
ssRNA viruses have high levels of genomic divergence, which can lead to difficulty in genomic characterization of new viruses using traditional PCR amplification and sequencing methods. In this study, random reverse transcription, anchored random PCR amplification, and high-throughput pyrosequencing were used to identify orthobunyavirus sequences from total RNA extracted from viral cultures of acute febrile illness specimens. Draft genome sequence for the orthobunyavirus L segment was assembled and sequentially extended using de novo assembly contigs from pyrosequencing reads and orthobunyavirus sequences in GenBank as guidance. Accuracy and continuous coverage were achieved by mapping all reads to the L segment draft sequence. Subsequently, RT-PCR and Sanger sequencing were used to complete the genome sequence. The complete L segment was found to be 6936 bases in length, encoding a 2248-aa putative RNA polymerase. The identified L segment was distinct from previously published South American orthobunyaviruses, sharing 63% and 54% identity at the nucleotide and amino acid level, respectively, with the complete Oropouche virus L segment and 73% and 81% identity at the nucleotide and amino acid level, respectively, with a partial Caraparu virus L segment. The result demonstrated the effectiveness of a sequence-independent amplification and next-generation sequencing approach for obtaining complete viral genomes from total nucleic acid extracts and its use in pathogen discovery. PMID:22468136
ERIC Educational Resources Information Center
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse
2015-01-01
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
Biomechanical considerations for distraction of the monobloc, Le Fort III, and Le Fort I segments.
Figueroa, Alvaro A; Polley, John W; Figueroa, Aaron D
2010-09-01
Distraction osteogenesis is effective for correction of severe maxillary and midface hypoplasia. The vectors controlling the segment to be moved must be planned. This requires knowledge of the physical characteristics of the osteotomized bone segment, including the location of the center of mass (free body) and the center of resistance (restrained body). The purpose of this study was to determine the center of mass of the osteotomized monobloc, Le Fort III, and Le Fort I bone segments. A dry human skull was used to sequentially isolate three bone segments: monobloc, Le Fort III, and Le Fort I. Each segment was suspended from three different points, and digital photographs were obtained from each suspension. The photographs were digitally superimposed. The center of mass was determined by calculating the intersection of the suspension lines. The center of mass for the monobloc segment was located at a point 43.5 percent of the total height from the occlusal plane to the superior edge of the frontal bone supraorbital osteotomy. For the Le Fort III, it was located 38 percent of the total height from the occlusal plane to the superior edge of the osteotomized base of the nasal bones. For the Le Fort I, it was 53 percent of the total height from the occlusal plane to the superior edge of the osteotomized maxillary bone. Knowledge of the location of the center of mass in the monobloc, Le Fort III, and Le Fort I segments provides a starting point for the clinician when planning vectors for advancement with distraction.
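The suspension construction reduces to a simple geometric computation: each superimposed photograph contributes one plumb line through the center of mass, and the center of mass is recovered as the (least-squares) intersection of those lines. The sketch below assumes each line has been digitized as a point and a direction; the numbers are invented.

```python
# Geometry sketch for the suspension method: intersect the superimposed suspension
# lines in a least-squares sense to estimate the center of mass in image coordinates.
import numpy as np

def intersect_lines(points, directions):
    """points, directions: (n, 2) arrays; each line i is p_i + t * d_i.
    Returns the point minimizing the summed squared distances to all lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(np.asarray(points, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        proj = np.eye(2) - np.outer(d, d)      # projector onto the line's normal space
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Hypothetical digitized suspension lines from three photographs:
# p = np.array([[10.0, 2.0], [4.0, 8.0], [12.0, 9.0]])
# d = np.array([[0.1, 1.0], [1.0, 0.2], [-0.8, 0.9]])
# print(intersect_lines(p, d))                 # estimated center of mass
```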
Topology and Dynamics of the Zebrafish Segmentation Clock Core Circuit
Schröter, Christian; Isakova, Alina; Hens, Korneel; Soroldoni, Daniele; Gajewski, Martin; Jülicher, Frank; Maerkl, Sebastian J.; Deplancke, Bart; Oates, Andrew C.
2012-01-01
During vertebrate embryogenesis, the rhythmic and sequential segmentation of the body axis is regulated by an oscillating genetic network termed the segmentation clock. We describe a new dynamic model for the core pace-making circuit of the zebrafish segmentation clock based on a systematic biochemical investigation of the network's topology and precise measurements of somitogenesis dynamics in novel genetic mutants. We show that the core pace-making circuit consists of two distinct negative feedback loops, one with Her1 homodimers and the other with Her7:Hes6 heterodimers, operating in parallel. To explain the observed single and double mutant phenotypes of her1, her7, and hes6 mutant embryos in our dynamic model, we postulate that the availability and effective stability of the dimers with DNA binding activity is controlled in a “dimer cloud” that contains all possible dimeric combinations between the three factors. This feature of our model predicts that Hes6 protein levels should oscillate despite constant hes6 mRNA production, which we confirm experimentally using novel Hes6 antibodies. The control of the circuit's dynamics by a population of dimers with and without DNA binding activity is a new principle for the segmentation clock and may be relevant to other biological clocks and transcriptional regulatory networks. PMID:22911291
The Relevance of Visual Sequential Memory to Reading.
ERIC Educational Resources Information Center
Crispin, Lisa; And Others
1984-01-01
Results of three visual sequential memory tests and a group reading test given to 19 elementary students are discussed in terms of task analysis and structuralist approaches to analysis of reading skills. Relation of visual sequential memory to other reading subskills is considered in light of current reasearch. (CMG)
Wang, Qian; Song, Enmin; Jin, Renchao; Han, Ping; Wang, Xiaotong; Zhou, Yanying; Zeng, Jianchao
2009-06-01
The aim of this study was to develop a novel algorithm for segmenting lung nodules on three-dimensional (3D) computed tomographic images to improve the performance of computer-aided diagnosis (CAD) systems. The database used in this study consists of two data sets obtained from the Lung Imaging Database Consortium. The first data set, containing 23 nodules (22% irregular nodules, 13% nonsolid nodules, 17% nodules attached to other structures), was used for training. The second data set, containing 64 nodules (37% irregular nodules, 40% nonsolid nodules, 62% nodules attached to other structures), was used for testing. Two key techniques were developed in the segmentation algorithm: (1) a 3D extended dynamic programming model, with a newly defined internal cost function based on the information between adjacent slices, allowing parameters to be adapted to each slice, and (2) a multidirection fusion technique, which makes use of the complementary relationships among different directions to improve the final segmentation accuracy. The performance of this approach was evaluated by the overlap criterion, complemented by the true-positive fraction and the false-positive fraction criteria. The mean values of the overlap, true-positive fraction, and false-positive fraction for the first data set achieved using the segmentation scheme were 66%, 75%, and 15%, respectively, and the corresponding values for the second data set were 58%, 71%, and 22%, respectively. The experimental results indicate that this segmentation scheme can achieve better performance for nodule segmentation than two existing algorithms reported in the literature. The proposed 3D extended dynamic programming model is an effective way to segment sequential images of lung nodules. The proposed multidirection fusion technique is capable of reducing segmentation errors especially for no-nodule and near-end slices, thus resulting in better overall performance.
Automated segmentation of dental CBCT image with prior-guided sequential random forests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first layer of the random forest classifier that can select discriminative features for segmentation. Based on the first layer of trained classifiers, the probability maps are updated and then employed to further train the next layer of the random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors' method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
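The layered, prior-guided training loop can be sketched in a few lines. In the simplified version below the "context feature" is just each voxel's current probability and a generic random forest is used; the paper's actual feature extraction, classifier settings, and 3D context sampling differ.

```python
# Hedged sketch of a prior-guided sequential classifier: each layer of random forests
# is trained on appearance features plus context derived from the previous layer's
# probability map, and the probability map is updated between layers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_sequential_forests(appearance, prior_prob, labels, n_layers=3):
    """appearance: (n_voxels, n_feat); prior_prob, labels: (n_voxels,) arrays."""
    prob = prior_prob.copy()
    layers = []
    for _ in range(n_layers):
        X = np.column_stack([appearance, prob])        # append context from current map
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        prob = clf.predict_proba(X)[:, 1]              # updated probability map
        layers.append(clf)
    return layers

def apply_sequential_forests(appearance, prior_prob, layers):
    prob = prior_prob.copy()
    for clf in layers:
        prob = clf.predict_proba(np.column_stack([appearance, prob]))[:, 1]
    return prob
```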
All words are not created equal: Expectations about word length guide infant statistical learning
Lew-Williams, Casey; Saffran, Jenny R.
2011-01-01
Infants have been described as ‘statistical learners’ capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either bisyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a different set of bisyllabic or trisyllabic nonsense words. Listening times revealed successful segmentation of words from fluent speech only when words were uniformly bisyllabic or trisyllabic throughout both phases of the experiment. Hearing trisyllabic words during the pre-exposure phase derailed infants’ abilities to segment speech into bisyllabic words, and vice versa. We conclude that prior knowledge about word length equips infants with perceptual expectations that facilitate efficient processing of subsequent language input. PMID:22088408
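The transitional-probability statistic at the heart of such segmentation studies is easy to state in code. The syllable stream below is invented for illustration and is not the stimulus set used with the infants.

```python
# Sketch of the transitional-probability statistic underlying statistical word
# segmentation: TP(x -> y) = frequency(xy) / frequency(x), over a syllable stream.
from collections import Counter

def transitional_probabilities(syllables):
    pair_counts = Counter(zip(syllables[:-1], syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

stream = "gokula tipolu gokula bidaku tipolu gokula".replace(" ", "")
syll = [stream[i:i + 2] for i in range(0, len(stream), 2)]   # toy 2-letter "syllables"
tps = transitional_probabilities(syll)
# Within-word transitions (e.g., "go" -> "ku") have higher TP than transitions across
# word boundaries, which is the cue infants are hypothesized to track.
```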
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of the graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows to process even large volumes in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.
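A one-dimensional analogue of the Potts model conveys the idea without the 3D splitting or GPU machinery. The dynamic program below is a standard textbook solver, not the authors' implementation; gamma is the assumed jump penalty.

```python
# One-dimensional illustration of the Potts (piecewise-constant Mumford-Shah) model:
# minimize the squared deviation within segments plus a penalty gamma per segment
# (charging gamma per segment instead of per jump only shifts the energy by a constant).
import numpy as np

def potts_1d(f, gamma):
    f = np.asarray(f, dtype=float)
    n = len(f)
    s1 = np.concatenate([[0.0], np.cumsum(f)])           # prefix sums for fast segment SSE
    s2 = np.concatenate([[0.0], np.cumsum(f ** 2)])

    def sse(l, r):                                        # squared deviation of f[l..r]
        m = r - l + 1
        return s2[r + 1] - s2[l] - (s1[r + 1] - s1[l]) ** 2 / m

    F = np.zeros(n + 1)                                   # F[r] = optimal cost of f[0..r-1]
    prev = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        costs = [F[l] + gamma + sse(l, r - 1) for l in range(r)]
        prev[r] = int(np.argmin(costs))
        F[r] = costs[prev[r]]
    u, r = np.empty(n), n
    while r > 0:                                          # backtrack: fill segments with means
        l = prev[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u

# Toy usage: a noisy two-level signal
# signal = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100)
# piecewise_constant = potts_1d(signal, gamma=1.0)
```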
Colon transit scintigraphy in health and constipation using oral iodine-131-cellulose
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLean, R.G.; Smart, R.C.; Gaston-Parry, D.
1990-06-01
The purpose of the study was to assess if a new scintigraphic method for noninvasive assessment of colonic transit could differentiate between subjects with normal bowel transit and those with constipation. Eleven normal subjects and 29 constipated patients were given 4 MBq iodine-131-cellulose (¹³¹I-cellulose) orally and sequential abdominal scans were performed at 6, 24, 48, 72, and 96 hr from which total and segmental percent retentions were calculated. There were clear differences between the normal subjects and the constipated patients for the total percent retention at all time intervals, on a segmental basis in the right colon at 24 hr, and in all segments at 48 and 72 hr. Three-day urinary excretion of radioiodine was minimal; 2.4% ± 1.2% (mean ± s.d.) in constipated patients and 3.1% ± 0.8% in normals, with approximately 75% occurring in the first day. The use of oral radiotracers in the investigation of constipation appears promising.
Polysaccharide compositions of collenchyma cell walls from celery (Apium graveolens L.) petioles.
Chen, Da; Harris, Philip J; Sims, Ian M; Zujovic, Zoran; Melton, Laurence D
2017-06-15
Collenchyma serves as a mechanical support tissue for many herbaceous plants. Previous work based on solid-state NMR and immunomicroscopy suggested collenchyma cell walls (CWs) may have similar polysaccharide compositions to those commonly found in eudicotyledon parenchyma walls, but no detailed chemical analysis was available. In this study, compositions and structures of cell wall polysaccharides of peripheral collenchyma from celery petioles were investigated. This is the first detailed investigation of the cell wall composition of collenchyma from any plant. Celery petioles were found to elongate throughout their length during early growth, but as they matured elongation was increasingly confined to the upper region, until elongation ceased. Mature, fully elongated, petioles were divided into three equal segments, upper, middle and lower, and peripheral collenchyma strands isolated from each. Cell walls (CWs) were prepared from the strands, which also yielded a HEPES buffer soluble fraction. The CWs were sequentially extracted with CDTA, Na₂CO₃, 1 M KOH and 4 M KOH. Monosaccharide compositions of the CWs showed that pectin was the most abundant polysaccharide [with homogalacturonan (HG) more abundant than rhamnogalacturonan I (RG-I) and rhamnogalacturonan II (RG-II)], followed by cellulose, and other polysaccharides, mainly xyloglucans, with smaller amounts of heteroxylans and heteromannans. CWs from different segments had similar compositions, but those from the upper segments had slightly more pectin than those from the lower two segments. Further, the pectin in the CWs of the upper segment had a higher degree of methyl esterification than the other segments. In addition to the anticipated water-soluble pectins, the HEPES-soluble fractions surprisingly contained large amounts of heteroxylans. The CDTA and Na₂CO₃ fractions were rich in HG and RG-I, the 1 M KOH fraction had abundant heteroxylans, the 4 M KOH fraction was rich in xyloglucan and heteromannans, and cellulose was predominant in the final residue. The structures of the xyloglucans, heteroxylans and heteromannans were deduced from the linkage analysis and were similar to those present in most eudicotyledon parenchyma CWs. Cross polarization with magic angle spinning (CP/MAS) NMR spectroscopy showed no apparent difference in the rigid and semi-rigid polysaccharides in the CWs of the three segments. Single-pulse excitation with magic-angle spinning (SPE/MAS) NMR spectroscopy, which detects highly mobile polysaccharides, showed the presence of arabinan, the detailed structure of which varied among the cell walls from the three segments. Celery collenchyma CWs have similar polysaccharide compositions to most eudicotyledon parenchyma CWs. However, celery collenchyma CWs have much higher XG content than celery parenchyma CWs. The degree of methyl esterification of pectin and the structures of the arabinan side chains of RG-I show some variation in the collenchyma CWs from the different segments. Unexpectedly, the HEPES-soluble fraction contained a large amount of heteroxylans.
Meisters, Julia; Diedenhofen, Birk; Musch, Jochen
2018-04-20
For decades, sequential lineups have been considered superior to simultaneous lineups in the context of eyewitness identification. However, most of the research leading to this conclusion was based on the analysis of diagnosticity ratios that do not control for the respondent's response criterion. Recent research based on the analysis of ROC curves has found either equal discriminability for sequential and simultaneous lineups, or higher discriminability for simultaneous lineups. Some evidence for potential position effects and for criterion shifts in sequential lineups has also been reported. Using ROC curve analysis, we investigated the effects of the suspect's position on discriminability and response criteria in both simultaneous and sequential lineups. We found that sequential lineups suffered from an unwanted position effect. Respondents employed a strict criterion for the earliest lineup positions, and shifted to a more liberal criterion for later positions. No position effects and no criterion shifts were observed in simultaneous lineups. This result suggests that sequential lineups are not superior to simultaneous lineups, and may give rise to unwanted position effects that have to be considered when conducting police lineups.
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
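The diversity-adjusted required information size can be illustrated with a short calculation. The sketch below is a minimal Python example assuming the conventional two-proportion sample-size formula and a 1/(1 − D²) adjustment factor, as described in the Trial Sequential Analysis literature; the event proportions, α, β and D² values used here are hypothetical.

```python
from scipy.stats import norm

def required_information_size(p_control, rrr, alpha=0.05, beta=0.10, diversity=0.0):
    """Diversity-adjusted required information size (total participants).

    p_control : anticipated control-group event proportion
    rrr       : anticipated relative risk reduction (e.g. 0.20 for 20%)
    diversity : D^2 heterogeneity estimate in [0, 1)
    """
    p_exp = p_control * (1.0 - rrr)        # anticipated experimental event proportion
    p_bar = (p_control + p_exp) / 2.0      # pooled proportion
    z_a = norm.ppf(1.0 - alpha / 2.0)      # two-sided significance threshold
    z_b = norm.ppf(1.0 - beta)             # power
    n_per_group = (2.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar)
                   / (p_control - p_exp) ** 2)
    fixed_effect_is = 2.0 * n_per_group    # information size ignoring heterogeneity
    return fixed_effect_is / (1.0 - diversity)  # inflate by 1 / (1 - D^2)

# Hypothetical example: 20% control event rate, 20% RRR, D^2 = 0.25
print(round(required_information_size(0.20, 0.20, diversity=0.25)))
```

With these illustrative inputs the adjusted information size is roughly a third larger than the unadjusted one, which is exactly the kind of shortfall the boundaries are meant to guard against.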
Aydogdu, Ibrahim; Tanriverdi, Zeynep; Ertekin, Cumhur
2011-06-01
The aim of this study is to investigate a probable dysfunction of the central pattern generator (CPG) in dysphagic patients with ALS. We investigated 58 patients with ALS, 23 patients with PD, and 33 normal subjects. The laryngeal movements and EMG of the submental muscles were recorded during sequential water swallowing (SWS) of 100 ml of water. The coordination of SWS and respiration was also studied in some normal cases and ALS patients. Normal subjects could complete the SWS optimally within 10 s using 7 swallows, while in dysphagic ALS patients, the total duration and the number of swallows were significantly increased. The novel finding was that the regularity and rhythmicity of the swallowing pattern during SWS was disorganized into an irregular and arrhythmic pattern in 43% of the ALS patients. The duration and speed of swallowing were the most sensitive parameters for the disturbed oropharyngeal motility during SWS. The corticobulbar control of swallowing is insufficient in ALS, and the swallowing CPG cannot work well enough to produce segmental muscle activation and sequential swallowing. CPG dysfunction can result in irregular and arrhythmic sequential swallowing in ALS patients with bulbar plus pseudobulbar types. The arrhythmic SWS pattern can be considered a kind of CPG dysfunction in human ALS cases with dysphagia. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
Science documentary video slides to enhance education and communication
NASA Astrophysics Data System (ADS)
Byrne, J. M.; Little, L. J.; Dodgson, K.
2010-12-01
Documentary production can convey powerful messages using a combination of authentic science and reinforcing video imagery. Conventional documentary production contains too much information for many viewers to follow; hence many powerful points may be lost. But documentary productions that are re-edited into short video sequences and made available through web based video servers allow the teacher/viewer to access the material as video slides. Each video slide contains one critical discussion segment of the larger documentary. A teacher/viewer can review the documentary one segment at a time in a class room, public forum, or in the comfort of home. The sequential presentation of the video slides allows the viewer to best absorb the documentary message. The website environment provides space for additional questions and discussion to enhance the video message.
Kinetic chain abnormalities in the athletic shoulder.
Sciascia, Aaron; Thigpen, Charles; Namdari, Surena; Baldwin, Keith
2012-03-01
Overhead activities require the shoulder to be exposed to and sustain repetitive loads. The segmental activation of the body's links, known as the kinetic chain, allows this to occur effectively. Proper muscle activation is achieved through generation of energy from the central segment or core, which then transfers the energy to the terminal links of the shoulder, elbow, and hand. The kinetic chain is best characterized by 3 components: optimized anatomy, reproducible efficient motor patterns, and the sequential generation of forces. However, tissue injury and anatomic deficits such as weakness and/or tightness in the leg, pelvic core, or scapular musculature can lead to overuse shoulder injuries. These injuries can be prevented and maladaptations can be detected with a thorough understanding of biomechanics of the kinetic chain as it relates to overhead activity.
Real-time high dynamic range laser scanning microscopy
NASA Astrophysics Data System (ADS)
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-04-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
Neel, Sean T
2014-11-01
A cost analysis was performed to evaluate the effect on physicians in the United States of a transition from delayed sequential cataract surgery to immediate sequential cataract surgery. Financial and efficiency impacts of this change were evaluated to determine whether efficiency gains could offset potential reduced revenue. A cost analysis using Medicare cataract surgery volume estimates, Medicare 2012 physician cataract surgery reimbursement schedules, and estimates of potential additional office visit revenue comparing immediate sequential cataract surgery with delayed sequential cataract surgery for a single specialty ophthalmology practice in West Tennessee. This model should give an indication of the effect on physicians on a national basis. A single specialty ophthalmology practice in West Tennessee was found to have a cataract surgery revenue loss of $126,000, increased revenue from office visits of $34,449 to $106,271 (minimum and maximum offset methods), and a net loss of $19,900 to $91,700 (base case) with the conversion to immediate sequential cataract surgery. Physicians likely stand to lose financially, and this loss cannot be offset by increased patient visits under the current reimbursement system. This may result in physician resistance to converting to immediate sequential cataract surgery, gaming, and supplier-induced demand.
Reverse control for humanoid robot task recognition.
Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul
2012-12-01
Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
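The decoupling of a secondary controller via projection into the null space of a higher-priority task can be sketched in a few lines. This is a generic illustration of the task-function formalism mentioned above, not the authors' implementation; the Jacobians and task velocities below are hypothetical.

```python
import numpy as np

def null_space_projector(J):
    """Projector onto the null space of task Jacobian J: P = I - J^+ J."""
    n = J.shape[1]
    return np.eye(n) - np.linalg.pinv(J) @ J

# Hypothetical two-task example on a 7-DoF arm
J1 = np.random.randn(3, 7)        # primary task (e.g. keep the plate horizontal)
J2 = np.random.randn(3, 7)        # secondary task (e.g. place a plate on the table)
dx1 = np.array([0.0, 0.0, 0.1])   # desired primary task velocity
dx2 = np.array([0.05, 0.0, 0.0])  # desired secondary task velocity

q1 = np.linalg.pinv(J1) @ dx1                    # primary task solution
P1 = null_space_projector(J1)
q2 = np.linalg.pinv(J2 @ P1) @ (dx2 - J2 @ q1)   # secondary task, restricted to the null space
q_dot = q1 + P1 @ q2

print(np.allclose(J1 @ q_dot, dx1))              # primary task is left undisturbed
```

Because the secondary contribution lives entirely in the null space of J1, the primary task velocity is reproduced exactly, which is the property the recognition method exploits when testing candidate task combinations against an observed motion.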
ERIC Educational Resources Information Center
Willson, Victor L.; And Others
1985-01-01
Presents results of confirmatory factor analysis of the Kaufman Assessment Battery for children which is based on the underlying theoretical model of sequential, simultaneous, and achievement factors. Found support for the two-factor, simultaneous and sequential processing model. (MCF)
Sequential Testing: Basics and Benefits
1978-03-01
Technical Report No. 12325, Sequential Testing: Basics and Benefits. Contents: I. Introduction and Summary; II. Sequential Analysis; III. Mathematics of Sequential Testing; IV. ... The added benefit of reduced energy needs is inherent in this testing method. The text was originally released by the authors in 1972.
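As context for the report's subject matter, a minimal sketch of Wald's sequential probability ratio test, the basic procedure of sequential testing, is given below; the hypotheses, error rates and simulated data are illustrative assumptions, not values taken from the report.

```python
import math, random

def sprt_bernoulli(samples, p0=0.9, p1=0.7, alpha=0.05, beta=0.10):
    """Wald SPRT: accept H0 (p = p0) or H1 (p = p1) as soon as the evidence suffices."""
    upper = math.log((1 - beta) / alpha)   # crossing this accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing this accepts H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "reject H0 (reliability p1)", n
        if llr <= lower:
            return "accept H0 (reliability p0)", n
    return "continue testing", len(samples)

random.seed(0)
print(sprt_bernoulli([random.random() < 0.9 for _ in range(200)]))
```

The appeal, and the reduced test-time benefit the report refers to, is that the decision is usually reached after far fewer trials than a fixed-sample test with the same error rates would require.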
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate quantitatively the object. To ensure efficient quality control, the aim is to be able to state if reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
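The idea of using fitting parameters of extracted edge points as a quality measure can be illustrated with a short sketch: fit a line to hypothetical edge points, then use the residual spread and the parameter covariance to form a confidence interval on the localization error. This is only an illustration of the principle, not the authors' procedure, and the noise level is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0.0, 50.0, 120)                        # pixel coordinates along a workpiece edge
y_true = 0.35 * x + 4.0                                # ideal straight edge
y_meas = y_true + rng.normal(scale=0.4, size=x.size)   # localization noise from segmentation

# Least-squares line fit to the extracted edge points
(slope, intercept), cov = np.polyfit(x, y_meas, deg=1, cov=True)
residuals = y_meas - (slope * x + intercept)
sigma = residuals.std(ddof=2)                          # per-point localization error estimate

# 95% confidence interval on the slope, used here as the segmentation quality measure
t = stats.t.ppf(0.975, df=x.size - 2)
slope_ci = (slope - t * np.sqrt(cov[0, 0]), slope + t * np.sqrt(cov[0, 0]))
print(f"sigma = {sigma:.3f} px, slope CI = ({slope_ci[0]:.4f}, {slope_ci[1]:.4f})")
```

Propagating such per-feature uncertainties through calibration, matching and triangulation is what ultimately yields the position uncertainty of the reconstructed workpiece.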
Sequential analysis of child pain behavior and maternal responses: an observational study.
Langer, Shelby L; Romano, Joan; Brown, Jonathon D; Nielson, Heather; Ou, Bobby; Rauch, Christina; Zullo, Lirra; Levy, Rona L
2017-09-01
This laboratory-based study examined lagged associations between child pain behavior and maternal responses as a function of maternal catastrophizing (CAT). Mothers completed the parent version of the Pain Catastrophizing Scale. Children participated in a validated water ingestion procedure to induce abdominal discomfort with mothers present. Video recordings of their interactions were edited into 30-second segments and coded by 2 raters for presence of child pain behavior, maternal solicitousness, and nontask conversation. Kappa reliabilities ranged from 0.83 to 0.95. Maternal CAT was positively associated with child pain behavior and maternal solicitousness, P values <0.05. In lagged analyses, child pain behavior during a given segment (T) was positively associated with child pain behavior during the subsequent segment (T + 1), P <0.05. Maternal CAT moderated the association between (1) child pain behavior at T and maternal solicitousness at T + 1, and (2) solicitousness at T and child pain behavior at T + 1, P values <0.05. Mothers higher in CAT responded solicitously at T + 1 irrespective of their child's preceding pain behavior, and their children exhibited pain behavior at T + 1 irrespective of the mother's preceding solicitousness. Mothers lower in CAT were more likely to respond solicitously at T + 1 after child pain behavior, and their children were more likely to exhibit pain behavior at T + 1 after maternal solicitousness. These findings indicate that high CAT mothers and their children exhibit inflexible patterns of maternal solicitousness and child pain behavior, and that such families may benefit from interventions to decrease CAT and develop more adaptive responses.
Textural feature calculated from segmental fluences as a modulation index for VMAT.
Park, So-Yeon; Park, Jong Min; Kim, Jung-In; Kim, Hyoungnyoun; Kim, Il Han; Ye, Sung-Joon
2015-12-01
Textural features calculated from various segmental fluences of volumetric modulated arc therapy (VMAT) plans were optimized to enhance their performance in predicting plan delivery accuracy. Twenty prostate and twenty head and neck VMAT plans were selected retrospectively. Fluences were generated for each VMAT plan by summations of segments at sequential groups of control points. The numbers of summed segments were 5, 10, 20, 45, 90, 178 and 356. For each fluence, we investigated 6 textural features: angular second moment, inverse difference moment, contrast, variance, correlation and entropy (at displacement distances d = 1, 5 and 10). Spearman's rank correlation coefficients (rs) were calculated between each textural feature and several different measures of VMAT delivery accuracy. The rs values of contrast (d = 10) with 10 segments against the global and local gamma passing rates with the 2%/2 mm criterion were 0.666 (p <0.001) and 0.573 (p <0.001), respectively. It showed rs values of -0.895 (p <0.001) and 0.727 (p <0.001) against multi-leaf collimator positional errors and gantry angle errors during delivery, respectively. The number of statistically significant rs values (p <0.05) with the changes in dose-volumetric parameters during delivery was 14 among a total of 35 tested parameters. Contrast (d = 10) with 10 segments showed higher correlations with VMAT delivery accuracy than did the conventional modulation indices. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
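For readers unfamiliar with the textural features involved, the GLCM contrast at a given displacement distance d can be computed with scikit-image as in the sketch below; the fluence array is synthetic and the binning to 256 grey levels is an assumed preprocessing step, not necessarily the authors'.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_contrast(fluence, d=10):
    """Contrast of the grey-level co-occurrence matrix at displacement distance d."""
    levels = 256
    img = np.uint8(np.round((fluence - fluence.min()) / np.ptp(fluence) * (levels - 1)))
    glcm = graycomatrix(img, distances=[d], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast").mean()   # average over the two directions

fluence = np.random.rand(128, 128)   # synthetic stand-in for a summed-segment fluence map
print(glcm_contrast(fluence, d=10))
```

A fluence with many abrupt intensity changes between pixels 10 apart produces a large contrast value, which is why this feature tracks the mechanical modulation complexity of a VMAT plan.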
Analysing playing using the note-time playing path.
de Graaff, Deborah L E; Schubert, Emery
2011-03-01
This article introduces a new method of data analysis that represents the playing of written music as a graph. The method, inspired by Miklaszewski, charts low-level note timings from a sound recording of a single-line instrument using high-precision audio-to-MIDI conversion software. Note onset times of pitch sequences are then plotted against the score-predicted timings to produce a Note-Time Playing Path (NTPP). The score-predicted onset time of each sequentially performed note (horizontal axis) unfolds in performed time down the page (vertical axis). NTPPs provide a visualisation that shows (1) tempo variations, (2) repetitive practice behaviours, (3) segmenting of material, (4) precise note time positions, and (5) time spent on playing or not playing. The NTPP can provide significant new insights into behaviour and cognition of music performance and may also be used to complement established traditional approaches such as think-alouds, interviews, and video coding.
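A minimal sketch of how an NTPP might be drawn from onset data is shown below; the note timings are hypothetical, and the inverted vertical axis mimics the convention of performed time unfolding down the page described above.

```python
import matplotlib.pyplot as plt

# Hypothetical data: score-predicted onset (beats) vs. performed onset (seconds),
# including one repetition of the opening material (a typical practice behaviour)
score_onsets = [0.0, 0.5, 1.0, 1.5, 2.0, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
performed_onsets = [0.0, 0.6, 1.2, 1.9, 2.5, 4.0, 4.6, 5.1, 5.7, 6.2, 6.8, 7.4]

plt.plot(score_onsets, performed_onsets, marker="o", linewidth=1)
plt.gca().invert_yaxis()                  # performed time unfolds down the page
plt.xlabel("Score-predicted onset (beats)")
plt.ylabel("Performed time (s)")
plt.title("Note-Time Playing Path (illustrative)")
plt.show()
```

The backtrack in the horizontal direction makes the repetition immediately visible, which is the kind of practice behaviour the NTPP is designed to expose.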
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolton, Justin; Rzayev, Javid
Polystyrene–poly(methyl methacrylate)–polylactide (PS–PMMA–PLA) triblock bottlebrush copolymer with nearly symmetric volume fractions was synthesized by grafting from a symmetrical triblock backbone and the resulting melt was characterized by scanning electron microscopy and small-angle X-ray scattering. The copolymer backbone was prepared by sequential reversible addition–fragmentation chain transfer (RAFT) polymerization of solketal methacrylate (SM), 2-(bromoisobutyryl)ethyl methacrylate (BIEM), and 5-(trimethylsilyl)-4-pentyn-1-ol methacrylate (TPYM). PMMA branches were grafted by atom transfer radical polymerization from the poly(BIEM) segment, PS branches were grafted by RAFT polymerization from the poly(TPYM) block after installment of the RAFT agents, while PLA side chains were grafted from the deprotected poly(SM) block. The resulting copolymer was found to exhibit a lamellar morphology with a domain spacing of 79 nm. Differential scanning calorimetry analysis indicated that PMMA was preferentially mixing with PS while phase separating from PLA domains.
Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li
2015-10-01
We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrated the feasibility and reliability of integrating the OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
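The Michaelis constant estimation mentioned above can be reproduced in outline with SciPy; the substrate concentrations and initial reaction rates below are hypothetical and serve only to illustrate the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial reaction rate as a function of substrate concentration."""
    return vmax * s / (km + s)

# Hypothetical asparagine concentrations (mM) and measured initial rates
s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])
v = np.array([0.9, 1.6, 2.9, 4.0, 5.0, 5.7, 6.2])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(6.0, 0.5))
print(f"Vmax = {vmax:.2f}, Km = {km:.3f} mM")
```

In the online assay described above, the rates would come from the slope of the substrate or product peak over the 12 s sampling interval rather than from hand-pipetted time points.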
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
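As an illustration of the reformulation into a static waypoint optimization, the sketch below minimizes total path length subject to circular no-fly-zone constraints using SciPy's SLSQP solver; the article uses the SNOPT package, and the start, goal and zone geometry here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
zones = [(np.array([5.0, 0.0]), 2.0)]        # (centre, radius) of a circular no-fly zone
n_wp = 4                                     # free waypoints between start and goal

def path(w):
    return np.vstack([start, w.reshape(n_wp, 2), goal])

def length(w):
    p = path(w)
    return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

constraints = []
for centre, radius in zones:
    for i in range(n_wp):
        constraints.append({"type": "ineq",   # keep each waypoint outside the zone
                            "fun": lambda w, c=centre, r=radius, i=i:
                                   np.linalg.norm(w.reshape(n_wp, 2)[i] - c) - r})

w0 = np.linspace(start, goal, n_wp + 2)[1:-1].ravel() + 0.1  # initial guess, nudged off the line
res = minimize(length, w0, method="SLSQP", constraints=constraints)
print(res.success, length(res.x))
```

Note that only the waypoints are constrained in this toy version; the article instead derives geometric conditions guaranteeing that the connecting line segments and circular arcs themselves do not intersect the no-fly zones.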
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
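As one concrete instance of the one-dimensional search options named above, a compact golden-section search is sketched here in Python (ADS itself is written in FORTRAN); the test function and tolerance are arbitrary.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0       # 1/phi ≈ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                          # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                          # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))  # converges to x ≈ 2.0
```

In a layered design such as ADS, a routine of this kind sits at the lowest level, with an optimizer choosing the search direction and a strategy deciding how constrained subproblems are formed.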
Moshirfar, Majid; Fenzl, Carlton R; Meyer, Jay J; Neuffer, Marcus C; Espandar, Ladan; Mifflin, Mark D
2011-02-01
To evaluate the safety, efficacy, and visual outcomes of simultaneous and sequential implantation of Intacs (Addition Technology, Inc, Sunnyvale, CA) and Verisyse phakic intraocular lens (AMO, Santa Ana, CA) in selected cases of ectatic corneal disease. John A. Moran Eye Center, University of Utah, UT. Prospective data were collected from 19 eyes of 12 patients (5 eyes, post-laser in situ keratomileusis ectasia and 14 eyes, keratoconus). Intacs segments were implanted followed by insertion of a phakic Verisyse lens at the same session (12 eyes) in the simultaneous group or several months later (7 eyes) in the sequential group. The uncorrected visual acuity, best spectacle-corrected visual acuity (BSCVA), and manifest refraction were recorded at each visit. No intraoperative or postoperative complications were observed. At the last follow-up (19 ± 6 months), in the simultaneous group, mean spherical error was -0.79 ± 1.0 diopter (D) (range, -2.0 to +1.50 D) and cylindrical error +2.06 ± 1.21 D (range, +0.5 to +3.75 D). In the sequential group, at the last follow-up (36 ± 21 months), the mean spherical error was -1.64 ± 1.31 D (range, -3.25 to +1.0 D) and cylindrical error +2.07 ± 1.03 D (range, +0.75 to +3.25 D). There were no significant differences in mean uncorrected visual acuity or BSCVA between the 2 groups preoperatively or postoperatively. No eye lost lines of preoperative BSCVA. Combined insertion of Intacs and Verisyse was safe and effective in all cases. The outcomes of the simultaneous implantation of the Intacs and Verisyse lens in 1 surgery were similar to the results achieved with sequential implantation using 2 surgeries.
Osterhoff, Georg; Ryang, Yu-Mi; von Oelhafen, Judith; Meyer, Bernhard; Ringel, Florian
2017-07-01
Multisegmental cervical instrumentations ending at the cervicothoracic junction may lead to significant adjacent segment degeneration. The purpose of this study was to compare the extent of sequential pathologies in the lower adjacent segment between patient groups with a primarily cervical instrumentation ending at C7 versus an instrumentation including the cervicothoracic junction ending at T1 or T2. A retrospective analysis of 98 consecutive patients with multisegmental posterior cervical fusion surgery ending either at C7 or at T1 or T2 was performed. Radiographic parameters of degeneration at the adjacent segment below the instrumentation were determined postoperatively and at follow-up (FU), and the need for secondary interventions was documented. A total of 74 patients had a FU of at least 6 months (C7: n = 58, age 63 ± 11 years, FU 36 ± 26 months; T1/T2: n = 16, age 65 ± 13 years, FU 37 ± 21 months). There were no significant differences between the C7 and T1/T2 groups with regard to the change in kyphosis angle (P = 0.162), disc height (P = 0.204), or disc degeneration according to the Mimura grading system (P = 0.718). Secondary interventions due to adjacent segmental pathology or implant failure were necessary in 18 of 58 (31%) of the C7 cases and in 1 of 16 (6.3%) of the T1/T2 cases (P = 0.038). Patients with multisegmental posterior cervical fusions ending at C7 showed a greater rate of clinically symptomatic pathologies at the adjacent level below the instrumentation. On the basis of our data and with their limitations in mind, one may consider bridging the cervicothoracic junction and ending the instrumentation at T1 or T2 in such cases. Copyright © 2017 Elsevier Inc. All rights reserved.
Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT
Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.
2012-01-01
Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross-sectional mouse data. PMID:23152834
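The Euclidean point-to-surface evaluation can be approximated, when the surface is available as a dense point sample, with a nearest-neighbour query as in the following sketch; the point clouds here are synthetic stand-ins for the registered atlas and the CT-derived skeleton.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
atlas_surface = rng.uniform(0, 100, size=(5000, 3))     # dense sample of the atlas bone surface
skeleton_points = rng.uniform(0, 100, size=(2000, 3))   # voxels of the segmented skeleton

tree = cKDTree(atlas_surface)
distances, _ = tree.query(skeleton_points)              # nearest surface point for every voxel
print(f"mean ± sd distance: {distances.mean():.2f} ± {distances.std():.2f} voxels")
```

Reporting the mean and standard deviation of these distances before and after registration gives exactly the kind of summary quoted above (11.5 ± 12.1 voxels before, 2.6 ± 2.1 voxels after).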
Lashkari, AmirEhsan; Pak, Fatemeh; Firouzmand, Mohammad
2016-01-01
Breast cancer is the most common type of cancer among women. Early detection is key to treating breast cancer because, according to many pathological studies, more than 75%-80% of all abnormalities are still benign at early stages; thus, in recent years, many studies and extensive research have been devoted to earlier detection of breast cancer with higher precision and accuracy. Infra-red breast thermography is an imaging technique based on recording temperature distribution patterns of breast tissue. Compared with mammography, thermography is a more suitable technique because it is noninvasive, non-contact, passive and free of ionizing radiation. In this paper, a fully automatic, high-accuracy technique for classification of suspicious areas in thermogram images, with the aim of assisting physicians in early detection of breast cancer, is presented. The proposed algorithm consists of four main steps: pre-processing and segmentation, feature extraction, feature selection and classification. In the first step, using fully automatic operation, the region of interest (ROI) is determined and the image quality is improved. Using thresholding and edge detection techniques, the right and left breasts are separated from each other. The suspected areas are then segmented and the image matrix is normalized to account for the uniqueness of each person's body temperature. In the feature extraction stage, 23 features, including statistical, morphological, frequency domain, histogram and Gray Level Co-occurrence Matrix (GLCM) based features, are extracted from the segmented right and left breasts obtained in step 1. To obtain the best features, feature selection methods such as minimum Redundancy and Maximum Relevance (mRMR), Sequential Forward Selection (SFS), Sequential Backward Selection (SBS), Sequential Floating Forward Selection (SFFS), Sequential Floating Backward Selection (SFBS) and Genetic Algorithm (GA) are used in step 3. Finally, for the classification and TH labeling procedures, different classifiers such as AdaBoost, Support Vector Machine (SVM), k-Nearest Neighbors (kNN), Naïve Bayes (NB) and Probabilistic Neural Network (PNN) are assessed to find the most suitable one. These steps are applied to thermogram images acquired at different angles. The results obtained on a native database showed significantly better performance of the proposed algorithm in comparison to similar studies. According to the experimental results, GA combined with AdaBoost with mean accuracies of 85.33% and 87.42% on the left and right breast images at 0 degrees, GA combined with AdaBoost with a mean accuracy of 85.17% on the left breast images at 45 degrees and mRMR combined with AdaBoost with a mean accuracy of 85.15% on the right breast images at 45 degrees, and GA combined with AdaBoost with mean accuracies of 84.67% and 86.21% on the left and right breast images at 90 degrees, are the best combinations of feature selection and classifier for the evaluation of breast images. PMID:27014608
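The feature-selection/classifier pairings evaluated above can be reproduced in outline with scikit-learn; the sketch below wires sequential forward selection (one of the selectors studied) to an AdaBoost classifier on a synthetic 23-feature matrix, since the native thermogram database is not available, and all parameter choices are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 23 features per breast region, binary (suspicious / not) labels
X, y = make_classification(n_samples=300, n_features=23, n_informative=8, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
sfs = SequentialFeatureSelector(clf, n_features_to_select=8, direction="forward", cv=5)
model = make_pipeline(sfs, clf)

scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

Swapping `direction="forward"` for `"backward"` gives SBS, while mRMR, the floating variants and GA-based selection would require additional packages or custom code.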
A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.
Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W
2002-01-01
In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
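The amplitude-modulation control loop described above can be sketched as a simple proportional correction applied at the stated correction frequency; the first-order pressure response used as the plant below is a hypothetical stand-in for the myoplasty neo-sphincter, and the gain is an assumed value.

```python
# Closed-loop amplitude modulation sketch: correct the stimulation amplitude at 4 Hz
# whenever lumen pressure deviates from the target by more than the correction threshold.
target_pressure = 60.0        # arbitrary units
correction_freq = 4.0         # Hz (correction interval = 0.25 s)
correction_threshold = 0.04   # 4% of target
gain = 0.02                   # amplitude change per correction step (assumed)

amplitude = 0.5               # normalized stimulation amplitude in [0, 1]
pressure = 0.0

for step in range(40):                       # 10 s of simulated closed-loop operation
    # Hypothetical plant: pressure relaxes toward a value proportional to amplitude
    pressure += 0.3 * (100.0 * amplitude - pressure)
    error = target_pressure - pressure
    if abs(error) > correction_threshold * target_pressure:
        amplitude = min(1.0, max(0.0, amplitude + gain * (1 if error > 0 else -1)))
    if step % 4 == 0:
        print(f"t = {step / correction_freq:4.1f} s  "
              f"pressure = {pressure:5.1f}  amplitude = {amplitude:.2f}")
```

The point of such a loop, as in the study, is that the stimulation is only as strong as needed to hold the target pressure, which economizes the contractions of the sequentially stimulated segments.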
Rubin, Jacob
1992-01-01
The feed forward (FF) method derives efficient operational equations for simulating transport of reacting solutes. It has been shown to be applicable in the presence of networks with any number of homogeneous and/or heterogeneous, classical reaction segments that consist of three, at most binary participants. Using a sequential (network type after network type) exploration approach and, independently, theoretical explanations, it is demonstrated for networks with classical reaction segments containing more than three, at most binary participants that if any one of such networks leads to a solvable transport problem then the FF method is applicable. Ways of helping to avoid networks that produce problem insolvability are developed and demonstrated. A previously suggested algebraic, matrix rank procedure has been adapted and augmented to serve as the main, easy-to-apply solvability test for already postulated networks. Four network conditions that often generate insolvability have been identified and studied. Their early detection during network formulation may help to avoid postulation of insolvable networks.
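The algebraic, matrix-rank style of solvability check mentioned above can be illustrated with a small stoichiometric example; the reaction network encoded below is hypothetical and shows only the kind of rank computation involved, not the full criteria of the FF method.

```python
import numpy as np

# Columns = species, rows = reaction segments (hypothetical network, at most binary participants)
#   A + B -> C
#   C + D -> E
stoichiometry = np.array([
    [-1, -1,  1,  0,  0],
    [ 0,  0, -1, -1,  1],
])

rank = np.linalg.matrix_rank(stoichiometry)
independent = rank == stoichiometry.shape[0]
print(f"rank = {rank}, reaction segments independent: {independent}")
```

A rank deficiency would signal linearly dependent reaction segments, which is one of the situations that can make the corresponding transport problem insolvable.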
NASA Astrophysics Data System (ADS)
Ferroud, Anouck; Chesnaux, Romain; Rafini, Silvain
2018-01-01
The flow dimension parameter n, derived from the Generalized Radial Flow model, is a valuable tool to investigate the actual flow regimes that really occur during a pumping test rather than suppose them to be radial, as postulated by the Theis-derived models. A numerical approach has shown that, when the flow dimension is not radial, using the derivative analysis rather than the conventional Theis and Cooper-Jacob methods helps to estimate much more accurately the hydraulic conductivity of the aquifer. Although n has been analysed in numerous studies including field-based studies, there is a striking lack of knowledge about its occurrence in nature and how it may be related to the hydrogeological setting. This study provides an overview of the occurrence of n in natural aquifers located in various geological contexts including crystalline rock, carbonate rock and granular aquifers. A comprehensive database is compiled from governmental and industrial sources, based on 69 constant-rate pumping tests. By means of a sequential analysis approach, we systematically performed a flow dimension analysis in which straight segments on drawdown-log derivative time series are interpreted as successive, specific and independent flow regimes. To reduce the uncertainties inherent in the identification of n sequences, we used the proprietary SIREN code to execute a dual simultaneous fit on both the drawdown and the drawdown-log derivative signals. Using the stated database, we investigate the frequency with which the radial and non-radial flow regimes occur in fractured rock and granular aquifers, and also provide outcomes that indicate the lack of applicability of Theis-derived models in representing nature. The results also emphasize the complexity of hydraulic signatures observed in nature by pointing out n sequential signals and non-integer n values that are frequently observed in the database.
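A compact sketch of the derivative analysis is given below: the log-derivative of the drawdown is computed and, on a straight late-time segment, the flow dimension is recovered from the commonly used GRF relation n = 2(1 − m), where m is the log-log slope of the derivative. The synthetic drawdown series is hypothetical, and the relation is quoted from the GRF literature rather than from this specific study.

```python
import numpy as np

def flow_dimension(t, s):
    """Estimate the GRF flow dimension n from a drawdown time series."""
    log_t = np.log10(t)
    ds_dlogt = np.gradient(s, log_t)          # drawdown log-derivative ds/dlog10(t)
    # Fit the late-time straight segment of the derivative on log-log axes
    late = slice(len(t) // 2, None)
    m = np.polyfit(log_t[late], np.log10(ds_dlogt[late]), 1)[0]
    return 2.0 * (1.0 - m)

# Synthetic radial-flow drawdown (Cooper-Jacob-like): derivative is flat, so n should be near 2
t = np.logspace(1, 5, 200)                    # seconds
s = 0.8 * np.log10(t) + 0.5                   # metres
print(round(flow_dimension(t, s), 2))
```

In the sequential analysis described above, each straight derivative segment would be fitted separately, yielding a sequence of n values rather than a single one.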
Population segmentation: an approach to reducing childhood obesity inequalities.
Mahmood, Hashum; Lowe, Susan
2017-05-01
The aims of this study are threefold: (1) to investigate the relationship between socio-economic status (inequality) and childhood obesity prevalence within Birmingham local authority, (2) to identify any change in childhood obesity prevalence between deprivation quintiles and (3) to analyse individualised Birmingham National Child Measurement Programme (NCMP) data using a population segmentation tool to better inform obesity prevention strategies. Data from the NCMP for Birmingham (2010/2011 and 2014/2015) were analysed using the deprivation scores from the Income Domain Affecting Children Index (IDACI 2010). The percentage of children with excess weight was calculated for each local deprivation quintile. Population segmentation was carried out using the Experian's Mosaic Public Sector 6 (MPS6) segmentation tool. Childhood obesity levels have remained static at the national and Birmingham level. For Year 6 pupils, obesity levels have increased in the most deprived deprivation quintiles for boys and girls. The most affluent quintile shows a decreasing trend of obesity prevalence for boys and girls in both year groups. For the middle quintiles, the results show fluctuating trends. This research highlighted the link in Birmingham between obesity and socio-economic factors with the gap increasing between deprivation quintiles. Obesity is a complex problem that cannot simply be addressed through targeting most deprived populations, rather through a range of effective interventions tailored for the various population segments that reside within communities. Using population segmentation enables a more nuanced understanding of the potential barriers and levers within populations on their readiness for change. The segmentation of childhood obesity data will allow utilisation of social marketing methodology that will facilitate identification of suitable methods for interventions and motivate individuals to sustain behavioural change. Sequentially, it will also inform policy makers to commission the most appropriate interventions.
Fajardo, Teodoro; Sung, Po-Yu; Roy, Polly
2015-01-01
Bluetongue virus (BTV) causes hemorrhagic disease in economically important livestock. The BTV genome is organized into ten discrete double-stranded RNA molecules (S1-S10) which have been suggested to follow a sequential packaging pathway from smallest to largest segment during virus capsid assembly. To substantiate and extend these studies, we have investigated the RNA sorting and packaging mechanisms with a new experimental approach using inhibitory oligonucleotides. Putative packaging signals present in the 3’ untranslated regions of BTV segments were targeted by a number of nuclease-resistant oligoribonucleotides (ORNs) and their effects on virus replication in cell culture were assessed. ORNs complementary to the 3’ UTR of BTV RNAs significantly inhibited virus replication without affecting protein synthesis. The same ORNs were found to inhibit complex formation when added to a novel RNA-RNA interaction assay which measured the formation of supramolecular complexes between and among different RNA segments. ORNs targeting the 3’ UTR of BTV segment 10, the smallest RNA segment, were shown to be the most potent, and deletions or substitution mutations of the targeted sequences diminished the RNA complexes and abolished the recovery of viable viruses using reverse genetics. A cell-free capsid assembly/RNA packaging assay also confirmed that the inhibitory ORNs could interfere with RNA packaging, and further substitution mutations within the putative RNA packaging sequence have identified the recognition sequence concerned. Exchange of the 3’ UTR between segments has further demonstrated that RNA recognition was segment specific, most likely acting as part of the secondary structure of the entire genomic segment. Our data confirm that genome packaging in this segmented dsRNA virus occurs via the formation of supramolecular complexes formed by the interaction of specific sequences located in the 3’ UTRs. Additionally, the inhibition of packaging in trans with inhibitory ORNs suggests that this interaction is a bona fide target for the design of compounds with antiviral activity. PMID:26646790
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
2015-06-15
Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between registered contour and ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI being the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of three deformed atlas images with highest MI values to form the segmented contour. Results: MI was found to be the best among six studied strategies in the sense that it had the highest positive correlation between similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of three deformed atlas images with highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
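A histogram-based mutual information measure of the kind used for post-registration atlas selection can be sketched as follows; the two images are synthetic and the 64-bin joint histogram is an assumed discretization.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
target = rng.normal(size=(128, 128))
deformed_atlas = target + 0.3 * rng.normal(size=(128, 128))   # correlated, so MI is high
unrelated = rng.normal(size=(128, 128))                       # independent, so MI is low
print(mutual_information(target, deformed_atlas), mutual_information(target, unrelated))
```

Ranking the deformed atlas images by this score and fusing the top three is the selection step described in point (d) above.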
Shteingart, Hanan; Loewenstein, Yonatan
2016-01-01
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonically with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonically decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
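The logistic-regression characterization of sequential dependencies can be sketched as follows: previous binary choices are used as lagged predictors of the next choice. The simulated participant below, whose choices depend weakly on the two most recent responses, is purely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_lags = 2000, 5

# Simulate a "random" sequence with a mild alternation bias on the previous two choices
choices = [1, 0]
for _ in range(n_trials - 2):
    p_one = 0.5 - 0.15 * (choices[-1] - 0.5) - 0.10 * (choices[-2] - 0.5)
    choices.append(int(rng.random() < p_one))
choices = np.array(choices)

# Design matrix: choices at trials t-1 .. t-n_lags predicting the choice at trial t
X = np.column_stack([choices[n_lags - k - 1 : len(choices) - k - 1] for k in range(n_lags)])
y = choices[n_lags:]

model = LogisticRegression().fit(X, y)
print("lag coefficients (t-1 first):", np.round(model.coef_[0], 2))
```

Fitting such a model per participant, rather than on pooled data, is what reveals the individual monotonically decaying sequential effects that the population analysis averages out.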
Cost-benefit analysis of sequential warning lights in nighttime work zone tapers.
DOT National Transportation Integrated Search
2011-06-01
Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are ...
The limits of boundaries: unpacking localization and cognitive mapping relative to a boundary.
Zhou, Ruojing; Mou, Weimin
2018-05-01
Previous research (Zhou, Mou, Journal of Experimental Psychology: Learning, Memory and Cognition 42(8):1316-1323, 2016) showed that learning individual locations relative to a single landmark, compared to learning relative to a boundary, led to more accurate inferences of inter-object spatial relations (cognitive mapping of multiple locations). Following our past findings, the current study investigated whether the larger number of reference points provided by a homogeneous circular boundary, as well as less accessible knowledge of direct spatial relations among the multiple reference points, would lead to less effective cognitive mapping relative to the boundary. Accordingly, we manipulated (a) the number of primary reference points (one segment drawn from a circular boundary, four such segments, vs. the complete boundary) available when participants were localizing four objects sequentially (Experiment 1) and (b) the extendedness of each of the four segments (Experiment 2). The results showed that cognitive mapping was the least accurate in the whole boundary condition. However, expanding each of the four segments did not affect the accuracy of cognitive mapping until the four were connected to form a continuous boundary. These findings indicate that when encoding locations relative to a homogeneous boundary, participants segmented the boundary into differentiated pieces and subsequently chose the most informative local part (i.e., the segment closest in distance to one location) as the primary reference point for a particular location. During this process, direct spatial relations among the reference points were likely not attended to. These findings suggest that people might encode and represent bounded space in a fragmented fashion when localizing within a homogeneous boundary.
Yang, Xiaopeng; Yang, Jae Do; Yu, Hee Chul; Choi, Younggeun; Yang, Kwangho; Lee, Tae Beom; Hwang, Hong Pil; Ahn, Sungwoo; You, Heecheon
2018-05-01
Manual tracing of the right and left liver lobes from computed tomography (CT) images for graft volumetry in preoperative surgery planning of living donor liver transplantation (LDLT) is common at most medical centers. This study aims to develop an automatic system with advanced image processing algorithms and user-friendly interfaces for liver graft volumetry and evaluate its accuracy and efficiency in comparison with a manual tracing method. The proposed system provides a sequential procedure consisting of (1) liver segmentation, (2) blood vessel segmentation, and (3) virtual liver resection for liver graft volumetry. Automatic segmentation algorithms using histogram analysis, hybrid level-set methods, and a customized region growing method were developed. User-friendly interfaces such as sequential and hierarchical user menus, context-sensitive on-screen hotkey menus, and real-time sound and visual feedback were implemented. Blood vessels were excluded from the liver for accurate liver graft volumetry. A large sphere-based interactive method was developed for dividing the liver into left and right lobes with a customized cutting plane. The proposed system was evaluated using 50 CT datasets in terms of graft weight estimation accuracy and task completion time through comparison to the manual tracing method. The accuracy of liver graft weight estimation was assessed by absolute difference (AD) and percentage of AD (%AD) between preoperatively estimated graft weight and intraoperatively measured graft weight. Intra- and inter-observer agreements of liver graft weight estimation were assessed by intraclass correlation coefficients (ICCs) using ten cases randomly selected. The proposed system showed significantly higher accuracy and efficiency in liver graft weight estimation (AD = 21.0 ± 18.4 g; %AD = 3.1% ± 2.8%; percentage of %AD > 10% = none; task completion time = 7.3 ± 1.4 min) than the manual tracing method (AD = 70.5 ± 52.1 g; %AD = 10.2% ± 7.5%; percentage of %AD > 10% = 46%; task completion time = 37.9 ± 7.0 min). The proposed system showed slightly higher intra- and inter-observer agreements (ICC = 0.996 to 0.998) than the manual tracing method (ICC = 0.979 to 0.999). The proposed system was proved accurate and efficient in liver graft volumetry for preoperative planning of LDLT. Copyright © 2018 Elsevier B.V. All rights reserved.
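The customized region-growing step is not given in code in the article; the sketch below shows a generic intensity-threshold region growing on a 2-D slice as a simplified stand-in, with a hypothetical seed location and tolerance.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=30):
    """Grow a region from `seed`, accepting 4-connected pixels whose intensity
    differs from the seed intensity by at most `tolerance` (simplified sketch)."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(float(image[rr, cc]) - seed_val) <= tolerance):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

# Hypothetical CT slice with a bright "liver-like" blob and a seed placed inside it
slice_ct = np.full((100, 100), 40, dtype=np.int16)
slice_ct[30:70, 20:80] = 120
print(region_grow(slice_ct, seed=(50, 50)).sum())   # 40 * 60 pixels in the grown region
```

In the full system this kind of step would operate in 3-D and be combined with the histogram analysis and hybrid level-set stages before the virtual resection divides the liver into lobes.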
Naveilhan, P; Baudet, C; Jabbour, W; Wion, D
1994-09-01
A model that may explain the limited division potential of certain cells such as human fibroblasts in culture is presented. The central postulate of this theory is that there exists, prior to certain key exons that code for materials needed for cell division, a unique sequence of specific repeating segments of DNA. One copy of such repeating segments is deleted during each cell cycle in cells that are not protected from such deletion through methylation of their cytosine residues. According to this theory, the means through which such repeated sequences are removed, one per cycle, is through the sequential action of enzymes that act much as bacterial restriction enzymes do--namely to produce scissions in both strands of DNA in areas that correspond to the DNA base sequence recognition specificities of such enzymes. After the first scission early in a replicative cycle, that enzyme becomes inhibited, but the cleavage of the first site exposes the closest site in the repetitive element to the action of a second restriction enzyme after which that enzyme also becomes inhibited. Then repair occurs, regenerating the original first site. Through this sequential activation and inhibition of two different restriction enzymes, only one copy of the repeating sequence is deleted during each cell cycle. In effect, the repeating sequence operates as a precise counter of the numbers of cell doubling that have occurred since the cells involved differentiated during development.
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis adapts previously developed techniques for evaluating noisy-reference performance of medium-rate uncoded communications to sequential decoding at data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
Li, Peng; Tang, Youchao; Li, Jia; Shen, Longduo; Tian, Weidong; Tang, Wei
2013-09-01
The aim of this study is to describe the sequential software processing of a computed tomography (CT) dataset for reconstructing a finite element analysis (FEA) mandibular model with a custom-made plate, and to provide a theoretical basis for clinical use of this reconstruction method. A CT scan was done on one patient who had mandibular continuity defects. This CT dataset in DICOM format was imported into Mimics 10.0 software, in which a three-dimensional (3-D) model of the facial skeleton was reconstructed and the mandible was segmented out. With Geomagic Studio 11.0, one custom-made plate and nine virtual screws were designed. All parts of the reconstructed mandible were converted into NURBS and saved in IGES format for importing into Pro/E 4.0. After Boolean operation and assembly, the model was transferred to ANSYS Workbench 12.0. Finally, after applying the boundary conditions and material properties, an analysis was performed. As a result, a 3-D FEA model was successfully developed using the software described above. The stress-strain distribution precisely indicated the biomechanical performance of the reconstructed mandible under normal occlusion load, without stress-concentration areas. The von Mises stress in all parts of the model, from the maximum value of 50.9 MPa to the minimum value of 0.1 MPa, was lower than the ultimate tensile strength. In conclusion, the described strategy could speedily and successfully produce a biomechanical model of a reconstructed mandible with a custom-made plate. Using this FEA foundation, the custom-made plate may be improved for an optimal clinical outcome. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Kozak, J; Paluch, J; Węgrzecka, A; Kozak, M; Wieczorek, M; Kochana, J; Kościelniak, P
2016-02-01
A spectrophotometric sequential injection (SI) system is proposed to automate the simultaneous determination of Fe(II) and Fe(III) on the basis of the parameters of a single peak. In the developed SI system, the sample and a mixture of reagents (1,10-phenanthroline and sulfosalicylic acid) are introduced into a vessel, where in an acid environment (pH≅3) the corresponding compounds of Fe(II) with 1,10-phenanthroline and of Fe(III) with sulfosalicylic acid are formed. Then air, sample, EDTA and sample again are introduced, in turn, into a holding coil. After flow reversal, the air segment is removed from the system by an additional valve, and as EDTA replaces sulfosalicylic acid, forming a more stable colorless compound with Fe(III), a complex signal is registered. Measurements are performed at a wavelength of 530 nm. The absorbance measured at the minimum of the negative peak, and the area or the absorbance measured at the maximum of the signal, can be used as measures corresponding to the Fe(II) and Fe(III) concentrations, respectively. The time of peak registration is about 2 min. Two-component calibration was applied to the analysis. Fe(II) and Fe(III) can be determined within the concentration ranges of 0.04-4.00 and 0.1-5.00 mg L(-1), with precision (RSD) better than 2.8% and 1.7%, respectively, and accuracy (RE) better than 7%. The detection limits are 0.04 and 0.09 mg L(-1) for Fe(II) and Fe(III), respectively. The method was applied to the analysis of artesian water samples. Copyright © 2015 Elsevier B.V. All rights reserved.
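As background for the two-component calibration mentioned above, the sketch below fits a linear two-analyte calibration (each measured signal responds to both Fe(II) and Fe(III)) and inverts it for an unknown sample. The standard concentrations, signal values, and the names A_min/A_max are hypothetical placeholders, not data from the paper.

```python
"""Minimal sketch (not the authors' code) of two-component calibration:
each measured signal (peak-minimum absorbance A_min, peak-maximum absorbance
A_max) is modelled as a linear combination of Fe(II) and Fe(III) concentrations."""
import numpy as np

# Hypothetical calibration standards: columns are [Fe(II), Fe(III)] in mg/L
C_std = np.array([[0.5, 0.5], [2.0, 0.5], [0.5, 2.5], [3.0, 3.0], [1.0, 4.0]])
# Corresponding measured signals: columns are [A_min, A_max] (made-up numbers)
S_std = np.array([[0.06, 0.11], [0.21, 0.14], [0.07, 0.52], [0.33, 0.66], [0.12, 0.83]])

# Fit S = C_std @ K + b  (K is the 2x2 sensitivity matrix, b the intercepts)
X = np.hstack([C_std, np.ones((len(C_std), 1))])
coef, *_ = np.linalg.lstsq(X, S_std, rcond=None)
K, b = coef[:2], coef[2]

def predict_concentrations(signals):
    """Invert the calibration for an unknown sample's [A_min, A_max]."""
    return np.linalg.solve(K.T, np.asarray(signals) - b)

print(predict_concentrations([0.15, 0.40]))   # -> estimated [Fe(II), Fe(III)] in mg/L
```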
ERIC Educational Resources Information Center
Economou, A.; Tzanavaras, P. D.; Themelis, D. G.
2005-01-01
Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, especially in the section on Automatic Methods of Analysis provided by chemistry…
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Efficient partitioning and assignment on programs for multiprocessor execution
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1993-01-01
The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and the assignment of program elements are of great importance since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed that may be used to analyze shared-memory multiprocessor architectures in the context of the partitioning problem. The properties of sequentially written programs form obstacles to large-scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product-form solutions.
Video Comprehensibility and Attention in Very Young Children
Pempek, Tiffany A.; Kirkorian, Heather L.; Richards, John E.; Anderson, Daniel R.; Lund, Anne F.; Stevens, Michael
2010-01-01
Earlier research established that preschool children pay less attention to television that is sequentially or linguistically incomprehensible. This study determines the youngest age for which this effect can be found. One-hundred and three 6-, 12-, 18-, and 24-month-olds’ looking and heart rate were recorded while they watched Teletubbies, a television program designed for very young children. Comprehensibility was manipulated by either randomly ordering shots or reversing dialogue to become backward speech. Infants watched one normal segment and one distorted version of the same segment. Only 24-month-olds, and to some extent 18-month-olds, distinguished between normal and distorted video by looking for longer durations towards the normal stimuli. The results suggest that it may not be until the middle of the second year that children demonstrate the earliest beginnings of comprehension of video as it is currently produced. PMID:20822238
A summary of image segmentation techniques
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no theory on image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.
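To make two of the categories above concrete, the sketch below implements a generic pixel-based global threshold and a simple seeded region-growing pass; it is textbook material, not code from the paper.

```python
"""Generic illustration of pixel-based (global threshold) and region-based
(seeded region growing) segmentation; not code from the paper."""
import numpy as np
from collections import deque

def threshold_segment(img, t):
    """Pixel-based: classify each pixel by gray level alone."""
    return (img >= t).astype(np.uint8)

def region_grow(img, seed, tol):
    """Region-based: grow from a seed, accepting 4-neighbours whose gray
    level is within `tol` of the seed value."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(50, 5, (64, 64))
    img[20:40, 20:40] += 100                     # bright square on a dark background
    print(threshold_segment(img, 100).sum())     # ~400 pixels
    print(region_grow(img, (30, 30), tol=30).sum())
```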
Microcomputer Applications in Interaction Analysis.
ERIC Educational Resources Information Center
Wadham, Rex A.
The Timed Interval Categorical Observation Recorder (TICOR), a portable, battery powered microcomputer designed to automate the collection of sequential and simultaneous behavioral observations and their associated durations, was developed to overcome problems in gathering subtle interaction analysis data characterized by sequential flow of…
The composite sequential clustering technique for analysis of multispectral scanner data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
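A minimal sketch of such a two-stage composite scheme is given below: a crude one-pass sequential grouping produces initial centres, which then seed a K-means refinement. The sequential stage here is a simplified stand-in for the sequential variance analysis described in the abstract, and the distance threshold is an assumed parameter.

```python
"""Sketch of two-stage composite clustering: a one-pass sequential grouping
produces initial centres, which then seed K-means.  The sequential stage is a
simplified stand-in for the paper's sequential variance analysis."""
import numpy as np
from sklearn.cluster import KMeans

def sequential_initial_clusters(X, dist_threshold):
    """One-pass sequential clustering: open a new cluster whenever a sample
    is farther than `dist_threshold` from every existing centre."""
    centers, counts = [], []
    for x in X:
        if centers:
            d = np.linalg.norm(np.asarray(centers) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] < dist_threshold:
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]   # running-mean update
                continue
        centers.append(x.astype(float).copy())
        counts.append(1)
    return np.asarray(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, (100, 4)) for m in (0.0, 2.0, 4.0)])
    init = sequential_initial_clusters(X, dist_threshold=1.5)
    km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(X)
    print(len(init), "initial clusters; inertia after K-means:", round(km.inertia_, 2))
```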
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
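Event-based lag-1 analysis of this kind can be sketched as follows: tally transitions between coded gaze states and compute adjusted residuals (z-scores) to flag sequences that occur more often than chance. The gaze codes and data below are hypothetical, and the sketch is not the study's analysis code.

```python
"""Illustrative event-based lag-1 sequential analysis (hypothetical data):
count transitions between coded gaze states and compute Allison-Liker
adjusted residuals to flag above-chance sequences."""
import numpy as np

def lag1_adjusted_residuals(events, states):
    idx = {s: i for i, s in enumerate(states)}
    k = len(states)
    obs = np.zeros((k, k))
    for a, b in zip(events[:-1], events[1:]):        # lag-1 pairs
        obs[idx[a], idx[b]] += 1
    n = obs.sum()
    row, col = obs.sum(1, keepdims=True), obs.sum(0, keepdims=True)
    exp = row * col / n                              # expected counts under independence
    denom = np.sqrt(exp * (1 - row / n) * (1 - col / n))
    return (obs - exp) / denom                       # |z| > 1.96 ~ significant at 0.05

if __name__ == "__main__":
    states = ["clinician_at_patient", "patient_at_clinician", "mutual", "away"]
    rng = np.random.default_rng(2)
    events = rng.choice(states, size=500).tolist()
    # force a "patient follows clinician" pattern into the toy sequence
    for i in range(0, 480, 5):
        events[i], events[i + 1] = "clinician_at_patient", "patient_at_clinician"
    z = lag1_adjusted_residuals(events, states)
    print(np.round(z, 2))
```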
Mazurok, N A; Rubtsova, N V; Isaenko, A A; Nesterova, T B; Meĭer, M N; Zakiian, S M
1998-08-01
With the use of the GTG-banding of prometaphase chromosomes, 503 and 402 segments were revealed in haploid chromosome sets of voles Microtus rossiaemeridionalis and M. transcaspicus, respectively. Based on a detailed study of chromosomes at different condensation levels, idiograms of M. rossiaemeridionalis and M. transcaspicus chromosomes were constructed. Sequential Ag-staining and GTG-banding allowed nucleolar organizer regions (NORs) to be localized in 16 and 11 chromosome pairs of M. rossiaemeridionalis and M. transcaspicus, respectively.
NASA Astrophysics Data System (ADS)
Noda, Isao
2014-07-01
A comprehensive survey review of new and noteworthy developments, which are advancing forward the frontiers in the field of 2D correlation spectroscopy during the last four years, is compiled. This review covers books, proceedings, and review articles published on 2D correlation spectroscopy, a number of significant conceptual developments in the field, data pretreatment methods and other pertinent topics, as well as patent and publication trends and citation activities. Developments discussed include projection 2D correlation analysis, concatenated 2D correlation, and correlation under multiple perturbation effects, as well as orthogonal sample design, predicting 2D correlation spectra, manipulating and comparing 2D spectra, correlation strategy based on segmented data blocks, such as moving-window analysis, features like determination of sequential order and enhanced spectral resolution, statistical 2D spectroscopy using covariance and other statistical metrics, hetero-correlation analysis, and sample-sample correlation technique. Data pretreatment operations prior to 2D correlation analysis are discussed, including the correction for physical effects, background and baseline subtraction, selection of reference spectrum, normalization and scaling of data, derivatives spectra and deconvolution technique, and smoothing and noise reduction. Other pertinent topics include chemometrics and statistical considerations, peak position shift phenomena, variable sampling increments, computation and software, display schemes, such as color coded format, slice and power spectra, tabulation, and other schemes.
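For orientation, the basic quantities behind most of these developments are the synchronous and asynchronous 2D correlation spectra of Noda's generalized formulation; a generic computation from a stack of perturbation-dependent spectra is sketched below (illustrative only, not code from the review).

```python
"""Generic computation of synchronous/asynchronous 2D correlation spectra
(Noda's generalized 2D correlation); illustrative only."""
import numpy as np

def twod_correlation(Y):
    """Y: (m perturbation points) x (n spectral variables) spectra."""
    m = Y.shape[0]
    Yd = Y - Y.mean(axis=0)                       # dynamic spectra (mean-centred)
    sync = Yd.T @ Yd / (m - 1)                    # synchronous spectrum
    # Hilbert-Noda transformation matrix: 0 on the diagonal, 1/(pi*(k-j)) elsewhere
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = Yd.T @ N @ Yd / (m - 1)                # asynchronous spectrum
    return sync, asyn

if __name__ == "__main__":
    m, n = 15, 200
    t = np.linspace(0, 1, m)[:, None]
    x = np.linspace(0, 1, n)[None, :]
    # two bands whose intensities change sequentially with the perturbation
    Y = t * np.exp(-((x - 0.3) / 0.05) ** 2) + t ** 2 * np.exp(-((x - 0.7) / 0.05) ** 2)
    sync, asyn = twod_correlation(Y)
    print(sync.shape, asyn.shape)                 # (200, 200) (200, 200)
```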
Tri-state delta modulation system for Space Shuttle digital TV downlink
NASA Technical Reports Server (NTRS)
Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.
1981-01-01
Future requirements for Shuttle Orbiter downlink communication may include transmission of digital video which, in addition to black and white, may also be either field-sequential or NTSC color format. The use of digitized video could provide for picture privacy at the expense of additional onboard hardware, together with an increased bandwidth due to the digitization process. A general objective for the Space Shuttle application is to develop a digitization technique that is compatible with data rates in the 20-30 Mbps range but still provides good quality pictures. This paper describes a tri-state delta modulation/demodulation (TSDM) technique which is a good compromise between implementation complexity and performance. The unique feature of TSDM is that it provides for efficient run-length encoding of constant-intensity segments of a TV picture. Axiomatix has developed a hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV and field-sequential color. The hardware complexity of this TSDM implementation is summarized in the paper.
Snyder, Dalton T; Szalwinski, Lucas J; Cooks, R Graham
2017-10-17
Methods of performing precursor ion scans as well as neutral loss scans in a single linear quadrupole ion trap have recently been described. In this paper we report methodology for performing permutations of MS/MS scan modes, that is, ordered combinations of precursor, product, and neutral loss scans following a single ion injection event. Only particular permutations are allowed; the sequences demonstrated here are (1) multiple precursor ion scans, (2) precursor ion scans followed by a single neutral loss scan, (3) precursor ion scans followed by product ion scans, and (4) segmented neutral loss scans. (5) The common product ion scan can be performed earlier in these sequences, under certain conditions. Simultaneous scans can also be performed. These include multiple precursor ion scans, precursor ion scans with an accompanying neutral loss scan, and multiple neutral loss scans. We argue that the new capability to perform complex simultaneous and sequential MSn operations on single ion populations represents a significant step in increasing the selectivity of mass spectrometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadroy, Y.; Horbett, T.A.; Hanson, S.R.
1989-04-01
To study mechanisms of complex thrombus formation in vivo, and to compare the relative antithrombotic effects of anticoagulants and antiplatelet agents, a model was developed in baboons. Segments of collagen-coated tubing followed by two sequentially placed expansion chambers exhibiting disturbed flow patterns were exposed to native blood under laminar flow conditions. The device was incorporated for 1 hour into an exteriorized arteriovenous shunt in baboons under controlled blood flow (20 ml/min). Morphologic evaluation by scanning electron microscopy showed that thrombi associated with collagen were relatively rich in platelets but thrombi in the chambers were rich in fibrin and red cells. Deposition of indium 111-labeled platelets was continuously measured with a scintillation camera. Platelet deposition increased in a linear (collagen-coated segment) or exponential (chambers 1 and 2) fashion over time, with values after 40 minutes averaging 24.1 +/- 3.3 x 10(8) platelets (collagen segment), 16.7 +/- 3.4 x 10(8) platelets (chamber 1), and 8.4 +/- 2.4 x 10(8) platelets (chamber 2). Total fibrinogen deposition after 40 minutes was determined by using iodine 125-labeled baboon fibrinogen and averaged 0.58 +/- 0.14 mg in the collagen segment, 1.51 +/- 0.27 mg in chamber 1, and 0.95 +/- 0.25 mg in chamber 2. Plasma levels of beta-thromboglobulin (beta TG), platelet-factor 4 (PF4), and fibrinopeptide A (FPA) increased fourfold to fivefold after 60 minutes of blood exposure to the thrombotic device. Platelet deposition onto the collagen segment, chamber 1, and chamber 2 was linearly dependent on the circulating platelet count. Platelet accumulation in chamber 1 and chamber 2 was also dependent on the presence of the proximal collagen segment.
Del Prato, Stefano; Rosenstock, Julio; Garcia-Sanchez, Ricardo; Iqbal, Nayyar; Hansen, Lars; Johnsson, Eva; Chen, Hungta; Mathieu, Chantal
2018-06-01
The safety of triple oral therapy with dapagliflozin plus saxagliptin plus metformin versus dual therapy with dapagliflozin or saxagliptin plus metformin was compared in a post-hoc analysis of 3 randomized trials of sequential or concomitant add-on of dapagliflozin and saxagliptin to metformin. In the concomitant add-on trial, patients with type 2 diabetes on stable metformin received dapagliflozin 10 mg/d plus saxagliptin 5 mg/d. In sequential add-on trials, patients on metformin plus either saxagliptin 5 mg/d or dapagliflozin 10 mg/d received dapagliflozin 10 mg/d or saxagliptin 5 mg/d, respectively, as add-on therapy. After 24 weeks, incidences of adverse events and serious adverse events were similar between triple and dual therapy and between concomitant and sequential add-on regimens. Urinary tract infections were more common with sequential than with concomitant add-on therapy; genital infections were reported only with sequential add-on of dapagliflozin to saxagliptin plus metformin. Hypoglycaemia incidence was <2.0% across all analysis groups. In conclusion, the safety and tolerability of triple therapy with dapagliflozin, saxagliptin and metformin, as either concomitant or sequential add-on, were similar to dual therapy with either agent added to metformin. © 2018 The Authors. Diabetes, Obesity and Metabolism published by John Wiley & Sons Ltd.
Turschner, Oliver; D'hooge, Jan; Dommke, Christoph; Claus, Piet; Verbeken, Erik; De Scheerder, Ivan; Bijnens, Bart; Sutherland, George R
2004-05-01
Successful primary PTCA (with TIMI 3 reflow) in patients with acute transmural infarction has been observed to result in an immediate abnormal increase in wall thickness associated with persisting abnormal post-systolic thickening. To understand the sequential changes in regional deformation during: (i) the development of acute transmural infarction, (ii) upon TIMI grade 3 infarct reperfusion and (iii) during the subsequent expression of reperfusion injury the following correlative experimental study was performed in a pure animal model in which there was no distal dispersion of thrombotic material causing either no reflow or secondary microvascular obstruction. In 10 closed-chest pigs, a 90 min PTCA circumflex occlusion was used to induce a transmural infarction. This was followed by 60 min of TIMI 3 infarct reperfusion. M-mode ultrasound data from the "at risk" posterior wall infarct segment and from a control remote non-ischemic septal segment were acquired at standardized time intervals. Changes in regional deformation (end-diastolic (EDWT), end-systolic (ESWT) and post-systolic (PSWT) wall thickness, end-systolic strain (εES) and post-systolic strain (εPS)) were measured. In this pure animal model of acute transmural infarction/infarct reperfusion (with no pre-existing intra-luminal thrombus), the induced changes in wall thickness and thickening were complex. During prolonged occlusion, after an initial acute fall in ESWT, there was no further change in systolic deformation to indicate the progression of ischaemia to infarction. Both transmurally infarcted and reperfused-infarcted myocardium retained post-systolic thickening indicating that this parameter, taken in isolation, is not a consistent marker of segmental viability and, in this regard, should be interpreted only in combination with other indices of segmental function. The most striking abnormality induced by reperfusion was an immediate increase in EDWT which then increased logarithmically over a 60 min period as reperfusion injury was further expressed. PS did not change significantly during reperfusion. Histology confirmed the wall thickness changes on reperfusion to be due to massive extra-cellular oedema. The identification of an acute increase in regional wall thickness in a reperfused infarct zone by cardiac ultrasound following primary PTCA might be used in patients to both identify successful infarct reperfusion and to monitor the presence, extent and resolution of the oedema associated with reperfusion injury.
A Pocock Approach to Sequential Meta-Analysis of Clinical Trials
ERIC Educational Resources Information Center
Shuster, Jonathan J.; Neu, Josef
2013-01-01
Three recent papers have provided sequential methods for meta-analysis of two-treatment randomized clinical trials. This paper provides an alternate approach that has three desirable features. First, when carried out prospectively (i.e., we only have the results up to the time of our current analysis), we do not require knowledge of the…
Sequential-Simultaneous Analysis of Japanese Children's Performance on the Japanese McCarthy.
ERIC Educational Resources Information Center
Ishikuma, Toshinori; And Others
This study explored the hypothesis that Japanese children perform significantly better on simultaneous processing than on sequential processing. The Kaufman Assessment Battery for Children (K-ABC) served as the criterion of the two types of mental processing. Regression equations to predict Sequential and Simultaneous processing from McCarthy…
Description and effects of sequential behavior practice in teacher education.
Sharpe, T; Lounsbery, M; Bahls, V
1997-09-01
This study examined the effects of a sequential behavior feedback protocol on the practice-teaching experiences of undergraduate teacher trainees. The performance competencies of teacher trainees were analyzed using an alternative opportunities for appropriate action measure. Data support the added utility of sequential (Sharpe, 1997a, 1997b) behavior analysis information in systematic observation approaches to teacher education. One field-based undergraduate practicum using sequential behavior (i.e., field systems analysis) principles was monitored. Summarized are the key elements of the (a) classroom instruction provided as a precursor to the practice teaching experience, (b) practice teaching experience, and (c) field systems observation tool used for evaluation and feedback, including multiple-baseline data (N = 4) to support this approach to teacher education. Results point to (a) the strong relationship between sequential behavior feedback and the positive change in four preservice teachers' day-to-day teaching practices in challenging situational contexts, and (b) the relationship between changes in teacher practices and positive changes in the behavioral practices of gymnasium pupils. Sequential behavior feedback was also socially validated by the undergraduate participants and Professional Development School teacher supervisors in the study.
Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos
NASA Astrophysics Data System (ADS)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
NASA Astrophysics Data System (ADS)
Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.
2016-07-01
This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.
Near-atomic resolution visualization of human transcription promoter opening
He, Yuan; Yan, Chunli; Fang, Jie; Inouye, Carla; Tjian, Robert; Ivanov, Ivaylo; Nogales, Eva
2016-01-01
In eukaryotic transcription initiation, a large multi-subunit pre-initiation complex (PIC) that assembles at the core promoter is required for the opening of the duplex DNA and identification of the start site for transcription by RNA polymerase II. Here we use cryo-electron microscopy (cryo-EM) to determine near-atomic resolution structures of the human PIC in a closed state (engaged with duplex DNA), an open state (engaged with a transcription bubble), and an initially transcribing complex (containing six base pairs of DNA–RNA hybrid). Our studies provide structures for previously uncharacterized components of the PIC, such as TFIIE and TFIIH, and segments of TFIIA, TFIIB and TFIIF. Comparison of the different structures reveals the sequential conformational changes that accompany the transition from each state to the next throughout the transcription initiation process. This analysis illustrates the key role of TFIIB in transcription bubble stabilization and provides strong structural support for a translocase activity of XPB. PMID:27193682
Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging
NASA Astrophysics Data System (ADS)
Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping
2011-03-01
In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler and Doppler variance images were compared. Blood vessels down to the capillary level could be visualized with the optimized color Doppler and Doppler variance methods. For in-vivo imaging of human eyes, the bulk-motion-induced bulk phase must be identified and removed before using the color Doppler method. It was found that the Doppler variance method is not sensitive to bulk motion and can be used without removing the bulk phase. A novel, simple and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers. The algorithm was based on the detected OCT signal intensity difference between different layers. A spectrometer-based Fourier domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The 3-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 x 5 mm2.
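The intensity-difference idea behind the segmentation step can be sketched generically: in many OCT B-scans the RPE is close to the brightest pixel in each A-line, so a per-A-line argmax followed by median filtering gives a rough boundary. This is a simplified illustration, not the authors' algorithm.

```python
"""Simplified sketch of intensity-based RPE localisation in an OCT B-scan:
take the brightest pixel in each A-line and median-filter the resulting
boundary.  A generic illustration, not the authors' algorithm."""
import numpy as np
from scipy.ndimage import median_filter

def locate_rpe(bscan, smooth=15):
    """bscan: (depth, n_alines) intensity image -> RPE depth index per A-line."""
    rpe_depth = np.argmax(bscan, axis=0)              # brightest pixel per A-line
    return median_filter(rpe_depth, size=smooth)      # suppress vessel-shadow outliers

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    depth, width = 512, 2048
    bscan = rng.normal(10, 2, (depth, width))
    true_rpe = (300 + 40 * np.sin(np.linspace(0, np.pi, width))).astype(int)
    bscan[true_rpe, np.arange(width)] += 100          # bright RPE-like band
    est = locate_rpe(bscan)
    print(np.abs(est - true_rpe).mean())              # mean depth error in pixels
```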
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garikapati, Venu; Astroza, Sebastian; Pendyala, Ram M.
Travel model systems often adopt a single decision structure that links several activity-travel choices together. The single decision structure is then used to predict activity-travel choices, with those downstream in the decision-making chain influenced by those upstream in the sequence. The adoption of a singular sequential causal structure to depict relationships among activity-travel choices in travel demand model systems ignores the possibility that some choices are made jointly as a bundle as well as the possible presence of structural heterogeneity in the population with respect to decision-making processes. As different segments in the population may adopt and follow different causal decision-making mechanisms when making selected choices jointly, it would be of value to develop simultaneous equations model systems relating multiple endogenous choice variables that are able to identify population subgroups following alternative causal decision structures. Because the segments are not known a priori, they are considered latent and determined endogenously within a joint modeling framework proposed in this paper. The methodology is applied to a national mobility survey data set to identify population segments that follow different causal structures relating residential location choice, vehicle ownership, and car-share and mobility service usage. It is found that the model revealing three distinct latent segments best describes the data, confirming the efficacy of the modeling approach and the existence of structural heterogeneity in decision-making in the population. Future versions of activity-travel model systems should strive to incorporate such structural heterogeneity to better reflect varying decision processes across population subgroups.
NASA Astrophysics Data System (ADS)
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media have been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
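The interleaved grouping at the heart of ISC can be sketched as below. For brevity, the per-group search steps each segment through a few test phases rather than running a genetic algorithm, and measure_focus_intensity is a hypothetical stand-in for the camera or photodetector feedback.

```python
"""Sketch of interleaved segment correction (ISC) for wavefront shaping:
SLM segments are split into interleaved groups and optimised group by group.
The per-group search here steps each segment through test phases instead of
running a GA; `measure_focus_intensity` is a hypothetical feedback stand-in."""
import numpy as np

N_SEG, N_GROUPS, TEST_PHASES = 256, 4, 8
rng = np.random.default_rng(4)
aberration = rng.uniform(0, 2 * np.pi, N_SEG)          # unknown scattering phase

def measure_focus_intensity(phase_mask):
    """Stand-in for the experimental feedback signal."""
    field = np.exp(1j * (phase_mask - aberration)).sum()
    return np.abs(field) ** 2 / N_SEG ** 2

phase_mask = np.zeros(N_SEG)
groups = [np.arange(N_SEG)[g::N_GROUPS] for g in range(N_GROUPS)]   # interleaved groups

for group in groups:                                    # optimise groups sequentially
    for seg in group:
        candidates = np.linspace(0, 2 * np.pi, TEST_PHASES, endpoint=False)
        scores = []
        for phi in candidates:
            phase_mask[seg] = phi
            scores.append(measure_focus_intensity(phase_mask))
        phase_mask[seg] = candidates[int(np.argmax(scores))]

print("focus enhancement:", round(measure_focus_intensity(phase_mask) /
                                  measure_focus_intensity(np.zeros(N_SEG)), 1))
```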
Schittek Janda, M; Tani Botticelli, A; Mattheos, N; Nebel, D; Wagner, A; Nattestad, A; Attström, R
2005-05-01
Video-based instructions for clinical procedures have been used frequently during the preceding decades. The aim was to investigate, in a randomised controlled trial, the learning effectiveness of fragmented videos vs. the complete sequential video and to analyse the attitudes of the users towards video as a learning aid. An instructional video on surgical hand wash was produced. The video was available in two different forms on two separate web pages: one as a sequential video and one fragmented into eight short clips. Twenty-eight dental students in the second semester were randomised into an experimental (n = 15) and a control group (n = 13). The experimental group used the fragmented form of the video and the control group watched the complete one. The use of the videos was logged and the students were videotaped whilst undertaking a test hand wash. The videos were analysed systematically and blindly by two independent clinicians. The students also performed a written test concerning the learning outcome from the videos and answered an attitude questionnaire. The students in the experimental group watched the video significantly longer than the control group. There were no significant differences between the groups with regard to the ratings and scores when performing the hand wash. The experimental group had significantly better results in the written test compared with those of the control group. There was no significant difference between the groups with regard to attitudes towards the use of video for learning, as measured by the Visual Analogue Scales. Most students in both groups expressed satisfaction with the use of video for learning. The students demonstrated positive attitudes and acceptable learning outcomes from viewing CAL videos as a part of their pre-clinical training. Videos that are part of computer-based learning settings would ideally be presented to the students both as a segmented and as a whole video, giving the students the option to choose the form of video which suits the individual student's learning style.
Changes in autonomic activity preceding onset of nonsustained ventricular tachycardia
NASA Technical Reports Server (NTRS)
Osaka, M.; Saitoh, H.; Sasabe, N.; Atarashi, H.; Katoh, T.; Hayakawa, H.; Cohen, R. J.
1996-01-01
Background: The triggering role of the autonomic nervous system in the initiation of ventricular tachycardia has not been established. To investigate the relationship between changes in autonomic activity and the occurrence of nonsustained ventricular tachycardia (NSVT), we examined heart rate variability (HRV) during the 2-hour period preceding spontaneous episodes of NSVT. Twenty-four subjects were identified retrospectively as having had one episode of NSVT during 24-hour Holter ECG recording. Methods: We measured the mean interval between normal beats (meanRR), the standard deviation of the intervals between beats (SD), the percentage of counts of sequential intervals between normal beats with a change of >50 ms (%RR50), and the logarithms of low- and high-frequency spectral components (lnLF, lnHF) of HRV for sequential 10-minute segments preceding NSVT. The correlation dimension (CDim) of HRV was calculated similarly for sequential 20-minute segments. We assessed the significance of the time-course change of each marker over the 120-minute period prior to NSVT onset. Results: MeanRR (P < 0.05), lnLF (P < 0.0001), lnHF (P < 0.0001), the natural logarithm of the ratio of LF to HF (ln[LF/HF]; P < 0.05), and CDim (P < 0.05) showed significant time-course changes during that period, while SD and %RR50 did not. MeanRR, lnLF, lnHF, and CDim all decreased prior to the onset of NSVT, whereas ln(LF/HF) increased. We divided the subjects into two groups: one consisting of 12 patients with coronary artery disease, and the second group of 12 patients without known coronary artery disease. Both groups showed significant changes (P < 0.05) of CDim, lnLF, and lnHF preceding the episodes of NSVT. Conclusions: Changes in the pattern of HRV prior to the onset of episodes of NSVT suggest that changes in autonomic activity may commonly play a role in the triggering of spontaneous episodes of NSVT in susceptible patients. The measured changes suggest that a reduction in parasympathetic activity, perhaps in conjunction with an increase in sympathetic activity, may trigger NSVT.
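The HRV indices named above can be computed generically from an RR-interval series as sketched below; this is not the study's code, and the standard LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands plus a 4 Hz resampling rate are assumed.

```python
"""Generic computation of HRV indices (meanRR, SD, %RR50, lnLF, lnHF) from an
RR-interval series; not the study's code.  Standard bands assumed."""
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_indices(rr_ms, fs_resample=4.0):
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                                   # beat times in s
    mean_rr, sd = rr.mean(), rr.std(ddof=1)
    prr50 = 100.0 * np.mean(np.abs(np.diff(rr)) > 50)            # %RR50
    # resample the RR tachogram evenly for spectral analysis
    tt = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = interp1d(t, rr, kind="cubic")(tt)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=256)
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df                # LF power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df                # HF power
    return mean_rr, sd, prr50, np.log(lf), np.log(hf), np.log(lf / hf)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 600
    beats = np.arange(n)
    rr = 800 + 30 * np.sin(2 * np.pi * 0.25 * beats * 0.8) + rng.normal(0, 15, n)
    print(np.round(hrv_indices(rr), 3))
```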
Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.
ERIC Educational Resources Information Center
Harris, Arlo D.; Kalbus, Lee H.
1979-01-01
Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)
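A short worked example of the arithmetic behind such a sequential gravimetric experiment is given below, using approximate molar masses for the steps CuSO4·5H2O → CuSO4 → CuO; the 2.000 g sample mass is hypothetical.

```python
"""Worked example (approximate molar masses) of the theoretical mass losses
in the sequential decomposition CuSO4.5H2O -> CuSO4 -> CuO."""
M = {"CuSO4.5H2O": 249.7, "CuSO4": 159.6, "CuO": 79.5}   # g/mol, approximate

sample = 2.000                                            # g of pentahydrate, hypothetical
moles = sample / M["CuSO4.5H2O"]
after_dehydration = moles * M["CuSO4"]                    # mass after losing 5 H2O
after_decomposition = moles * M["CuO"]                    # thermally stable oxide residue

print(f"water lost:        {sample - after_dehydration:.3f} g "
      f"({100 * (sample - after_dehydration) / sample:.1f} % of sample)")
print(f"anhydrous CuSO4:   {after_dehydration:.3f} g")
print(f"final CuO residue: {after_decomposition:.3f} g "
      f"({100 * after_decomposition / sample:.1f} % of sample)")
```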
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
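The traditional procedure (T-PA) referred to above can be coded in a few lines: retain factors whose sample eigenvalues exceed a chosen percentile of eigenvalues from random data of the same size. The sketch below is a generic T-PA illustration, not the authors' R-PA implementation.

```python
"""Bare-bones traditional parallel analysis (T-PA): retain factors whose
sample eigenvalues exceed the 95th percentile of eigenvalues from random
data of the same dimensions.  Generic sketch, not the authors' code."""
import numpy as np

def parallel_analysis(X, n_sims=200, percentile=95, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    sample_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        R = rng.standard_normal((n, p))
        rand_eigs[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresholds = np.percentile(rand_eigs, percentile, axis=0)
    below = sample_eigs <= thresholds              # first eigenvalue not above chance
    n_factors = int(np.argmax(below)) if below.any() else p
    return n_factors, sample_eigs, thresholds

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    latent = rng.standard_normal((300, 2))                   # two latent factors
    loadings = rng.standard_normal((2, 8))
    X = latent @ loadings + 0.7 * rng.standard_normal((300, 8))
    print("factors retained:", parallel_analysis(X)[0])
```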
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.
2005-01-01
NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.
Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP), which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme allowing us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show that the trapping of atoms from solute is exactly accounted for when using a pseudo trap. However, a single effective pseudo trap energy cannot well replicate the release from multiple traps, each with its own detrapping energy. Nevertheless, atoms held in a high energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest energy trap is nearly emptied and can be removed from the set of coupled equations, while the next higher energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment solution being used as an initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.
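The trap-release kinetics underlying TMAP and the PTTP scheme can be illustrated with a one-trap thermal desorption sketch: during a linear temperature ramp the trapped inventory empties at an Arrhenius detrapping rate. This is a minimal illustration only, not an implementation of TMAP or of the PTTP partitioning; the attempt frequency and ramp rate are assumed values.

```python
"""Minimal single-trap thermal desorption sketch: d(n_t)/dt = -nu * n_t *
exp(-E_dt / (k_B * T(t))) during a linear temperature ramp.  Illustrates the
release kinetics TMAP models; it is NOT the PTTP scheme itself, and the
attempt frequency and ramp rate below are assumed values."""
import numpy as np
from scipy.integrate import solve_ivp

K_B = 8.617e-5          # eV/K
NU = 1e13               # attempt frequency, 1/s (assumed)
RAMP = 0.5              # heating rate, K/s (assumed)
T0 = 300.0              # start temperature, K
E_DT = 1.4              # detrapping energy, eV (one of the reported peaks)

def dndt(t, n):
    T = T0 + RAMP * t
    return [-NU * n[0] * np.exp(-E_DT / (K_B * T))]

sol = solve_ivp(dndt, (0.0, 2000.0), [1.0], max_step=1.0)
T = T0 + RAMP * sol.t
release_rate = -np.gradient(sol.y[0], sol.t)
print(f"desorption peak near {T[np.argmax(release_rate)]:.0f} K")
```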
Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo
2010-01-01
Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To solve the second question, we performed sequential pattern analysis. Results We determined: 1) In general, the most influential variables which impact whether a patient of a public healthcare center will revisit it or not are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model is dependent on the dataset. 3) Based on average of classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
The sequential structure of brain activation predicts skill.
Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa
2016-01-29
In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Improving material identification by combining x-ray and neutron tomography
NASA Astrophysics Data System (ADS)
LaManna, Jacob M.; Hussey, Daniel S.; Baltic, Eli; Jacobson, David L.
2017-09-01
X-rays and neutrons provide complementary non-destructive probes for the analysis of structure and chemical composition of materials. Contrast differences between the modes arise due to the differences in interaction with matter. Due to the high sensitivity to hydrogen, neutrons excel at separating liquid water or hydrogenous phases from the underlying structure, while X-rays resolve the solid structure. Many samples of interest, such as fluid flow in porous materials or curing concrete, are stochastic or slowly changing with time, which makes analysis of sequential imaging with X-rays and neutrons difficult as the sample may change between scans. To alleviate this issue, NIST has developed a system for simultaneous X-ray and neutron tomography by orienting a 90 keV-peak micro-focus X-ray tube orthogonally to a thermal neutron beam. This system allows for non-destructive, multimodal tomography of dynamic or stochastic samples while penetrating through sample environment equipment such as pressure and flow vessels. Current efforts are underway to develop methods for 2D-histogram-based segmentation of reconstructed volumes. By leveraging the contrast differences between X-rays and neutrons, greater histogram peak separation can occur in 2D vs. 1D, enabling improved material identification.
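The 2D-histogram idea mentioned above pairs each voxel's X-ray and neutron attenuation values; a minimal sketch is to build the joint histogram of the co-registered volumes and label voxels with rectangular gates placed around its peaks. The synthetic data and gate positions below are hypothetical, and NIST's actual workflow may differ.

```python
"""Sketch of 2D-histogram material segmentation for co-registered X-ray and
neutron volumes: voxels are labelled by rectangular gates placed around joint
histogram peaks.  Synthetic data and gate positions are hypothetical."""
import numpy as np

rng = np.random.default_rng(6)
n_vox = 100_000
# synthetic co-registered attenuation values for three "materials"
xray = np.concatenate([rng.normal(0.05, 0.02, n_vox),   # pores: low X-ray, low neutron
                       rng.normal(0.60, 0.05, n_vox),   # solid: high X-ray, low neutron
                       rng.normal(0.25, 0.04, n_vox)])  # water: low X-ray, high neutron
neut = np.concatenate([rng.normal(0.05, 0.02, n_vox),
                       rng.normal(0.15, 0.04, n_vox),
                       rng.normal(0.80, 0.05, n_vox)])

# joint histogram; its peaks guide the gate placement (gates set by hand here)
hist, xe, ye = np.histogram2d(xray, neut, bins=128)

gates = {  # (x_lo, x_hi, n_lo, n_hi) rectangular gates around the three peaks
    "pore":  (0.00, 0.15, 0.00, 0.30),
    "solid": (0.40, 0.80, 0.00, 0.40),
    "water": (0.10, 0.40, 0.55, 1.00),
}
labels = np.zeros(xray.size, dtype=np.uint8)
for lab, (x0, x1, n0, n1) in enumerate(gates.values(), start=1):
    labels[(xray >= x0) & (xray < x1) & (neut >= n0) & (neut < n1)] = lab

print({name: int((labels == i + 1).sum()) for i, name in enumerate(gates)})
```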
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give definitions of 166 terms used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, laplacian, learning, least squares, likelihood, matching, Markov field, mask, matching, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, spline, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Sequential, progressive, equal-power, reflective beam-splitter arrays
NASA Astrophysics Data System (ADS)
Manhart, Paul K.
2017-11-01
The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Boolean operators and swept surfaces can be used to create objects capable of reflecting light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
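For context, a common lossless formulation of equal-power splitting (a sketch only, not necessarily the exact equations presented in the paper) sets the reflectivity of the k-th of N sequential splitters so that every tapped beam carries the same power:

```latex
% Lossless equal-power tapping of a single beam by N sequential splitters:
\[
  R_k = \frac{1}{N - k + 1}, \qquad k = 1,\dots,N,
\]
\[
  P_k^{\mathrm{reflected}}
  = R_k \prod_{j=1}^{k-1} (1 - R_j)\, P_0
  = \frac{P_0}{N}.
\]
```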
NASA Technical Reports Server (NTRS)
Duong, T. A.
2004-01-01
In this paper, we present a new, simple, and optimized hardware-architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which will help optimize the hardware implementation in VLSI and overcome the difficulties of traditional gradient descent in learning convergence and hardware implementation.
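One classical example of a sequential learning rule for PCA is Oja's rule, sketched below for the leading component; this is a generic illustration, not the specific hardware-oriented technique of the paper:

```python
import numpy as np

def oja_sequential_pca(samples, lr=0.01, seed=0):
    """Sequentially estimate the first principal component with Oja's rule.

    samples: iterable of 1-D feature vectors (assumed roughly zero-mean).
    Returns a unit-norm weight vector approximating the leading eigenvector
    of the sample covariance.
    """
    rng = np.random.default_rng(seed)
    samples = np.asarray(list(samples), dtype=float)
    w = rng.normal(size=samples.shape[1])
    w /= np.linalg.norm(w)
    for x in samples:
        y = w @ x                     # projection onto current estimate
        w += lr * y * (x - y * w)     # Oja update: Hebbian term with decay
        w /= np.linalg.norm(w)        # keep numerically well-scaled
    return w
```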
Bevevino, Adam J; Lehman, Ronald A; Kang, Daniel G; Gwinn, David E; Dmitriev, Anton E
2014-09-01
Human cadaveric biomechanical analysis. To investigate the effect on cervical spine segmental stability that results from a posterior foraminotomy after cervical disc arthroplasty (CDA). Posterior foraminotomy offers the ability to decompress cervical nerve roots while avoiding the need to extend a previous fusion or revise an arthroplasty to a fusion. However, the safety of a foraminotomy in the setting of CDA is unknown. Segmental nondestructive range of motion (ROM) was analyzed in 9 human cadaveric cervical spine specimens. After intact testing, each specimen was sequentially tested according to the following 4 experimental groups: group 1=C5-C6 CDA, group 2=C5-C6 CDA with unilateral C5-C6 foraminotomy, group 3=C5-C6 CDA with bilateral C5-C6 foraminotomy, and group 4=C5-C6 CDA with C5-C6 and C4-C5 bilateral foraminotomy. No differences in ROM were found between the intact, CDA, and foraminotomy specimens at C4-C5 or C6-C7. There was a step-wise increase in C5-C6 axial rotation from the intact state (8°) to group 4 (12°), although the difference did not reach statistical significance. At C5-C6, the degree of lateral bending remained relatively constant. Flexion and extension at C5-C6 were significantly higher in the foraminotomy specimens, groups 2 (18.1°), 3 (18.6°), and 4 (18.2°), compared with the intact state, 11.2°. However, no ROM difference was found within foraminotomy groups (2-4) or between the foraminotomy groups and the CDA group (group 1), 15.3°. Our results indicate that cervical stability is not significantly decreased by the presence, number, or level of posterior foraminotomies in the setting of CDA. The addition of foraminotomies to specimens with a pre-existing CDA resulted in small and insignificant increases in segmental ROM. Therefore, biomechanically, posterior foraminotomy/foraminotomies may be considered a safe and viable option in the setting of recurrent or adjacent level radiculopathy after cervical disc replacement. N/A.
Overview of technical trend of optical fiber/cable and research and development strategy of Samsung
NASA Astrophysics Data System (ADS)
Kim, Jin H.
2005-01-01
Fiber-to-the-Premises (FTTP), a keyword in the current fiber and cable industry, is leading research and development activities in varied directions. In fact, this industry momentum still appears weak, since market demand for bandwidth remains unbalanced with capacity in several market segments. However, the recent gradual recovery in metro and access networks is a positive sign for FTTP deployment projects. It is therefore preferable to optimize an R&D strategy suited to the current market trend of sequential investment.
NASA Astrophysics Data System (ADS)
Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.
2017-02-01
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. Multiple OCT-A images can be acquired sequentially for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature-preserving image averaging can be performed. In this report, we present a novel GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
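A minimal single-strip sketch of the SIFT-plus-affine registration step is given below, using OpenCV on the CPU; it is illustrative only, assumes 8-bit en-face images, and omits the GPU acceleration and the subsequent local non-rigid refinement described in the abstract:

```python
import cv2
import numpy as np

def register_strip_affine(strip, reference):
    """Estimate a strip-wise affine transform from SIFT keypoint matches.

    strip, reference: 8-bit grayscale en-face OCT-A images (one motion-free
    strip and the reference frame). Returns the warped strip, or None if
    too few matches are found.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(strip, None)
    kp2, des2 = sift.detectAndCompute(reference, None)
    if des1 is None or des2 is None:
        return None

    # Ratio-test filtering of nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    A, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if A is None:
        return None
    h, w = reference.shape[:2]
    return cv2.warpAffine(strip, A, (w, h))
```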
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
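A brief sketch of extracting histogram-of-oriented-gradients features from a micrograph with scikit-image follows; the pooling into a global orientation distribution is a hypothetical stand-in for the paper's shape statistics:

```python
import numpy as np
from skimage.feature import hog

def precipitate_shape_signature(micrograph, orientations=9,
                                pixels_per_cell=(16, 16)):
    """Compute a segmentation-free HOG signature of a micrograph.

    micrograph: 2-D grayscale array. Returns the HOG feature vector and a
    global orientation histogram obtained by pooling the per-cell
    histograms (a rough, hypothetical stand-in for the shape statistics
    discussed in the paper).
    """
    features = hog(micrograph,
                   orientations=orientations,
                   pixels_per_cell=pixels_per_cell,
                   cells_per_block=(1, 1),
                   feature_vector=True)
    pooled = features.reshape(-1, orientations).sum(axis=0)
    pooled /= pooled.sum()            # normalized orientation distribution
    return features, pooled
```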
Dong, Liang; Xu, Zhengwei; Chen, Xiujin; Wang, Dongqi; Li, Dichen; Liu, Tuanjing; Hao, Dingjun
2017-10-01
Many meta-analyses have been performed to study the efficacy of cervical disc arthroplasty (CDA) compared with anterior cervical discectomy and fusion (ACDF); however, these meta-analyses contain few data on adjacent segments, and investigators have not arrived at the same conclusions in the few meta-analyses that do address adjacent segments. With the increased concerns surrounding adjacent segment degeneration (ASDeg) and adjacent segment disease (ASDis) after anterior cervical surgery, it is necessary to perform a comprehensive meta-analysis to analyze adjacent segment parameters. To perform a comprehensive meta-analysis of adjacent segment motion, degeneration, disease, and reoperation after CDA compared with ACDF. Meta-analysis of randomized controlled trials (RCTs). PubMed, Embase, and Cochrane Library were searched for RCTs comparing CDA and ACDF before May 2016. The analysis parameters included follow-up time, operative segments, adjacent segment motion, ASDeg, ASDis, and adjacent segment reoperation. The risk of bias scale was used to assess the papers. Subgroup analysis and sensitivity analysis were used to analyze the reason for high heterogeneity. Twenty-nine RCTs fulfilled the inclusion criteria. Compared with ACDF, the rate of adjacent segment reoperation in the CDA group was significantly lower (p<.01), and subgroup analysis showed that the advantage of CDA in reducing adjacent segment reoperation increases with longer follow-up. There was no statistically significant difference in ASDeg between CDA and ACDF within the 24-month follow-up period; however, the rate of ASDeg in CDA was significantly lower than that of ACDF with the increase in follow-up time (p<.01). There was no statistically significant difference in ASDis between CDA and ACDF (p>.05). Cervical disc arthroplasty provided a lower adjacent segment range of motion (ROM) than did ACDF, but the difference was not statistically significant. Compared with ACDF, the advantages of CDA were lower ASDeg and adjacent segment reoperation. However, there was no statistically significant difference in ASDis and adjacent segment ROM. Copyright © 2017 Elsevier Inc. All rights reserved.
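For readers unfamiliar with how such rates are combined across trials, the following generic fixed-effect (inverse-variance) pooling of log relative risks is a sketch of the standard computation, not the exact model used in this meta-analysis:

```python
import numpy as np
from scipy import stats

def pool_relative_risk(events_t, n_t, events_c, n_c):
    """Fixed-effect (inverse-variance) pooling of per-study log relative risks.

    events_t, n_t: events and sample size per study in the treatment arm
    (e.g. CDA); events_c, n_c: same for the control arm (e.g. ACDF).
    Returns the pooled RR with a 95% CI.
    """
    e_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
    e_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c   # delta-method variance of log RR
    w = 1 / var
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    z = stats.norm.ppf(0.975)
    return np.exp(pooled), np.exp(pooled - z * se), np.exp(pooled + z * se)
```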
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica database of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially be able to reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been used infrequently, and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
Cost-Utility Analysis of Cochlear Implantation in Australian Adults.
Foteff, Chris; Kennedy, Steven; Milton, Abul Hasnat; Deger, Melike; Payk, Florian; Sanderson, Georgina
2016-06-01
Sequential and simultaneous bilateral cochlear implants are emerging as appropriate treatment options for Australian adults with sensory deficits in both cochleae. Current funding of Australian public hospitals does not provide for simultaneous bilateral cochlear implantation (CI) as a separate surgical procedure. Previous cost-effectiveness studies of sequential and simultaneous bilateral CI assumed that 100% of unilaterally treated patients transition to a sequential bilateral CI. This assumption does not place cochlear implantation in the context of the generally treated population. When mutually exclusive treatment options exist, such as unilateral CI, sequential bilateral CI, and simultaneous bilateral CI, the mean costs of the treated populations are weighted in the calculation of incremental cost-utility ratios. The objective was to evaluate the cost-utility of bilateral hearing aids (HAs) compared with unilateral, sequential, and simultaneous bilateral CI in Australian adults with bilateral severe to profound sensorineural hearing loss. Cost-utility analysis of secondary sources input to a Markov model. Australian health care perspective, lifetime horizon, with costs and outcomes discounted 5% annually. Bilateral HAs as treatment for bilateral severe to profound sensorineural hearing loss compared with unilateral, sequential, and simultaneous bilateral CI. Incremental costs per quality-adjusted life year (AUD/QALY). When compared with bilateral hearing aids, the incremental cost-utility ratio for the CI treatment population was AUD11,160/QALY. The incremental cost-utility ratio was weighted according to the number of patients treated unilaterally, sequentially, and simultaneously, as these were mutually exclusive treatment options. No peer-reviewed articles have reported the incremental analysis of cochlear implantation in a continuum of care for surgically treated populations with bilateral severe to profound sensorineural hearing loss. Unilateral, sequential, and simultaneous bilateral CI were cost-effective when compared with bilateral hearing aids. Technologies that reduce the total number of visits for a patient could introduce additional cost efficiencies into clinical practice.
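The core incremental cost-utility arithmetic, with the 5% annual discounting used in the analysis, can be sketched as follows; the function and variable names, and the example numbers in the comment, are hypothetical:

```python
def discounted_total(values, rate=0.05):
    """Present value of a yearly stream of costs or QALYs (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

def icer(costs_new, qalys_new, costs_ref, qalys_ref, rate=0.05):
    """Incremental cost-utility ratio (cost per QALY) of a new strategy vs a reference.

    Inputs are yearly cost and QALY streams over the model horizon; both are
    discounted at the same annual rate. Illustrative only; the published model
    also weights mutually exclusive CI options by the size of each treated
    population.
    """
    d_cost = discounted_total(costs_new, rate) - discounted_total(costs_ref, rate)
    d_qaly = discounted_total(qalys_new, rate) - discounted_total(qalys_ref, rate)
    return d_cost / d_qaly

# Hypothetical two-year illustration:
# icer([30000, 500], [0.70, 0.72], [4000, 800], [0.55, 0.56])
```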
Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand
NASA Astrophysics Data System (ADS)
Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.
2013-12-01
Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine Fault can inhibit rupture propagation, producing a soft earthquake segment boundary. The study demonstrates the utility of lakes as paleoseismometers that can be used to reconstruct the spatial and temporal patterns of earthquakes on a fault.
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an... (repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis... legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes
Caprioglio, Alberto; Cozzani, Mauro; Fontana, Mattia
2014-01-01
There are controversial opinions about the effect of erupted second molars on distalization of the first molars. Most distalizing devices are anchored on the first molars, without including the second molars; thus, differences between sequentially distalizing the maxillary molars (second molar followed by the first molar) and distalizing the second and first molars together are not clear. The aim of the study was to compare sequential versus simultaneous molar distalization therapy with erupted second molars using two different modified Pendulum appliances followed by fixed appliances. The treatment sample consisted of 35 class II malocclusion subjects, divided into two groups: group 1 consisted of 24 patients (13 males and 11 females) with a mean pre-treatment age of 12.9 years, treated with the Segmented Pendulum (SP) and fixed appliances; group 2 consisted of 11 patients (6 males and 5 females) with a mean pre-treatment age of 13.2 years, treated with the Quad Pendulum (QP) and fixed appliances. Lateral cephalograms were obtained before treatment (T1), at the end of distalization (T2), and at the end of orthodontic fixed appliance therapy (T3). A Student t test was used to identify significant between-group differences from T1 to T2, T2 to T3, and T1 to T3. QP and SP were equally effective in distalizing maxillary molars (3.5 and 4 mm, respectively) between T1 and T2; however, the maxillary first molar showed less distal tipping (4.6° vs. 9.6°) and more extrusion (1.1 vs. 0.2 mm) in the QP group than in the SP group; the vertical facial dimension also increased more in the QP group (1.2°) than in the SP group (0.7°). At T3, the QP group maintained a greater increase in lower anterior facial height and molar extrusion, and a greater decrease in overbite, than the SP group. The Quad Pendulum seems to produce a greater increase in vertical dimension and molar extrusion than the Segmented Pendulum.
Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A; Vandenberghe, Stefaan
2014-02-01
Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. In conclusion, a transmission-based technique with an annulus-shaped transmission source will be more accurate than a conventional MR-based technique for measuring attenuation coefficients at 511 keV in future whole-body PET/MR studies.
A comprehensive segmentation analysis of crude oil market based on time irreversibility
NASA Astrophysics Data System (ADS)
Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi
2016-05-01
In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and a series beginning in 1986 (marked as S∗) to find common segments which have the same boundaries. Then we apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and basically do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments is fairly close to the times of crises, wars, or other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily group series or the weekly group series due to the large divergence between common segments and their neighbors. The identification of high-asymmetry segments, in turn, is helpful for knowing which segments are not badly affected by the events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments which are neither common nor highly asymmetric.
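A minimal sketch of the statistical distance underlying this kind of entropic segmentation, computing the Jensen-Shannon distance between the empirical distributions of two adjacent segments with SciPy, is shown below (it does not reproduce the paper's full recursive segmentation procedure):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_distance_between_segments(seg_a, seg_b, bins=30):
    """Jensen-Shannon distance between the value distributions of two segments.

    seg_a, seg_b: 1-D arrays of prices (or returns) from adjacent segments.
    Histograms are built on a common support so the two distributions are
    comparable.
    """
    lo = min(seg_a.min(), seg_b.min())
    hi = max(seg_a.max(), seg_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(seg_a, bins=edges, density=True)
    q, _ = np.histogram(seg_b, bins=edges, density=True)
    # SciPy returns the JS *distance* (the square root of the divergence)
    # and normalizes p and q internally.
    return jensenshannon(p, q)
```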
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
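A modern batch least-squares analogue of the quadratic-curve fitting described above can be sketched with NumPy as follows; it minimizes the algebraic residual of a general conic rather than implementing the paper's recursive update:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares (algebraic) fit of a general conic to points in the plane.

    Finds coefficients (a, b, c, d, e, f) of a x^2 + b xy + c y^2 + d x + e y + f = 0
    minimizing the algebraic residual subject to unit coefficient norm, via SVD.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The right singular vector of the smallest singular value minimizes ||D v||
    # over unit vectors v.
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]

def fit_line(x, y):
    """Ordinary least-squares straight line y = m x + b, for comparison."""
    A = np.column_stack([x, np.ones_like(x)])
    (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return m, b
```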
A smart sensor architecture based on emergent computation in an array of outer-totalistic cells
NASA Astrophysics Data System (ADS)
Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred
2005-06-01
A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the characters recognized in the monochrome visual field, and can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at performing the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
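To make the outer-totalistic update concrete, the following sketch performs one synchronous step of a binary outer-totalistic cellular automaton on a pixel grid; the example rule in the comment is a generic Life-like stand-in, not the cropping rule designed in the paper:

```python
import numpy as np
from scipy.signal import convolve2d

# 8-neighbour (Moore) kernel excluding the cell itself: the "outer" neighbour sum.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def outer_totalistic_step(grid, rule):
    """One synchronous update of a binary outer-totalistic cellular automaton.

    grid: 2-D array of 0/1 cell states (e.g. thresholded pixel sensors).
    rule: callable (state, neighbour_sum) -> new state.
    """
    nbr = convolve2d(grid, KERNEL, mode='same', boundary='fill', fillvalue=0)
    vec_rule = np.vectorize(rule)
    return vec_rule(grid, nbr).astype(grid.dtype)

# Example with a Life-like outer-totalistic rule as a stand-in:
# next_grid = outer_totalistic_step(
#     grid, lambda s, n: 1 if (s and n in (2, 3)) or (not s and n == 3) else 0)
```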
Xu, Yong-Qing; Li, Jun; Zhong, Shi-Zhen; Xu, Da-Chuan; Xu, Xiao-Shan; Guo, Yuan-Fa; Wang, Xin-Min; Li, Zhu-Yi; Zhu, Yue-Liang
2004-12-01
To clarify the anatomical relationship of the structures in the first toe webbing space for better dissection of toes in thumb reconstruction. The first dorsal metatarsal artery, the first deep transverse metatarsal ligament and the extensor expansion were observed on 42 adult cadaveric lower extremities. Clinically the method of tracing the first dorsal metatarsal artery around the space of the extensor expansion was used in 36 cases of thumb reconstruction. The distal segments of the first dorsal metatarsal artery of Gilbert types I and II were located superficially to the extensor expansion. The harvesting time of a toe was shortened from 90 minutes to 50 minutes with 100% survival of reconstructed fingers. The distal segment of the first dorsal metatarsal artery lies constantly at the superficial layer of the extensor expansion. Most of the first metatarsal arteries of Gilbert types I and II can be easily located via the combined sequential and reverse dissection around the space of the extensor expansion.
Web-based unfolding cases: a strategy to enhance and evaluate clinical reasoning skills.
Johnson, Gail; Flagler, Susan
2013-10-01
Clinical reasoning involves the use of both analytical and nonanalytical intuitive cognitive processes. Fostering student development of clinical reasoning skills and evaluating student performance in this cognitive arena can challenge educators. The use of Web-based unfolding cases is proposed as a strategy to address these challenges. Unfolding cases mimic real-life clinical situations by presenting only partial clinical information in sequential segments. Students receive immediate feedback after submitting a response to a given segment. The student's comparison of the desired and submitted responses provides information to enhance the development of clinical reasoning skills. Each student's set of case responses are saved for the instructor in an individual-student electronic file, providing a record of the student's knowledge and thinking processes for faculty evaluation. For the example case given, the approaches used to evaluate individual components of clinical reasoning are provided. Possible future uses of Web-based unfolding cases are described. Copyright 2013, SLACK Incorporated.
Radionuclide bone imaging in the evaluation of osseous allograft systems. Scientific report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, J.F.; Cagle, J.D.; Stevenson, J.S.
1975-02-01
Evaluation of the progress of osteogenic activity in mandibular bone grafts in dogs by a noninvasive, nondestructive radionuclide method is feasible. The method provides a meaningful sequential interpretation of osseous repair more sensitive than conventional radiography. It is presumed that accumulating hydroxyapatite is being labelled by the imaging agent technetium diphosphonate. The osseous allograft systems studied were comparable to or exceeded autografts in their repair activity in mandibular discontinuity defects as judged by radionuclide imaging. A lyophilized mandibular allograft segment augmented with autologous cancellous marrow was more active than autograft controls at 3 and 6 weeks and was the most active system studied. Allograft segments augmented with lyophilized crushed cortical allogeneic bone particles were equal to controls at 3 weeks and more active than controls at 6 weeks. Lyophilized crushed cortical allogeneic bone particles retained in a Millipore filter, while not clinically stable at 6 weeks, did show osteogenic activity equal to control autografts at this interval. (GRA)
Low-cost telepresence for collaborative virtual environments.
Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee
2007-01-01
We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
A circuit mechanism for the propagation of waves of muscle contraction in Drosophila
Fushiki, Akira; Zwart, Maarten F; Kohsaka, Hiroshi; Fetter, Richard D; Cardona, Albert; Nose, Akinao
2016-01-01
Animals move by adaptively coordinating the sequential activation of muscles. The circuit mechanisms underlying coordinated locomotion are poorly understood. Here, we report on a novel circuit for the propagation of waves of muscle contraction, using the peristaltic locomotion of Drosophila larvae as a model system. We found an intersegmental chain of synaptically connected neurons, alternating excitatory and inhibitory, necessary for wave propagation and active in phase with the wave. The excitatory neurons (A27h) are premotor and necessary only for forward locomotion, and are modulated by stretch receptors and descending inputs. The inhibitory neurons (GDL) are necessary for both forward and backward locomotion, suggestive of different yet coupled central pattern generators, and their inhibition is necessary for wave propagation. The circuit structure and functional imaging indicated that the commands to contract one segment promote the relaxation of the next segment, revealing a mechanism for wave propagation in peristaltic locomotion. DOI: http://dx.doi.org/10.7554/eLife.13253.001 PMID:26880545
Short segment search method for phylogenetic analysis using nested sliding windows
NASA Astrophysics Data System (ADS)
Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.
2017-10-01
For phylogenetic analysis in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is necessary. After the sliding window is implemented, a short segment that is better than the envelope protein segment and NS3 is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows to find a short segment which is representative of the whole genome. The results show that our method can find a short segment that is about 6.57% more representative of the CDS segment, in terms of tree topology, than the envelope segment or the NS3 segment.
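A bare-bones sketch of generating candidate segments with nested sliding windows is shown below; the window lengths, steps, and the downstream scoring step (comparing each candidate's tree topology against the full-CDS tree) are assumptions for illustration:

```python
def nested_sliding_windows(sequence, outer_len, inner_len,
                           outer_step=1, inner_step=1):
    """Enumerate candidate short segments with two nested sliding windows.

    An outer window of length outer_len slides over the aligned sequence;
    within each outer window an inner window of length inner_len slides to
    generate candidate segments (start, end, subsequence). Each candidate
    would then be scored, e.g. by comparing its phylogenetic tree topology
    with the full-CDS tree.
    """
    for o in range(0, len(sequence) - outer_len + 1, outer_step):
        outer = sequence[o:o + outer_len]
        for i in range(0, outer_len - inner_len + 1, inner_step):
            yield o + i, o + i + inner_len, outer[i:i + inner_len]

# Example (hypothetical window sizes, in nucleotides):
# candidates = list(nested_sliding_windows(cds_sequence, 900, 300, 150, 50))
```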
Matsushita, Kazuhiro; Inoue, Nobuo; Yamaguchi, Hiro-o; Ooi, Kazuhiro; Totsuka, Yasunori
2011-09-01
Alveolar distraction is mainly used to increase height and width of the alveolar crest. This technique, however, is not typically used for lengthening the perimeter of the dental arch or improving tooth axes. We applied alveolar distraction in a tooth-borne manner in the second stage of our original method and obtained favorable results. We therefore present an outline of this method. Genioplasty was first performed to create an infrastructure for sequential advancement of the subapical alveolar segment. After bone union, anterior subapical alveolar osteotomy was performed. The stump of the osteotomized dentate segment was moved forward without changing the incisal edge position, and a box-type bioabsorbable plate with four holes was fixed only onto the dentate segment using two screws. After a latency period, two distraction devices were placed bilaterally to the brackets and activated at 1.0 mm/day. After reaching the desired position, the distractor was immobilized, and then replaced by resin temporary teeth to retain the created space. After the consolidation period, orthodontic treatment was restarted and teeth moved into the newly created space. Bimaxillary surgery was performed after completing pre-surgical orthodontic treatment. Finally, both desirable occlusion and good masticatory function were obtained. This tooth-borne distraction system is one applicable method for patients with skeletal class II and crowding of lower anterior teeth, achieving good results particularly in combination with our original method.
Differential segmental growth of the vertebral column of the rat (Rattus norvegicus).
Bergmann, Philip J; Melin, Amanda D; Russell, Anthony P
2006-01-01
Despite the pervasive occurrence of segmental morphologies in the animal kingdom, the study of segmental growth is almost entirely lacking, but may have significant implications for understanding the development of these organisms. We investigate the segmental and regional growth of the entire vertebral column of the rat (Rattus norvegicus) by fitting a Gompertz curve to length and age data for each vertebra and each vertebral region. Regional lengths are calculated by summing constituent vertebral lengths and intervertebral space lengths for cervical, thoracic, lumbar, sacral, and caudal regions. Gompertz curves allow for the estimation of parameters representing neonatal and adult vertebral and regional lengths, as well as initial growth rate and the rate of exponential growth decay. Findings demonstrate differences between neonatal and adult rats in terms of relative vertebral lengths, and differential growth rates between sequential vertebrae and vertebral regions. Specifically, relative differences in the length of vertebrae indicate increasing differences caudad. Vertebral length in neonates increases from the atlas to the middle of the thoracic series and decreases in length caudad, while adult vertebral lengths tend to increase caudad. There is also a general trend of increasing vertebral and regional initial growth and rate of growth decay caudad. Anteroposterior patterns of growth are sexually dimorphic, with males having longer vertebrae than females at any given age. Differences are more pronounced (a) increasingly caudad along the body axis, and (b) in adulthood than in neonates. Elucidated patterns of growth are influenced by a combination of developmental, functional, and genetic factors.
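Fitting such a Gompertz curve to length-versus-age data can be sketched with SciPy as follows; the parameterization is one common form and the starting guesses are hypothetical, not the exact values used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, adult_len, b, k):
    """Gompertz growth: length approaches adult_len as age t increases.

    Neonatal length corresponds to adult_len * exp(-b); k governs the
    exponential decay of the growth rate.
    """
    return adult_len * np.exp(-b * np.exp(-k * t))

def fit_vertebral_growth(age_days, length_mm):
    """Fit a Gompertz curve to vertebral (or regional) length vs age data."""
    p0 = [max(length_mm), 1.0, 0.05]          # rough starting guesses
    params, cov = curve_fit(gompertz, age_days, length_mm, p0=p0, maxfev=10000)
    return params, cov
```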
A Quasi-3-D Theory for Impedance Eduction in Uniform Grazing Flow
NASA Technical Reports Server (NTRS)
Watson, W. R.; Jones, M. G.; Parrott, T. L.
2005-01-01
A 2-D impedance eduction methodology is extended to quasi-3-D sound fields in uniform or shearing mean flow. We introduce a nonlocal, nonreflecting boundary condition to terminate the duct and then educe the impedance by minimizing an objective function. The introduction of a parallel, sparse, equation solver significantly reduces the wall clock time for educing the impedance when compared to that of the sequential band solver used in the 2-D methodology. The accuracy, efficiency, and robustness of the methodology are demonstrated using two examples. In the first example, we show that the method reproduces the known impedance of a ceramic tubular test liner. In the second example, we illustrate that the approach educes the impedance of a four-segment liner where the first, second, and fourth segments consist of a perforated face sheet bonded to honeycomb, and the third segment is a cut from the ceramic tubular test liner. The ability of the method to educe the impedances of multisegmented liners has the potential to significantly reduce the amount of time and cost required to determine the impedance of several uniform liners by allowing them to be placed in series in the test section and to educe the impedance of each segment using a single numerical experiment. Finally, we probe the objective function in great detail and show that it contains a single minimum. Thus, our objective function is ideal for use with local, inexpensive, gradient-based optimizers.
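A generic form of the kind of objective function minimized in impedance eduction is sketched below; the exact functional and normalization used in the paper may differ:

```latex
% Generic impedance-eduction objective (a sketch):
\[
  J(\zeta) \;=\; \sum_{m=1}^{M}
  \bigl| p_{\mathrm{comp}}(x_m;\zeta) - p_{\mathrm{meas}}(x_m) \bigr|^{2},
  \qquad \zeta = \theta + i\chi ,
\]
% where p_comp is the duct acoustic pressure computed for a trial normalized
% impedance zeta and p_meas is the pressure measured at M microphone
% locations x_m on the wall opposite the liner; zeta is varied until J is
% minimized.
```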
ERIC Educational Resources Information Center
Matsumoto, Yumi
2011-01-01
This is a qualitative study of nonnative English speakers who speak English as a lingua franca (ELF) in their graduate student dormitory in the United States, a community of practice (Wegner, 2004) comprised almost entirely of second language users. Using a sequential analysis (Koshik, 2002; Markee, 2000; Sacks, Schegloff, & Jefferson, 1974;…
ERIC Educational Resources Information Center
Wu, Sheng-Yi; Hou, Huei-Tse
2015-01-01
Cognitive styles play an important role in influencing the learning process, but to date no relevant study has been conducted using lag sequential analysis to assess knowledge construction learning patterns based on different cognitive styles in computer-supported collaborative learning activities in online collaborative discussions. This study…
Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf
2018-01-01
Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an imperative task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) kidneys and the abdominal part of the heart are segmented, (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods. Results show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have utility in clinical use.
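The region-growing component of such a pipeline can be sketched as a simple 6-connected flood fill; the seed and intensity bounds are hypothetical, and the published method additionally relies on 4D curvature analysis and auto-adapted thresholds:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo, hi):
    """Simple 6-connected 3D region growing from a seed voxel.

    volume: 3-D CT intensity array; seed: (z, y, x) tuple; lo, hi: inclusive
    intensity range accepted into the region.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lo <= volume[seed] <= hi):
        return mask
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if lo <= volume[n] <= hi:
                    mask[n] = True
                    queue.append(n)
    return mask
```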
Molinari, Luisa; Mameli, Consuelo; Gnisci, Augusto
2013-09-01
A sequential analysis of classroom discourse is needed to investigate the conditions under which the triadic initiation-response-feedback (IRF) pattern may host different teaching orientations. The purpose of the study is twofold: first, to describe the characteristics of classroom discourse and, second, to identify and explore the different interactive sequences that can be captured with a sequential statistical analysis. Twelve whole-class activities were video recorded in three Italian primary schools. We observed classroom interaction as it occurs naturally on an everyday basis. In total, we collected 587 min of video recordings. Subsequently, 828 triadic IRF patterns were extracted from this material and analysed with the programme Generalized Sequential Query (GSEQ). The results indicate that classroom discourse may unfold in different ways. In particular, we identified and described four types of sequences. Dialogic sequences were triggered by authentic questions, and continued through further relaunches. Monologic sequences were directed to fulfil the teachers' pre-determined didactic purposes. Co-constructive sequences fostered deduction, reasoning, and thinking. Scaffolding sequences helped and sustained children with difficulties. The application of sequential analyses allowed us to show that interactive sequences may account for a variety of meanings, thus making a significant contribution to the literature and research practice in classroom discourse. © 2012 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Huang, Jia-Yann; Kao, Pan-Fu; Chen, Yung-Sheng
2007-06-01
Adjustment of brightness and contrast in nuclear medicine whole body bone scan images may confuse nuclear medicine physicians when identifying small bone lesions and makes the identification of subtle bone lesion changes in sequential studies difficult. In this study, we developed a computer-aided diagnosis system, based on a fuzzy-sets histogram thresholding method and an anatomical knowledge-based image segmentation method, that was able to analyze and quantify raw image data and identify the possible location of a lesion. To locate anatomical reference points, the fuzzy-sets histogram thresholding method was adopted as a first processing stage to suppress the soft tissue in the bone images. The anatomical knowledge-based image segmentation method was then applied to segment the skeletal frame into different regions of homogeneous bones. For the different segmented bone regions, the lesion thresholds were set at different cut-offs. To obtain lesion thresholds in different segmented regions, the ranges and standard deviations of the gray-level distribution were obtained from 100 normal patients' whole body bone images, and then another 62 patients' images were used for testing. The two groups of images were independent. The sensitivity and the mean number of false lesions detected were used as performance indices to evaluate the proposed system. The overall sensitivity of the system is 92.1% (222 of 241), with 7.58 false detections per patient scan image. With a high sensitivity and an acceptable false lesion detection rate, this computer-aided automatic lesion detection system is demonstrated to be useful and will probably in the future be able to help nuclear medicine physicians to identify possible bone lesions.
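The region-wise thresholding idea (a cut-off derived from the gray-level statistics of normal scans in each segmented bone region) can be sketched as follows; the data structures and the cut-off factor k are hypothetical, not the paper's exact cut-offs:

```python
import numpy as np

def lesion_thresholds(normal_images, region_masks, k=3.0):
    """Per-region lesion thresholds from a database of normal bone scans.

    normal_images: list of 2-D count images from normal patients (assumed
    spatially normalized so one mask per region applies to all of them);
    region_masks: dict region_name -> boolean mask of that bone region.
    Returns region_name -> threshold = mean + k * std of the normal
    gray-level distribution in that region.
    """
    thresholds = {}
    for name, mask in region_masks.items():
        values = np.concatenate([img[mask] for img in normal_images])
        thresholds[name] = values.mean() + k * values.std()
    return thresholds

def detect_lesions(image, region_masks, thresholds):
    """Flag pixels exceeding their region's threshold as candidate lesions."""
    lesions = np.zeros(image.shape, dtype=bool)
    for name, mask in region_masks.items():
        lesions |= mask & (image > thresholds[name])
    return lesions
```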
The anterior deltoid's importance in reverse shoulder arthroplasty: a cadaveric biomechanical study.
Schwartz, Daniel G; Kang, Sang Hoon; Lynch, T Sean; Edwards, Sara; Nuber, Gordon; Zhang, Li-Qun; Saltzman, Matthew
2013-03-01
Frequently, patients who are candidates for reverse shoulder arthroplasty have had prior surgery that may compromise the anterior deltoid muscle. There have been conflicting reports on the necessity of the anterior deltoid; thus, it is unclear whether a dysfunctional anterior deltoid muscle is a contraindication to reverse shoulder arthroplasty. The purpose of this study was to determine the 3-dimensional (3D) moment arms for all 6 deltoid segments, and determine the biomechanical significance of the anterior deltoid before and after reverse shoulder arthroplasty. Eight cadaveric shoulders were evaluated with a 6-axis force/torque sensor to assess the direction of rotation and 3D moment arms for all 6 segments of the deltoid both before and after placement of a reverse shoulder prosthesis. The 2 segments of the anterior deltoid were unloaded sequentially to determine their functional role. The 3D moment arms of the deltoid were significantly altered by placement of the reverse shoulder prosthesis. The anterior and middle deltoid abduction moment arms significantly increased after placement of the reverse prosthesis (P < .05). Furthermore, the loss of the anterior deltoid resulted in a significant decrease in both abduction and flexion moments (P < .05). The anterior deltoid is important biomechanically for balanced function after a reverse total shoulder arthroplasty. Losing 1 segment of the anterior deltoid may still allow abduction; however, losing both segments of the anterior deltoid may disrupt balanced abduction. Surgeons should be cautious about performing reverse shoulder arthroplasty in patients who do not have a functioning anterior deltoid muscle. Copyright © 2013 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Improvements in analysis techniques for segmented mirror arrays
NASA Astrophysics Data System (ADS)
Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.
2016-08-01
The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows analysis of predicted mechanical disturbances of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia
2012-01-01
Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.
Cotranslational structure acquisition of nascent polypeptides monitored by NMR spectroscopy.
Eichmann, Cédric; Preissler, Steffen; Riek, Roland; Deuerling, Elke
2010-05-18
The folding of proteins in living cells may start during their synthesis when the polypeptides emerge gradually at the ribosomal exit tunnel. However, our current understanding of cotranslational folding processes at the atomic level is limited. We employed NMR spectroscopy to monitor the conformation of the SH3 domain from alpha-spectrin at sequential stages of elongation via in vivo ribosome-arrested (15)N,(13)C-labeled nascent polypeptides. These nascent chains exposed either the entire SH3 domain or C-terminally truncated segments thereof, thus providing snapshots of the translation process. We show that nascent SH3 polypeptides remain unstructured during elongation but fold into a compact, native-like beta-sheet assembly when the entire sequence information is available. Moreover, the ribosome neither imposes major conformational constraints nor significantly interacts with exposed unfolded nascent SH3 domain moieties. Our data provide evidence for a domainwise folding of the SH3 domain on ribosomes without significant population of folding intermediates. The domain follows a thermodynamically favorable pathway in which sequential folding units are stabilized, thus avoiding kinetic traps during the process of cotranslational folding.
Approaches for Achieving Broadband Achromatic Phase Shifts for Visible Nulling Coronagraphy
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Lyon, Richard G.
2012-01-01
Visible nulling coronagraphy is one of the few approaches to the direct detection and characterization of Jovian and Terrestrial exoplanets that works with segmented aperture telescopes. Jovian and Terrestrial planets require at least 10^-9 and 10^-10 image plane contrasts, respectively, within the spectral bandpass and thus require a nearly achromatic pi-phase difference between the arms of the interferometer. An achromatic pi-phase shift can be achieved by several techniques, including sequential angled thick glass plates of varying dispersive materials, distributed thin-film multilayer coatings, and techniques that leverage the polarization-dependent phase shift of total internal reflection. Herein we describe two such techniques: sequential thick glass plates and Fresnel rhomb prisms. A viable technique must achieve the achromatic phase shift while simultaneously minimizing the intensity difference, chromatic beam spread, and polarization variation between each arm. In this paper we describe the above techniques and report on efforts to design, model, fabricate, and align each of them, along with the trades associated with each technique, leading to an implementation of the most promising one in Goddard's Visible Nulling Coronagraph (VNC).
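For reference, the standard expression for the relative s/p phase shift acquired on a single total internal reflection, which underlies Fresnel-rhomb retarders, is (textbook background, not the paper's design equations):

```latex
% Relative s/p phase shift on one total internal reflection
% (incidence angle theta_i above the critical angle, n = n2/n1 < 1):
\[
  \tan\!\frac{\delta}{2} \;=\;
  \frac{\cos\theta_i\,\sqrt{\sin^{2}\theta_i - n^{2}}}{\sin^{2}\theta_i},
  \qquad n = \frac{n_2}{n_1} < 1 .
\]
% A conventional glass rhomb accumulates roughly a quarter-wave shift over
% two such reflections; the weak wavelength dependence through n(lambda)
% is what makes the approach attractive for broadband designs and drives
% the trades discussed above.
```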
Lau, Christine S M; Ward, Amanda; Chamberlain, Ronald S
2016-06-01
Helicobacter pylori is a common infection associated with many gastrointestinal diseases. Triple or quadruple therapy is the current recommendation for H pylori eradication in children but is associated with success rates as low as 50%. Recent studies have demonstrated that a 10-day sequential therapy regimen, rather than simultaneous antibiotic administration, achieved eradication rates of nearly 95%. This meta-analysis found that sequential therapy increased eradication rates by 14.2% (relative risk [RR] = 1.142; 95% confidence interval [CI] = 1.082-1.207; P < .001). Ten-day sequential therapy significantly improved H pylori eradication rates compared to the 7-day standard therapy (RR = 1.182; 95% CI = 1.102-1.269; P < .001) and 10-day standard therapy (RR = 1.179; 95% CI = 1.074-1.295; P = .001), but had lower eradication rates compared to 14-day standard therapy (RR = 0.926; 95% CI = 0.811-1.059; P = .261). The use of sequential therapy is associated with increased H pylori eradication rates in children compared to standard therapy of equal or shorter duration. © The Author(s) 2015.
Casado, Pilar; Martín-Loeches, Manuel; León, Inmaculada; Hernández-Gutiérrez, David; Espuny, Javier; Muñoz, Francisco; Jiménez-Ortega, Laura; Fondevila, Sabela; de Vega, Manuel
2018-03-01
This study aims to extend the embodied cognition approach to syntactic processing. The hypothesis is that the brain resources to plan and perform motor sequences are also involved in syntactic processing. To test this hypothesis, Event-Related brain Potentials (ERPs) were recorded while participants read sentences with embedded relative clauses, judging their acceptability (half of the sentences contained a subject-verb morphosyntactic disagreement). The sentences, previously divided into three segments, were self-administered segment-by-segment in two different sequential manners: linear or non-linear. Linear self-administration consisted of successively pressing three buttons with three consecutive fingers in the right hand, while non-linear self-administration implied the substitution of the finger in the middle position by the right foot. Our aim was to test whether syntactic processing could be affected by the manner the sentences were self-administered. Main results revealed that the ERPs LAN component vanished whereas the P600 component increased in response to incorrect verbs, for non-linear relative to linear self-administration. The LAN and P600 components reflect early and late syntactic processing, respectively. Our results convey evidence that language syntactic processing and performing non-linguistic motor sequences may share resources in the human brain. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhu, Liangjia; Gao, Yi; Appia, Vikram; Yezzi, Anthony; Arepalli, Chesnal; Faber, Tracy; Stillman, Arthur; Tannenbaum, Allen
2014-01-01
Prognosis and diagnosis of cardiac diseases frequently require quantitative evaluation of the ventricle volume, mass, and ejection fraction. The delineation of the myocardial wall, which is involved in all of these evaluations, is a challenging task due to large variations in myocardial shapes and image quality. In this work, we present an automatic method for extracting the myocardial wall of the left and right ventricles from cardiac CT images. In the method, the left and right ventricles are located sequentially, in which each ventricle is detected by first identifying the endocardium and then segmenting the epicardium. To this end, the endocardium is localized by utilizing its geometric features obtained on-line from a CT image. After that, a variational region-growing model is employed to extract the epicardium of the ventricles. In particular, the location of the endocardium of the left ventricle is determined via using an active contour model on the blood-pool surface. To localize the right ventricle, the active contour model is applied on a heart surface extracted based on the left ventricle segmentation result. The robustness and accuracy of the proposed approach are demonstrated by experimental results from 33 human and 12 pig CT images. PMID:23744658
Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning
Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.
2013-01-01
This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554
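As a rough, generic illustration of the sparse-representation idea underlying the tracker above (learn a dictionary of local appearance patches, then encode new patches sparsely against it), the following scikit-learn sketch may help; the patch dimensions, number of atoms, and sparsity level are arbitrary assumptions, and the authors' online multiscale boosting framework is not reproduced.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(500, 64))          # hypothetical 8x8 appearance patches

    # Learn an appearance dictionary whose atoms sparsely reconstruct the patches
    dico = DictionaryLearning(n_components=48, transform_algorithm="omp",
                              transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(patches)

    # Sparse-code patches from a new frame against the learned dictionary
    new_patches = rng.normal(size=(10, 64))
    codes = sparse_encode(new_patches, dico.components_,
                          algorithm="omp", n_nonzero_coefs=5)
    reconstruction = codes @ dico.components_     # approximate local appearance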
Luck, Jeff; Hagigi, Fred; Parker, Louise E; Yano, Elizabeth M; Rubenstein, Lisa V; Kirchner, JoAnn E
2009-09-28
Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.
Gomis-Rüth, F X; Gómez, M; Bode, W; Huber, R; Avilés, F X
1995-01-01
The metalloexozymogen procarboxypeptidase A is mainly secreted in ruminants as a ternary complex with zymogens of two serine endoproteinases, chymotrypsinogen C and proproteinase E. The bovine complex has been crystallized, and its molecular structure analysed and refined at 2.6 A resolution to an R factor of 0.198. In this heterotrimer, the activation segment of procarboxypeptidase A essentially clamps the other two subunits, which shield the activation sites of the former from tryptic attack. In contrast, the propeptides of both serine proproteinases are freely accessible to trypsin. This arrangement explains the sequential and delayed activation of the constituent zymogens. Procarboxypeptidase A is virtually identical to the homologous monomeric porcine form. Chymotrypsinogen C displays structural features characteristic of chymotrypsins as well as elastases, except for its activation domain; similar to bovine chymotrypsinogen A, its binding site is not properly formed, while its surface-located activation segment is disordered. The proproteinase E structure is fully ordered and strikingly similar to active porcine elastase; its specificity pocket is occluded, while the activation segment is fixed to the molecular surface. This first structure of a native zymogen from the proteinase E/elastase family does not fundamentally differ from the serine proproteinases known so far. PMID:7556081
A 37-mm Ceramic Gun Nozzle Stress Analysis
2006-05-01
Report contents (recovered from the table of contents): 1. Introduction; 2. Ceramic Nozzle Structure and Materials; 3. Sequentially-Coupled and Fully-Coupled Thermal Stress FEM Analysis; 4. Ceramic Nozzle Thermal Stress Response; 5. Ceramic Nozzle Dynamic FEM; 6. Ceramic Nozzle Dynamic Responses and Discussions. The candidate ceramics and the test fixture model components are listed in Table 1 of the report.
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could become a valuable feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values that forms the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field-programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is obtained using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
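The polyhedral HSV filter above is realized in hardware, but the basic color-filtering step can be approximated in software with OpenCV. The threshold box below is an illustrative guess standing in for the calibrated polyhedron of threshold values, not the article's filter model.

    import cv2
    import numpy as np

    def skin_mask(bgr_frame):
        """Binary mask of skin-colored pixels from simple HSV thresholds."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)      # illustrative bounds
        upper = np.array([25, 180, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)              # color filtering
        return cv2.medianBlur(mask, 5)                     # crude spatial filtering

    frame = np.zeros((240, 320, 3), dtype=np.uint8)        # placeholder video frame
    mask = skin_mask(frame)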
Wagh, Mihir S; Montane, Roberto
2012-02-01
The upper GI tract and the colon are readily accessible endoscopically, but the small intestine is relatively difficult to evaluate. To demonstrate the feasibility of using suction as a means of locomotion and to assess the initial design of a suction enteroscope. Feasibility study. Animal laboratory. Various prototype suction devices designed in our laboratory were tested in swine small intestine in a force test station. For in vivo experiments in live anesthetized animals, two suction devices (1 fixed tip and 1 movable tip) were attached to the outside of the endoscope. By creating suction in the fixed tip, the endoscope was anchored while the movable tip was advanced. Suction was then applied to the extended tip to attach it to the distal bowel. Suction on the fixed tip was then released and the movable tip with suction pulled back, resulting in advancement of the endoscope. These steps were sequentially repeated. Intestinal segments were sent for pathologic assessment after testing. Force generated ranged from 0.278 to 4.74 N with 64.3 to 88 kPa vacuum pressure. A linear relationship was seen between the pull force and vacuum pressures and tip surface area. During in vivo experiments, the endoscope was advanced in 25-cm segmental increments with sequential suction-and-release maneuvers. No significant bowel trauma was seen on pathology and necropsy. The enteroscopy system requires further refinement. A novel suction enteroscope was designed and tested. Suction tip characteristics played a critical role in the functionality of this enteroscopy system. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Analyses of Response–Stimulus Sequences in Descriptive Observations
Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St. Peter; Bourret, Jason
2009-01-01
Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response–stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537
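At its core, a lag-sequential analysis estimates the conditional probability that an environmental event occurs k observation intervals after the target behavior. A minimal sketch over binary coded streams follows; the example data are hypothetical, not the study's records.

    import numpy as np

    def lag_probability(behavior, event, lag):
        """P(event at time t + lag | behavior at time t) for binary 0/1 series."""
        behavior = np.asarray(behavior, dtype=bool)
        event = np.asarray(event, dtype=bool)
        b = behavior[:-lag] if lag > 0 else behavior
        e = event[lag:] if lag > 0 else event
        n = b.sum()
        return float((b & e).sum()) / n if n else float("nan")

    # Hypothetical coded observation streams (1 = occurred in that interval)
    problem_behavior = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
    attention        = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
    probs = {k: lag_probability(problem_behavior, attention, k) for k in (1, 2, 3)}
    print(probs)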
Li, Ang; Zhang, Donghui
2016-03-14
Amphiphilic block copolypeptoids consisting of a hydrophilic poly(N-ethyl glycine) segment and a hydrophobic poly[(N-propargyl glycine)-r-(N-decyl glycine)] random copolymer segment [PNEG-b-P(NPgG-r-NDG), EPgD] have been synthesized by sequential primary amine-initiated ring-opening polymerization (ROP) of the corresponding N-alkyl N-carboxyanhydride monomers. The block copolypeptoids form micelles in water and the micellar core can be cross-linked with a disulfide-containing diazide cross-linker by copper-mediated alkyne-azide cycloaddition (CuAAC) in aqueous solution. Transmission electron microscopy (TEM) and dynamic light scattering (DLS) analysis revealed the formation of spherical micelles with uniform size for both the core-cross-linked micelles (CCLMs) and non-cross-linked micelles (NCLMs) precursors for selective block copolypeptoid polymers. The CCLMs exhibited increased dimensional stability relative to the NCLMs in DMF, a nonselective solvent for the core and corona segments. Micellar dissociation of CCLMs can be induced upon addition of a reducing agent (e.g., dithiothreitol) in dilute aqueous solutions, as verified by a combination of fluorescence spectroscopy, size exclusion chromatography (SEC), and (1)H NMR spectroscopic measurement. Doxorubicin (DOX), an anticancer drug, can be loaded into the hydrophobic core of CCLMs with a maximal 23% drug loading capacity (DLC) and 37% drug loading efficiency (DLE). In vitro DOX release from the CCLMs can be triggered by DTT (10 mM), in contrast to significantly reduced DOX release in the absence of DTT, attesting to the reductively responsive characteristic of the CCLMs. While the CCLMs exhibited minimal cytotoxicity toward HepG2 cancer cells, DOX-loaded CCLMs inhibited the proliferation of the HepG2 cancer cells in a concentration- and time-dependent manner, suggesting the controlled release of DOX from the DOX-loaded CCLMs in the cellular environment.
Superpixel-Augmented Endmember Detection for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Gilmore, Martha
2011-01-01
Superpixels are homogeneous image regions comprised of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature could contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships: a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and are empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). These kinds of image features are common in natural scenes. Analysts can substitute superpixels for image pixels during endmember analysis that leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties, and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction, and enables automated search for novel and constituent minerals in very noisy, hyperspatial images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but then expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input into sequential maximum angle convex cone (SMACC) endmember extraction.
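A minimal sketch of the superpixel preprocessing idea (oversegment, then replace each superpixel with its mean spectrum before endmember extraction) is shown below using scikit-image's Felzenszwalb segmentation on a single summary band. The random cube, the summary-band shortcut, and the scale parameters are assumptions; the innovation's hyperspectral distance metric and SMACC itself are not reproduced.

    import numpy as np
    from skimage.segmentation import felzenszwalb

    cube = np.random.rand(100, 100, 50).astype(np.float32)   # hypothetical cube: rows x cols x bands

    # Oversegment a summary band (here the band mean) into superpixels
    labels = felzenszwalb(cube.mean(axis=2), scale=50, sigma=0.8, min_size=20)

    # Replace each superpixel with its mean spectrum; these become the inputs
    # to endmember extraction in place of raw, noisier pixels
    flat_labels = labels.ravel()
    flat_cube = cube.reshape(-1, cube.shape[2])
    mean_spectra = np.vstack([flat_cube[flat_labels == s].mean(axis=0)
                              for s in range(labels.max() + 1)])
    print(mean_spectra.shape)   # (number of superpixels, number of bands)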
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Sang Hyun; Gao, Yaozong, E-mail: yzgao@cs.unc.edu; Shi, Yinghuan, E-mail: syh@nju.edu.cn
Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency and the robustness. The automatic segmentation results with the original average Dice similarity coefficient of 0.78 were improved to 0.865–0.872 after conducting 55–59 interactions by using the proposed method, where each editing procedure took less than 3 s. In addition, the proposed method obtained the most consistent editing results with respect to different user interactions, compared to other methods. Conclusions: The proposed method obtains robust editing results with few interactions for various wrong segmentation cases, by selecting the location-adaptive features and further imposing the manifold regularization. The authors expect the proposed method to largely reduce the laborious burdens of manual editing, as well as both the intra- and interobserver variability across clinicians.
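The method above combines regression-forest feature selection with manifold-regularized semisupervised learning; a much simpler stand-in for the final label-inference step is graph-based label spreading, sketched here with scikit-learn on hypothetical voxel features (confident voxels carry labels 1/0, unconfident voxels are marked -1). This illustrates generic semisupervised propagation, not the authors' algorithm.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    features = rng.normal(size=(300, 10))   # hypothetical per-voxel feature vectors
    labels = np.full(300, -1)               # -1 = unconfident / unlabeled voxel
    labels[:40] = 1                         # confident prostate voxels
    labels[40:80] = 0                       # confident background voxels

    model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2)
    model.fit(features, labels)
    predicted = model.transduction_         # labels inferred for every voxel
    print(predicted[:10])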
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
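The essential idea, performing the globally best merge first rather than merging in scan order, can be sketched serially with a priority queue. Region statistics, the dissimilarity measure, and the stopping threshold below are placeholders, and no MPP-style parallelism is shown.

    import heapq

    def best_merge_segmentation(region_means, adjacency, stop_cost=10.0):
        """Greedy 'globally best merge first' region merging (serial sketch).
        region_means: dict region_id -> (mean_value, pixel_count)
        adjacency:    iterable of (id_a, id_b) neighbor pairs
        Returns a dict mapping each region id to its final merged region id."""
        parent = {r: r for r in region_means}

        def find(r):                          # union-find with path compression
            while parent[r] != r:
                parent[r] = parent[parent[r]]
                r = parent[r]
            return r

        cost = lambda a, b: abs(region_means[a][0] - region_means[b][0])
        heap = [(cost(a, b), a, b) for a, b in adjacency]
        heapq.heapify(heap)
        while heap:
            c, a, b = heapq.heappop(heap)
            ra, rb = find(a), find(b)
            if ra == rb:
                continue                      # already merged via another pair
            current = cost(ra, rb)
            if current != c:                  # stale entry: re-queue with fresh cost
                heapq.heappush(heap, (current, ra, rb))
                continue
            if current > stop_cost:           # the globally best merge is too costly
                break
            (ma, na), (mb, nb) = region_means[ra], region_means[rb]
            region_means[ra] = ((ma * na + mb * nb) / (na + nb), na + nb)
            parent[rb] = ra
        return {r: find(r) for r in region_means}

    regions = {1: (10.0, 4), 2: (12.0, 5), 3: (60.0, 6)}
    print(best_merge_segmentation(regions, [(1, 2), (2, 3)], stop_cost=15.0))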
Biomechanical analysis of the upper thoracic spine after decompressive procedures.
Healy, Andrew T; Lubelski, Daniel; Mageswaran, Prasath; Bhowmick, Deb A; Bartsch, Adam J; Benzel, Edward C; Mroz, Thomas E
2014-06-01
Decompressive procedures such as laminectomy, facetectomy, and costotransversectomy are routinely performed for various pathologies in the thoracic spine. The thoracic spine is unique, in part, because of the sternocostovertebral articulations that provide additional strength to the region relative to the cervical and lumbar spines. During decompressive surgeries, stability is compromised at a presently unknown point. To evaluate thoracic spinal stability after common surgical decompressive procedures in thoracic spines with intact sternocostovertebral articulations. Biomechanical cadaveric study. Fresh-frozen human cadaveric spine specimens with intact rib cages, C7-L1 (n=9), were used. An industrial robot tested all spines in axial rotation (AR), lateral bending (LB), and flexion-extension (FE) by applying pure moments (±5 Nm). The specimens were first tested in their intact state and then tested after each of the following sequential surgical decompressive procedures at T4-T5 consisting of laminectomy; unilateral facetectomy; unilateral costotransversectomy, and subsequently instrumented fusion from T3-T7. We found that in all three planes of motion, the sequential decompressive procedures caused no statistically significant change in motion between T3-T7 or T1-T12 when compared with intact. In comparing between intact and instrumented specimens, our study found that instrumentation reduced global range of motion (ROM) between T1-T12 by 16.3% (p=.001), 12% (p=.002), and 18.4% (p=.0004) for AR, FE, and LB, respectively. Age showed a negative correlation with motion in FE (r = -0.78, p=.01) and AR (r=-0.7, p=.04). Thoracic spine stability was not significantly affected by sequential decompressive procedures in thoracic segments at the level of the true ribs in all three planes of motion in intact thoracic specimens. Age appeared to negatively correlate with ROM of the specimen. Our study suggests that thoracic spinal stability is maintained immediately after unilateral decompression at the level of the true ribs. These preliminary observations, however, do not depict the long-term sequelae of such procedures and warrant further investigation. Copyright © 2014 Elsevier Inc. All rights reserved.
Decroocq, Justine; Itzykson, Raphaël; Vigouroux, Stéphane; Michallet, Mauricette; Yakoub-Agha, Ibrahim; Huynh, Anne; Beckerich, Florence; Suarez, Felipe; Chevallier, Patrice; Nguyen-Quoc, Stéphanie; Ledoux, Marie-Pierre; Clement, Laurence; Hicheri, Yosr; Guillerm, Gaëlle; Cornillon, Jérôme; Contentin, Nathalie; Carre, Martin; Maillard, Natacha; Mercier, Mélanie; Mohty, Mohamad; Beguin, Yves; Bourhis, Jean-Henri; Charbonnier, Amandine; Dauriac, Charles; Bay, Jacques-Olivier; Blaise, Didier; Deconinck, Eric; Jubert, Charlotte; Raus, Nicole; Peffault de Latour, Regis; Dhedin, Nathalie
2018-03-01
Patients with acute myeloid leukemia (AML) in relapse or refractory to induction therapy have a dismal prognosis. Allogeneic hematopoietic stem cell transplantation is the only curative option. In these patients, we aimed to compare the results of a myeloablative transplant versus a sequential approach consisting of cytoreductive chemotherapy followed by a reduced-intensity conditioning regimen and prophylactic donor lymphocyte infusions. We retrospectively analyzed 99 patients aged 18-50 years, transplanted for a refractory (52%) or a relapsed AML not in remission (48%). Fifty-eight patients received a sequential approach and 41 patients a myeloablative conditioning regimen. Only 6 patients received prophylactic donor lymphocyte infusions. With a median follow-up of 48 months, 2-year overall survival was 39%, 95% confidence interval (CI) (24-53) in the myeloablative group versus 33%, 95% CI (21-45) in the sequential group (P = .39), and 2-year cumulative incidence of relapse (CIR) was 57% versus 50%, respectively (P = .99). Nonrelapse mortality was not higher in the myeloablative group (17% versus 15%, P = .44). In multivariate analysis, overall survival, CIR and nonrelapse mortality remained similar between the two groups. However, in multivariate analysis, sequential conditioning led to less acute grade II-IV graft-versus-host disease (GVHD) (HR for sequential approach = 0.37; 95% CI: 0.21-0.65; P < .001) without a significant impact on chronic GVHD (all grades and extensive). In young patients with refractory or relapsed AML, myeloablative transplant and sequential approach offer similar outcomes except for a lower incidence of acute GVHD after a sequential transplant. © 2018 Wiley Periodicals, Inc.
A comparison of sequential and spiral scanning techniques in brain CT.
Pace, Ivana; Zarb, Francis
2015-01-01
To evaluate and compare image quality and radiation dose of sequential computed tomography (CT) examinations of the brain and spiral CT examinations of the brain imaged on a GE HiSpeed NX/I Dual Slice 2CT scanner. A random sample of 40 patients referred for CT examination of the brain was selected and divided into 2 groups. Half of the patients were scanned using the sequential technique; the other half were scanned using the spiral technique. Radiation dose data, both the computed tomography dose index (CTDI) and the dose length product (DLP), were recorded on a checklist at the end of each examination. Using the European Guidelines on Quality Criteria for Computed Tomography, 4 radiologists conducted a visual grading analysis and rated the level of visibility of 6 anatomical structures considered necessary to produce images of high quality. The mean CTDIvol and DLP values were statistically significantly higher (P < .05) with the sequential scans (CTDIvol: 22.06 mGy; DLP: 304.60 mGy·cm) than with the spiral scans (CTDIvol: 14.94 mGy; DLP: 229.10 mGy·cm). The mean image quality rating scores for all criteria of the sequential scanning technique were statistically significantly higher (P < .05) in the visual grading analysis than those of the spiral scanning technique. In this local study, the sequential technique was preferred over the spiral technique for both overall image quality and differentiation between gray and white matter in brain CT scans. Other similar studies counter this finding. The radiation dose seen with the sequential CT scanning technique was significantly higher than that seen with the spiral CT scanning technique. However, image quality with the sequential technique was statistically significantly superior (P < .05).
On mining complex sequential data by means of FCA and pattern structures
NASA Astrophysics Data System (ADS)
Buzmakov, Aleksey; Egho, Elias; Jay, Nicolas; Kuznetsov, Sergei O.; Napoli, Amedeo; Raïssi, Chedy
2016-02-01
Nowadays, data sets are available in very complex and heterogeneous forms. Mining such data collections is essential to support many real-world applications ranging from healthcare to marketing. In this work, we focus on the analysis of "complex" sequential data by means of interesting sequential patterns. We approach the problem using the elegant mathematical framework of formal concept analysis and its extension based on "pattern structures". Pattern structures are used for mining complex data (such as sequences or graphs) and are based on a subsumption operation, which in our case is defined with respect to the partial order on sequences. We show how pattern structures along with projections (i.e., a data reduction of sequential structures) are able to enumerate more meaningful patterns and increase the computing efficiency of the approach. Finally, we show the applicability of the presented method for discovering and analysing interesting patient patterns from a French healthcare data set on cancer. The quantitative and qualitative results (with annotations and analysis from a physician) are reported in this use case, which is the main motivation for this work.
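For intuition, in the sequential setting one pattern is more general than another when it occurs as a (not necessarily contiguous) subsequence of it; a minimal subsumption check over sequences of symbols is sketched below. This is a simplification, since the paper's pattern structures operate on sequences of item sets together with projections; the example trajectory is hypothetical.

    def subsumes(general, specific):
        """True if `general` occurs as a (not necessarily contiguous)
        subsequence of `specific`, i.e. it is the more general pattern."""
        it = iter(specific)
        return all(symbol in it for symbol in general)

    # Hypothetical patient trajectory coded as a sequence of event labels
    trajectory = ["admission", "scan", "chemotherapy", "scan", "discharge"]
    pattern = ["admission", "chemotherapy", "discharge"]
    print(subsumes(pattern, trajectory))   # True: the pattern covers this trajectory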
On-Line Algorithms and Reverse Mathematics
NASA Astrophysics Data System (ADS)
Harris, Seth
In this thesis, we classify the reverse-mathematical strength of sequential problems. If we are given a problem P of the form ∀X(α(X) → ∃Z β(X,Z)), then the corresponding sequential problem, SeqP, asserts the existence of infinitely many solutions to P: ∀X(∀n α(Xn) → ∃Z ∀n β(Xn, Zn)). P is typically provable in RCA0 if all objects involved are finite. SeqP, however, is only guaranteed to be provable in ACA0. In this thesis we exactly characterize which sequential problems are equivalent to RCA0, WKL0, or ACA0. We say that a problem P is solvable by an on-line algorithm if P can be solved according to a two-player game, played by Alice and Bob, in which Bob has a winning strategy. Bob wins the game if Alice's sequence of plays 〈a0, ..., ak〉 and Bob's sequence of responses 〈b0, ..., bk〉 constitute a solution to P. Formally, an on-line algorithm A is a function that inputs an admissible sequence of plays 〈a0, b0, ..., aj〉 and outputs a new play bj for Bob. (This differs from the typical definition of "algorithm", though quite often a concrete set of instructions can be easily deduced from A.) We show that SeqP is provable in RCA0 precisely when P is solvable by an on-line algorithm. Schmerl proved this result specifically for the graph coloring problem; we generalize Schmerl's result to any problem that is on-line solvable. To prove our separation, we introduce a principle called Predictk(r) that is equivalent to WKL0 for standard k, r. We show that WKL0 is sufficient to prove SeqP precisely when P has a solvable closed kernel. This means that a solution exists, and each initial segment of this solution is a solution to the corresponding initial segment of the problem. (Certain bounding conditions are necessary as well.) If no such solution exists, then SeqP is equivalent to ACA0 over RCA0 + IΣ02; RCA0 alone suffices if only sequences of standard length are considered. We use different techniques from Schmerl to prove this separation, and in the process we improve some of Schmerl's results on Grundy colorings. In Chapter 4 we analyze a variety of applications, classifying their sequential forms by reverse-mathematical strength. This builds upon similar work by Dorais and Hirst and Mummert. We consider combinatorial applications such as matching problems and Dilworth's theorems, and we also consider classic algorithms such as the task scheduling and paging problems. Tables summarizing our findings can be found at the end of Chapter 4.
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
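For context, the classic sequential test is Wald's sequential probability ratio test (SPRT), in which observations accumulate a log-likelihood ratio until it crosses one of two thresholds set by the target false-alarm and miss probabilities. The sketch below is a generic Bernoulli SPRT (e.g., deciding whether a host's connection-failure rate is closer to p1 than to p0); it is illustrative only and is not the exact computational method developed in the paper.

    import math

    def bernoulli_sprt(observations, p0=0.1, p1=0.4, alpha=0.01, beta=0.01):
        """Wald SPRT for H0: p = p0 versus H1: p = p1 over a 0/1 stream.
        alpha = target false-alarm probability, beta = target miss probability."""
        upper = math.log((1 - beta) / alpha)     # cross above: accept H1 (alarm)
        lower = math.log(beta / (1 - alpha))     # cross below: accept H0
        llr, n = 0.0, 0
        for n, x in enumerate(observations, start=1):
            llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", n

    # Hypothetical stream of failed-connection indicators from one source host
    print(bernoulli_sprt([1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))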
Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M
2014-06-19
An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
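A minimal sketch of a segmented regression model for a single interrupted time series, fit by ordinary least squares with statsmodels, is shown below. The variable names and monthly data are hypothetical, and a published analysis would additionally need to consider autocorrelation and seasonality.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical monthly quality-of-care scores; intervention after month 24
    n, change_point = 48, 24
    time = np.arange(1, n + 1)
    post = (time > change_point).astype(int)                  # level-change indicator
    time_after = np.where(post == 1, time - change_point, 0)  # slope-change term
    rng = np.random.default_rng(1)
    score = 60 + 0.2 * time + 5 * post + 0.5 * time_after + rng.normal(0, 2, n)

    df = pd.DataFrame({"score": score, "time": time,
                       "post": post, "time_after": time_after})
    # score ~ baseline trend + change in level + change in slope
    fit = smf.ols("score ~ time + post + time_after", data=df).fit()
    print(fit.params)   # 'post' = level change, 'time_after' = slope change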
Sadat, Umar; Howarth, Simon P S; Usman, Ammara; Tang, Tjun Y; Graves, Martin J; Gillard, Jonathan H
2013-11-01
Inflammation within atheromatous plaques is a known risk factor for plaque vulnerability. This can be detected in vivo on high-resolution magnetic resonance imaging (MRI) using ultrasmall superparamagnetic iron oxide (USPIO) contrast medium. The purpose of this study was to assess the feasibility of performing sequential USPIO studies over a 1-year period. Ten patients with moderate asymptomatic carotid stenosis underwent carotid MR imaging both before and 36 hours after USPIO infusion at 0, 6, and 12 months. Images were manually segmented into quadrants, and the signal change per quadrant was calculated at these time points. A mixed repeated measures statistical model was used to determine signal change attributable to USPIO uptake over time. All patients remained asymptomatic during the study. The mixed model revealed no statistical difference in USPIO uptake between the 3 time points. Intraclass correlation coefficients revealed a good agreement of quadrant signal pre-USPIO infusion between 0 and 6 months (0.70) and 0 and 12 months (0.70). Good agreement of quadrant signal after USPIO infusion was shown between 0 and 6 months (0.68) and moderate agreement was shown between 0 and 12 months (0.33). USPIO-enhanced sequential MRI of atheromatous carotid plaques is clinically feasible. This may have important implications for future longitudinal studies involving pharmacologic intervention in large patient cohorts. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
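As a generic illustration of the Bayesian flavor of techniques (3) and (4), the sketch below updates a Beta prior with labeled sample counts from successive batches and reports the posterior mean as the crop-proportion estimate. The prior, the counts, and the batching are hypothetical; the processor's actual allocation rules and error calculations are not reproduced.

    def beta_binomial_proportion(batches, a=1.0, b=1.0):
        """Sequentially update a Beta(a, b) prior with (crop, non_crop) pixel
        counts from successive sample batches; return the posterior mean."""
        for crop, non_crop in batches:
            a += crop
            b += non_crop
        return a / (a + b)

    # Hypothetical sequential batches of labeled sample pixels: (crop, non-crop)
    batches = [(30, 70), (45, 55), (38, 62)]
    print(beta_binomial_proportion(batches))   # about 0.38 with a uniform prior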
Neural Correlates of Morphology Acquisition through a Statistical Learning Paradigm
Sandoval, Michelle; Patterson, Dianne; Dai, Huanping; Vance, Christopher J.; Plante, Elena
2017-01-01
The neural basis of statistical learning as it occurs over time was explored with stimuli drawn from a natural language (Russian nouns). The input reflected the “rules” for marking categories of gendered nouns, without making participants explicitly aware of the nature of what they were to learn. Participants were scanned while listening to a series of gender-marked nouns during four sequential scans, and were tested for their learning immediately after each scan. Although participants were not told the nature of the learning task, they exhibited learning after their initial exposure to the stimuli. Independent component analysis of the brain data revealed five task-related sub-networks. Unlike prior statistical learning studies of word segmentation, this morphological learning task robustly activated the inferior frontal gyrus during the learning period. This region was represented in multiple independent components, suggesting it functions as a network hub for this type of learning. Moreover, the results suggest that subnetworks activated by statistical learning are driven by the nature of the input, rather than reflecting a general statistical learning system. PMID:28798703
Molday, Robert S.; Zhong, Ming; Quazi, Faraz
2009-01-01
ABCA4 is a member of the ABCA subfamily of ATP binding cassette (ABC) transporters that is expressed in rod and cone photoreceptors of the vertebrate retina. ABCA4, also known as the Rim protein and ABCR, is a large 2273 amino acid glycoprotein organized as two tandem halves, each containing a single membrane spanning segment followed sequentially by a large exocytoplasmic domain, a multispanning membrane domain and a nucleotide binding domain. Over 500 mutations in the gene encoding ABCA4 are associated with a spectrum of related autosomal recessive retinal degenerative diseases including Stargardt macular degeneration, cone-rod dystrophy and a subset of retinitis pigmentosa. Biochemical studies on the purified ABCA4 together with analysis of abca4 knockout mice and patients with Stargardt disease have implicated ABCA4 as a retinylidene-phosphatidylethanolamine transporter that facilitates the removal of potentially reactive retinal derivatives from photoreceptors following photoexcitation. Knowledge of the genetic and molecular basis for ABCA4 related retinal degenerative diseases is being used to develop rational therapeutic treatments for this set of disorders. PMID:19230850
Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1992-01-01
TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.
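The batch filter solves one least-squares problem over the whole tracking arc, whereas the sequential filter updates the state estimate after each measurement. A scalar Kalman-filter sketch of that sequential update is given below with toy state and noise values; it is not the orbit-determination dynamics or measurement model.

    import numpy as np

    def sequential_estimate(measurements, q=1e-4, r=0.5):
        """Scalar Kalman filter: nearly constant state, direct noisy measurements.
        q = process noise variance, r = measurement noise variance."""
        x, p = measurements[0], 1.0          # initial state estimate and variance
        estimates = [x]
        for z in measurements[1:]:
            p = p + q                        # predict step (state model: random walk)
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # update with the new measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(2)
    z = 10.0 + rng.normal(0, 0.7, size=50)   # hypothetical noisy observations
    print(sequential_estimate(z)[-1])        # settles near the true value of 10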
Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.
Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M
2011-02-01
To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping between two teams) is cost-effective compared to sequentially performed surgery (ablation followed by reconstruction). Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair-matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost-effective when compared to sequential surgery.
NASA Astrophysics Data System (ADS)
Philibosian, B.; Meltzner, A. J.; Sieh, K.
2017-12-01
Understanding earthquake cycle processes is key to both seismic hazard and fault mechanics. A concept that has come into focus recently is that rupture segmentation and cyclicity can be complex, and that simple models of periodically repeating similar earthquakes are inadequate. The term "supercycle" has been used to describe repeating longer periods of strain accumulation that involve multiple fault ruptures. However, this term has become broadly applied, lumping together several distinct phenomena that likely have disparate underlying causes. Earthquake recurrence patterns have often been described as "clustered," but this term is also imprecise. It is necessary to develop a terminology framework that consistently and meaningfully describes all types of behavior that are observed. We divide earthquake cycle patterns into four major classes, each having different implications for seismic hazard and fault mechanics: 1) quasi-periodic similar ruptures, 2) temporally clustered similar ruptures, 3) temporally clustered complementary ruptures, also known as rupture cascades, in which neighboring fault patches fail sequentially, and 4) superimposed cycles in which neighboring fault patches have cycles with different recurrence intervals, but may occasionally rupture together. Rupture segmentation is classified as persistent, frequent, or transient depending on how reliably ruptures terminate in a given area. We discuss the paleoseismic and historical evidence currently available for each of these types of behavior on subduction zone megathrust faults worldwide. Due to the unique level of paleoseismic and paleogeodetic detail provided by the coral microatoll technique, the Sumatran Sunda megathrust provides one of the most complete records over multiple seismic cycles. Most subduction zones with sufficient data exhibit examples of persistent and frequent segmentation, with cycle patterns 1, 3, and 4 on different segments. Pattern 2 is generally confined to overlap zones between segments. This catalog of seismic cycle observations provides a basis for exploring and modeling root causes of rupture segmentation and cycle behavior. Researchers should expect to discover similar behavior styles on other megathrust faults and perhaps major crustal faults around the world.
USDA-ARS?s Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
Huang, Alex S.; Saraswathy, Sindhu; Dastiridou, Anna; Begian, Alan; Mohindroo, Chirayu; Tan, James C. H.; Francis, Brian A.; Hinton, David R.; Weinreb, Robert N.
2016-01-01
Purpose: To assess the ability of trabecular micro-bypass stents to improve aqueous humor outflow (AHO) in regions initially devoid of AHO as assessed by aqueous angiography. Methods: Enucleated human eyes (14 total from 7 males and 3 females [ages 52–84]) were obtained from an eye bank within 48 hours of death. Eyes were oriented by inferior oblique insertion, and aqueous angiography was performed with indocyanine green (ICG; 0.4%) or fluorescein (2.5%) at 10 mm Hg. With an angiographer, infrared and fluorescent images were acquired. Concurrent anterior segment optical coherence tomography (OCT) was performed, and fixable fluorescent dextrans were introduced into the eye for histologic analysis of angiographically positive and negative areas. Experimentally, some eyes (n = 11) first received ICG aqueous angiography to determine angiographic patterns. These eyes then underwent trabecular micro-bypass sham or stent placement in regions initially devoid of angiographic signal. This was followed by fluorescein aqueous angiography to query the effects. Results: Aqueous angiography in human eyes yielded high-quality images with segmental patterns. Distally, angiographically positive but not negative areas demonstrated intrascleral lumens on OCT images. Aqueous angiography with fluorescent dextrans led to their trapping in AHO pathways. Trabecular bypass but not sham in regions initially devoid of ICG aqueous angiography led to increased aqueous angiography as assessed by fluorescein (P = 0.043). Conclusions: Using sequential aqueous angiography in an enucleated human eye model system, regions initially without angiographic flow or signal could be recruited for AHO using a trabecular bypass stent. PMID:27588614
Evaluation Using Sequential Trials Methods.
ERIC Educational Resources Information Center
Cohen, Mark E.; Ralls, Stephen A.
1986-01-01
Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
[Scimitar syndrome. Anatomo-embryological correlation].
Muñoz-Castellanos, Luis; Kuri-Nivon, Magdalena
2016-01-01
To describe morphologically a thoracoabdominal visceral block from a case of scimitar syndrome. We propose a pathogenetic theory which explains the development of the pulmonary venous connection in this syndrome. The anatomic specimen was described using the segmental sequential system. The situs was solitus; the connections between the cardiac segments and the associated anomalies were determined. The anatomy of both lungs, including the venous pulmonary connection, was described. A pathogenetic hypothesis was formulated, which explains the pulmonary venous connection through a correlation between the pathology of this syndrome and the normal development of the pulmonary veins. The situs was solitus and the connections of the cardiac chambers were normal; there were hypoplasia and dysplasia of the right lung with sequestration of the inferior lobe; the right pulmonary veins were connected to a curved collector which drained into the suprahepatic segment of the inferior vena cava; the left pulmonary veins opened into the left atrium. The sequestered inferior lobe of the right lung received its blood supply through a collateral aortopulmonary vessel. There was an atrial septal defect. The pathogenetic hypothesis proposes that the pulmonary venous connection in this syndrome represents the persistence of Streeter's horizon XIV (28-30 days of development), the period in which the sinus of the pulmonary veins has a double connection, with the left atrium and with a primitive collector draining into the right vitelline vein, which forms the suprahepatic segment of the inferior vena cava. Copyright © 2015 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.
Inspiration from nature: dynamic modelling of the musculoskeletal structure of the seahorse tail.
Praet, Tomas; Adriaens, Dominique; Van Cauter, Sofie; Masschaele, Bert; De Beule, Matthieu; Verhegghe, Benedict
2012-10-01
Technological advances are often inspired by nature, considering that engineering is frequently faced by the same challenges as organisms in nature. One such interesting challenge is creating a structure that is at the same time stiff in a certain direction, yet flexible in another. The seahorse tail combines both radial stiffness and bending flexibility in a particularly elegant way: even though the tail is covered in a protective armour, it still shows sufficient flexibility to fully function as a prehensile organ. We therefore study the complex mechanics and dynamics of the musculoskeletal system of the seahorse tail from an engineering point of view. The seahorse tail derives its combination of flexibility and resilience from a chain of articulating skeletal segments. A versatile dynamic model of those segments was constructed, on the basis of automatic recognition of joint positions and muscle attachments. Both muscle structures that are thought to be responsible for ventral and ventral-lateral tail bending, namely the median ventral muscles and the hypaxial myomere muscles, were included in the model. Simulations on the model consist mainly of dynamic multi-body simulations. The results show that the sequential structure of uniformly shaped bony segments can remain flexible because of gliding joints that connect the corners of the segments. Radial stiffness on the other hand is obtained through the support that the central vertebra provides to the tail plating. Such insights could help in designing biomedical instruments that specifically require both high bending flexibility and radial stiffness (e.g. flexible stents and steerable catheters). Copyright © 2012 John Wiley & Sons, Ltd.
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
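For reference, the simplest signal-detection summary used in such lineup studies contrasts the correct-identification (hit) rate with the false-identification rate through d-prime. A minimal sketch follows; the rates are assumed values for illustration and are not a substitute for the ROC analysis reported above.

    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate):
        """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Assumed illustrative rates, not the study's data
    print(d_prime(0.62, 0.18))   # one lineup format
    print(d_prime(0.48, 0.10))   # another lineup format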
Multiplexed electrokinetic sample fractionation, preconcentration and elution for proteomics.
Hua, Yujuan; Jemere, Abebaw B; Dragoljic, Jelena; Harrison, D Jed
2013-07-07
Both 6 and 8-channel integrated microfluidic sample pretreatment devices capable of performing "in space" sample fractionation, collection, preconcentration and elution of captured analytes via sheath flow assisted electrokinetic pumping are described. Coatings and monolithic polymer beds were developed for the glass devices to provide cationic surface charge and anodal electroosmotic flow for delivery to an electrospray emitter tip. A mixed cationic ([2-(methacryloyloxy)ethyl] trimethylammonium chloride) (META) and hydrophobic butyl methacrylate-based monolithic porous polymer, photopolymerized in the 6- or 8-fractionation channels, was used to capture and preconcentrate samples. A 0.45 wt% META loaded bed generated comparable anodic electroosmotic flow to the cationic polymer PolyE-323 coated channel segments in the device. The balanced electroosmotic flow allowed stable electrokinetic sheath flow to prevent cross contamination of separated protein fractions, while reducing protein/peptide adsorption on the channel walls. Sequential elution of analytes trapped in the SPE beds revealed that the monolithic columns could be efficiently used to provide sheath flow during elution of analytes, as demonstrated for neutral carboxy SNARF (residual signal, 0.08% RSD, n = 40) and charged fluorescein (residual signal, 2.5% n = 40). Elution from monolithic columns showed reproducible performance with peak area reproducibility of ~8% (n = 6 columns) in a single sequential elution and the run-to-run reproducibility was 2.4-6.7% RSD (n = 4) for elution from the same bed. The demonstrated ability of this device design and operation to elute from multiple fractionation beds into a single exit channel for sample analysis by fluorescence or electrospray mass spectrometry is a crucial component of an integrated fractionation and assay system for proteomics.
Yoshiga, Yasuhiro; Shimizu, Akihiko; Ueyama, Takeshi; Ono, Makoto; Fukuda, Masakazu; Fumimoto, Tomoko; Ishiguchi, Hironori; Omuro, Takuya; Kobayashi, Shigeki; Yano, Masafumi
2018-08-01
An effective catheter ablation strategy, beyond pulmonary vein isolation (PVI), for persistent atrial fibrillation (AF) is necessary. Pulmonary vein (PV)-reconduction also causes recurrent atrial tachyarrhythmias. The effect of PVI and the additional effect of superior vena cava (SVC) isolation (SVCI) were strictly evaluated. Seventy consecutive patients with persistent AF who underwent a strict sequential ablation strategy targeting the PVs and SVC were included in this study. The initial ablation strategy was a circumferential PVI. A segmental SVCI was applied only as a repeat procedure when patients demonstrated no PV-reconduction. After the initial procedure, persistent AF was suppressed in 39 of 70 (55.7%) patients during a median follow-up of 32 months. After multiple procedures, persistent AF was suppressed in 46 (65.7%) and 52 (74.3%) patients after receiving the PVI alone and PVI plus SVCI strategies, respectively. In 6 of 15 (40.0%) patients with persistent AF resistant to PVI, persistent AF was suppressed. The persistent AF duration independently predicted persistent AF recurrences after multiple PVI alone procedures [HR: 1.012 (95% confidence interval: 1.006-1.018); p<0.001] and PVI plus SVCI strategies [HR: 1.018 (95% confidence interval: 1.011-1.025); p<0.001]. A receiver-operating-characteristic analysis for recurrent persistent AF indicated an optimal cut-off value of 20 and 32 months for the persistent AF duration using the PVI alone and PVI plus SVCI strategies, respectively. The outcomes of the PVI plus SVCI strategy were favorable for patients with shorter persistent AF durations. The initial SVCI had the additional effect of maintaining sinus rhythm in some patients with persistent AF resistant to PVI. Copyright © 2018 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
PySeqLab: an open source Python package for sequence labeling and segmentation.
Allam, Ahmed; Krauthammer, Michael
2017-11-01
Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models, from (i) first-order to higher-order linear-chain CRFs, and from (ii) first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
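Decoding in linear-chain CRFs reduces to Viterbi dynamic programming over per-position label scores plus label-transition scores. The sketch below is a generic NumPy Viterbi decoder over such score matrices; it is illustrative only and does not use PySeqLab's API.

    import numpy as np

    def viterbi(emission_scores, transition_scores):
        """emission_scores: (T, L) score of each label at each position.
        transition_scores: (L, L) score of moving from label i to label j.
        Returns the highest-scoring label sequence as a list of label indices."""
        T, L = emission_scores.shape
        score = emission_scores[0].copy()
        backptr = np.zeros((T, L), dtype=int)
        for t in range(1, T):
            total = score[:, None] + transition_scores + emission_scores[t]
            backptr[t] = total.argmax(axis=0)
            score = total.max(axis=0)
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t][path[-1]]))
        return path[::-1]

    rng = np.random.default_rng(0)
    labels = viterbi(rng.normal(size=(6, 3)), rng.normal(size=(3, 3)))
    print(labels)   # one label index per token of the 6-token sequence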
Miller, Victoria A; Jawad, Abbas F
2018-05-17
To assess developmental trajectories of decision-making involvement (DMI), defined as the ways in which parents and children engage each other in decision-making about illness management, in youth with type 1 diabetes (T1D) and examine the effects of DMI on levels of and changes in adherence with age. Participants included 117 youth with T1D, enrolled at ages 8-16 years and assessed five times over 2 years. The cohort sequential design allowed for the approximation of the longitudinal curve from age 8 to 19 from overlapping cohort segments. Children and parents completed the Decision-Making Involvement Scale, which yields subscales for different aspects of DMI, and a self-report adherence questionnaire. Mixed-effects growth curve modeling was used for analysis, with longitudinal measures nested within participant and participants nested within cohort. Most aspects of DMI (Parent Express, Parent Seek, Child Express, and Joint) increased with child age; scores on some child report subscales (Parent Express, Child Seek, and Joint) decreased after age 12-14 years. After accounting for age, Child Seek, Child Express, and Joint were associated with overall higher levels of adherence in both child (estimates = 0.08-0.13, p < .001) and parent (estimates = 0.07- 0.13, p < .01) report models, but they did not predict changes in adherence with age. These data suggest that helping children to be more proactive in T1D discussions, by encouraging them to express their opinions, share information, and solicit guidance from parents, is a potential target for interventions to enhance effective self-management.
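A minimal sketch of the kind of mixed-effects growth curve described above, assuming the statsmodels package and a hypothetical long-format file dmi_long.csv with columns id, age, adherence, and joint_dmi; the nesting of participants within cohort is omitted for brevity.

```python
# Mixed-effects growth curve: random intercept and random age slope per child.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dmi_long.csv")  # hypothetical file: one row per child per assessment
model = smf.mixedlm("adherence ~ age + joint_dmi",   # fixed effects
                    data=df,
                    groups=df["id"],                  # repeated measures nested in child
                    re_formula="~age")                # random intercept + random age slope
result = model.fit()
print(result.summary())
```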
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts due to the unstructured, dynamic, and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns when applied to huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed with respect to scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), is used to find frequent sequential patterns in spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows patterns from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are needed compared with existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences, and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns using symbolic notations. Using this algorithm, cyclic models can be mined in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically, in terms of character, series, and section approaches, respectively. Assessing the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and actual datasets is a genuinely open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental outcomes illustrate that the algorithms outperform the ECLAT, STNR, and MAFIA approaches in efficiency and scalability.
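The proposed EFPMA and ETMA algorithms are not specified in enough detail here to implement, but the PrefixSpan baseline they are compared against can be sketched compactly. The following is a minimal prefix-projection miner for frequent sequential patterns over single-item events; the toy database and support threshold are made up.

```python
def prefixspan(sequences, min_support):
    """Frequent sequential patterns (single-item elements) via prefix projection."""
    results = []

    def mine(prefix, projected):
        # count the items that can extend the current prefix (one count per sequence)
        counts = {}
        for seq in projected:
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, support in counts.items():
            if support < min_support:
                continue
            pattern = prefix + [item]
            results.append((pattern, support))
            # project: keep the suffix after the first occurrence of `item`
            new_projected = [seq[seq.index(item) + 1:] for seq in projected if item in seq]
            mine(pattern, new_projected)

    mine([], [list(s) for s in sequences])
    return results

db = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "b", "c"], ["b", "c"]]
for pattern, support in sorted(prefixspan(db, min_support=3), key=lambda ps: (len(ps[0]), ps[0])):
    print(pattern, support)
```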
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.
2006-01-01
The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful in analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.
The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System
NASA Technical Reports Server (NTRS)
Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim
2008-01-01
Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
Structural analysis of vibroacoustical processes
NASA Technical Reports Server (NTRS)
Gromov, A. P.; Myasnikov, L. L.; Myasnikova, Y. N.; Finagin, B. A.
1973-01-01
The method of automatic identification of acoustical signals by means of segmentation was used to investigate noises and vibrations in machines and mechanisms for cybernetic diagnostics. The structural analysis consists of presenting a noise or vibroacoustical signal as a sequence of segments, determined by time quantization, in which each segment is characterized by specific spectral characteristics. The structural spectrum is plotted as a histogram of the segments, that is, as the probability density of segment occurrence as a function of segment type. It is assumed that the conditions of ergodic processes are maintained.
von Helversen, Bettina; Mata, Rui
2012-12-01
We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved
Analysis of Optimal Sequential State Discrimination for Linearly Independent Pure Quantum States.
Namkung, Min; Kwon, Younghun
2018-04-25
Recently, J. A. Bergou et al. proposed sequential state discrimination as a new quantum state discrimination scheme. In the scheme, by the successful sequential discrimination of a qubit state, receivers Bob and Charlie can share the information of the qubit prepared by a sender Alice. A merit of the scheme is that a quantum channel is established between Bob and Charlie, but a classical communication is not allowed. In this report, we present a method for extending the original sequential state discrimination of two qubit states to a scheme of N linearly independent pure quantum states. Specifically, we obtain the conditions for the sequential state discrimination of N = 3 pure quantum states. We can analytically provide conditions when there is a special symmetry among N = 3 linearly independent pure quantum states. Additionally, we show that the scenario proposed in this study can be applied to quantum key distribution. Furthermore, we show that the sequential state discrimination of three qutrit states performs better than the strategy of probabilistic quantum cloning.
Wang, Nelson; Qian, Pierre; Kumar, Shejil; Yan, Tristan D; Phan, Kevin
2016-04-15
There have been a myriad of studies investigating the effectiveness of N-acetylcysteine (NAC) in the prevention of contrast-induced nephropathy (CIN) in patients undergoing coronary angiography (CAG) with or without percutaneous coronary intervention (PCI). However, a consensus on the effectiveness of NAC pretreatment has yet to be reached owing to vastly mixed results across the literature. The aim of this study was to conduct a meta-analysis and trial sequential analysis to determine the effects of pre-operative NAC in lowering the incidence of CIN in patients undergoing CAG and/or PCI. A systematic literature search was performed to include all randomized controlled trials (RCTs) comparing NAC versus control as pretreatment for CAG and/or PCI. A traditional meta-analysis with relative risk (RR) and several subgroup analyses were conducted, together with trial sequential analysis and meta-regression analysis. 43 RCTs met our inclusion criteria, giving a total of 3277 patients in both control and treatment arms. There was a significant reduction in the risk of CIN in the NAC-treated group compared to control (OR 0.666; 95% CI, 0.532-0.834; I2=40.11%; p=0.004). Trial sequential analysis, using a relative risk reduction threshold of 15%, indicates that the evidence is firm. The results of the present paper support the use of NAC in the prevention of CIN in patients undergoing CAG±PCI. Future studies should focus on the benefits of NAC amongst subgroups of high-risk patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
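The pooling step underlying such a meta-analysis can be sketched as inverse-variance averaging of log odds ratios; the trial sequential analysis boundaries are beyond this sketch, and the 2x2 counts below are invented for illustration.

```python
import numpy as np

# each trial: (events_treated, n_treated, events_control, n_control) -- made-up counts
trials = [(10, 120, 18, 118), (7, 80, 12, 85), (15, 200, 22, 195)]

log_or, weights = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                       # non-events in each arm
    log_or.append(np.log((a * d) / (b * c)))    # log odds ratio
    weights.append(1 / (1/a + 1/b + 1/c + 1/d)) # inverse Woolf variance of the log OR

pooled = np.average(log_or, weights=weights)    # fixed-effect (inverse-variance) pooling
se = np.sqrt(1 / np.sum(weights))
print(f"pooled OR = {np.exp(pooled):.3f}, "
      f"95% CI {np.exp(pooled - 1.96 * se):.3f}-{np.exp(pooled + 1.96 * se):.3f}")
```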
Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-01-01
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it, an improved and generalized version of Bayesian Blocks [Scargle 1998], that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by [Arias-Castro, Donoho and Huo 2003]. In the spirit of Reproducible Research [Donoho et al. (2008)] all of the code and data necessary to reproduce all of the figures in this paper are included as auxiliary material.
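A widely available implementation of this algorithm is the bayesian_blocks function in astropy; a minimal usage sketch on simulated event data, assuming a recent astropy and made-up arrival times, looks like this:

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(1)
# simulated photon arrival times: constant background plus a brief flare near t = 5
events = np.sort(np.concatenate([rng.uniform(0, 10, 200),
                                 rng.normal(5.0, 0.1, 80)]))

# optimal piecewise-constant segmentation of the event rate ('events' fitness)
edges = bayesian_blocks(events, fitness='events', p0=0.01)
print("change points:", edges)
```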
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
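A toy simulation can make sequential decision bias concrete: whether a further study is run depends on the current pooled estimate, so the final pooled estimates no longer average to the true effect. The effect size, standard error, and decision rule below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect, se, n_reps = 0.2, 0.15, 20000
final_estimates = []

for _ in range(n_reps):
    estimates = [rng.normal(true_effect, se)]      # the first study always runs
    for _ in range(4):                             # up to four follow-up studies
        pooled = np.mean(estimates)
        # hypothetical rule: another study is only commissioned while the
        # accumulating evidence still looks 'promising'
        if pooled < 0.1:
            break
        estimates.append(rng.normal(true_effect, se))
    final_estimates.append(np.mean(estimates))

# the selective continuation makes the average of the final pooled estimates
# differ from the true effect
print(f"true effect {true_effect}, mean pooled estimate {np.mean(final_estimates):.3f}")
```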
2009-01-01
Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems. PMID:19785754
2006-10-01
... lead to false positive segmental hair analysis results. Due to the increased risk of false positives associated with segmental hair analysis ... to 200 mg of hair (to allow confirmation testing). The segments are typically washed to remove external contaminants and the chemicals in the hair ... further confirmation. The method overcomes the false positives associated with traditional segmental hair analysis. By measuring the ...
Mathematical Problem Solving through Sequential Process Analysis
ERIC Educational Resources Information Center
Codina, A.; Cañadas, M. C.; Castro, E.
2015-01-01
Introduction: The macroscopic perspective is one of the frameworks for research on problem solving in mathematics education. Coming from this perspective, our study addresses the stages of thought in mathematical problem solving, offering an innovative approach because we apply sequential relations and global interrelations between the different…
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
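The general idea, sampling the uncertain parameters, re-solving the sequential decision problem for each draw, and tabulating how often each policy comes out optimal (the basis of a policy acceptability summary), can be sketched on a deliberately tiny MDP. Everything below (states, actions, rewards, the Beta prior) is hypothetical and far simpler than a realistic decision-analytic model.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_samples = 0.95, 2000
counts = {}

def value_iteration(P, R, gamma, tol=1e-8):
    """P[a, s, s'] transition probabilities, R[a, s] rewards; returns the greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V              # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return tuple(Q.argmax(axis=0)) # policy: best action per state
        V = V_new

for _ in range(n_samples):
    # hypothetical uncertain parameter: response probability of treatment
    p_resp = rng.beta(20, 10)
    P = np.array([  # action 0 = "treat", action 1 = "wait"; states 0 = healthy, 1 = sick
        [[p_resp, 1 - p_resp], [0.10, 0.90]],
        [[0.60,   0.40      ], [0.05, 0.95]],
    ])
    R = np.array([[1.0, -0.5],   # treat: reward in healthy state, cost in sick state
                  [0.8, -1.0]])  # wait
    policy = value_iteration(P, R, gamma)
    counts[policy] = counts.get(policy, 0) + 1

for policy, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"policy {policy}: optimal in {100 * n / n_samples:.1f}% of samples")
```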
Multifunctional 3D printing of heterogeneous hydrogel structures
NASA Astrophysics Data System (ADS)
Nadernezhad, Ali; Khani, Navid; Skvortsov, Gözde Akdeniz; Toprakhisar, Burak; Bakirci, Ezgi; Menceloglu, Yusuf; Unal, Serkan; Koc, Bahattin
2016-09-01
Multimaterial additive manufacturing or three-dimensional (3D) printing of hydrogel structures provides the opportunity to engineer geometrically dependent functionalities. However, current fabrication methods are mostly limited to one type of material or only provide one type of functionality. In this paper, we report a novel method of multimaterial deposition of hydrogel structures based on an aspiration-on-demand protocol, in which the constitutive multimaterial segments of extruded filaments were first assembled in liquid state by sequential aspiration of inks into a glass capillary, followed by in situ gel formation. We printed different patterned objects with varying chemical, electrical, mechanical, and biological properties by tuning process and material related parameters, to demonstrate the abilities of this method in producing heterogeneous and multi-functional hydrogel structures. Our results show the potential of proposed method in producing heterogeneous objects with spatially controlled functionalities while preserving structural integrity at the switching interface between different segments. We anticipate that this method would introduce new opportunities in multimaterial additive manufacturing of hydrogels for diverse applications such as biosensors, flexible electronics, tissue engineering and organ printing.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
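A generic damped Newton iteration of the kind referred to above can be sketched as follows; the residual and Jacobian are simple stand-ins, not the rotorcraft trim equations, and the damping parameter here is fixed rather than optimally selected.

```python
import numpy as np

def damped_newton(residual, jacobian, x0, damping=0.5, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 with the damped update x <- x - damping * J(x)^{-1} r(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - damping * np.linalg.solve(jacobian(x), r)
    return x

# stand-in residual with a mild nonlinearity (not the trim equations)
f = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**3])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -3 * x[1]**2]])
print(damped_newton(f, J, x0=[1.0, 1.0], damping=0.7))
```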
2012-01-01
Background: This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method: First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we performed 3,461 surveys of outpatient services users. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results: Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions: With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. Their knowledge and analysis might support an effort to build an effective population-based medicine approach. PMID:23256543
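A rough sketch of the two-step profiling described above (factor analysis with varimax rotation followed by cluster analysis) using scikit-learn; it assumes a recent scikit-learn release in which FactorAnalysis accepts rotation='varimax', and the survey matrix is random stand-in data rather than the study's questionnaire.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(500, 20)).astype(float)  # 500 respondents, 20 Likert items (made up)

# extract rotated factor scores, then cluster respondents on those scores
scores = FactorAnalysis(n_components=4, rotation="varimax").fit_transform(
    StandardScaler().fit_transform(X))
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

for k in range(4):
    members = scores[segments == k]
    print(f"segment {k}: n={len(members)}, mean factor scores {members.mean(axis=0).round(2)}")
```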
Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai
2015-06-30
Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, were normal peripheral blood slides under light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses white blood cell's morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combined thresholding, morphological operation and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets. Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to the ones obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with dice similarity of 98.9 and 91.6% for average segmented nucleus and cell regions, respectively. Furthermore, the overall correction rate in the classification phase is about 98 and 94% for linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The results of the calibrated segmentation process on both datasets are fast, robust, efficient and coherent. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results in the linear classifier.
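The feature-selection and classification stage described above can be sketched with scikit-learn's greedy sequential forward selector and the two classifier families mentioned (linear and naive Bayes). The feature matrix below is synthetic; in the actual pipeline it would hold the morphological features extracted from the segmented nucleus and cytoplasm regions. Assumes scikit-learn >= 0.24.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# synthetic stand-in: 600 cells, 30 candidate features, 5 white blood cell classes
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# greedy sequential forward selection of 8 prominent features
sfs = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                n_features_to_select=8, direction="forward", cv=5)
X_sel = sfs.fit_transform(X, y)

for name, clf in [("linear (LDA)", LinearDiscriminantAnalysis()),
                  ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.3f}")
```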
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data, such as fracture phenomena that can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
Surface characterization of cottonseed meal products by SEM, SEM-EDS, XRD and XPS analysis
USDA-ARS?s Scientific Manuscript database
The utilization of cottonseed meal as a valuable industrial feedstock needs to be exploited. We have recently produced water-washed cottonseed meal, total cottonseed protein, sequentially extracted water- and alkali-soluble proteins, and two residues after the total and sequential protein extraction...
Bedroom Rape: Sequences of Sexual Behavior in Stranger Assaults
ERIC Educational Resources Information Center
Fossi, Julia J.; Clarke, David D.; Lawrence, Claire
2005-01-01
This article examines the sequential, temporal, and interactional aspects of sexual assaults using sequential analysis. Fourteen statements taken from victims of bedroom-based assaults were analyzed to provide a comprehensive account of the behavioral patterns of individuals in sexually charged conflict situations. The cases were found to vary in…
ERIC Educational Resources Information Center
Lay, Robert S.
The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new, chi-square based procedure (CHAID), is illustrated. A comparison is presented of an earlier, binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…
Simultaneous versus sequential penetrating keratoplasty and cataract surgery.
Hayashi, Ken; Hayashi, Hideyuki
2006-10-01
To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. No significant difference was also found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.
Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex
Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire
2015-01-01
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
Sequential ¹H NMR assignments and secondary structure of hen egg white lysozyme in solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redfield, C.; Dobson, C.M.
Assignments of ¹H NMR resonances of 121 of the 129 residues of hen egg white lysozyme have been obtained by sequence-specific methods. Spin systems were identified with phase-sensitive two-dimensional (2-D) correlated spectroscopy and single and double relayed coherence transfer spectroscopy. For key types of amino acid residues, particularly alanine, threonine, valine, and glycine, complete spin systems were identified. For other residues a less complete definition of the spin system was found to be adequate for the purpose of sequential assignment. Sequence-specific assignments were achieved by phase-sensitive 2-D nuclear Overhauser enhancement spectroscopy (NOESY). Exploitation of the wide range of hydrogen exchange rates found in lysozyme was a useful approach to overcoming the problem of spectral overlap. The sequential assignment was built up from 21 peptide segments ranging in length from 2 to 13 residues. The NOESY spectra were also used to provide information about the secondary structure of the protein in solution. Three helical regions and two regions of β-sheet were identified from the NOESY data; these regions are identical with those found in the X-ray structure of hen lysozyme. Slowly exchanging amides are generally correlated with hydrogen bonding identified in the X-ray structure; a number of exceptions to this general trend were, however, found. The results presented in this paper indicate that highly detailed information can be obtained from 2-D NMR spectra of a protein that is significantly larger than those studied previously.
NASA Astrophysics Data System (ADS)
Hasanuddin; Setyawan, A.; Yulianto, B.
2018-03-01
Assessment of road pavement performance is deemed necessary to improve the quality of road maintenance and rehabilitation management. This research evaluates a road both functionally and structurally and recommends the handling to be done. Pavement performance is assessed with both functional and structural evaluation. Functional evaluation of the pavement is based on the IRI (International Roughness Index) value, derived among other sources from NAASRA readings, which is used for analysis and for recommending road handling. Meanwhile, structural evaluation of the pavement is done by analyzing deflection values based on FWD (Falling Weight Deflectometer) data, resulting in SN (Structural Number) values. The analysis yields SN eff (effective structural number) and SN f (future structural number) values; comparing SN eff with SN f leads to the SCI (Structural Condition Index) value, which implies the recommended pavement handling. The study was applied to the Simpang Tuan-Batas Kota Jambi road. Based on the functional analysis, the road was split into 12 segments, of which segments 1, 3, 5, 7, 9, and 11 required regular maintenance; segments 2, 4, 8, 10, and 12 required periodic maintenance; and segment 6 required rehabilitation. The structural analysis resulted in 8 segments: segments 1 and 2 were recommended for regular maintenance, segments 3, 4, 5, and 7 for functional overlay, and segments 6 and 8 for structural overlay.
NASA Technical Reports Server (NTRS)
Knox, Charles E.
1993-01-01
A piloted simulation study was conducted to examine the requirements for using electromechanical flight instrumentation to provide situation information and flight guidance for manually controlled flight along curved precision approach paths to a landing. Six pilots were used as test subjects. The data from these tests indicated that flight director guidance is required for the manually controlled flight of a jet transport airplane on curved approach paths. Acceptable path tracking performance was attained with each of the three situation information algorithms tested. Approach paths with both multiple sequential turns and short final path segments were evaluated. Pilot comments indicated that all the approach paths tested could be used in normal airline operations.
Web-based segmentation and display of three-dimensional radiologic image data.
Silverstein, J; Rubenstein, J; Millman, A; Panko, W
1998-01-01
In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.
Kinect-based sign language recognition of static and dynamic hand movements
NASA Astrophysics Data System (ADS)
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
2017-02-01
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers explain some factors they encountered that caused misclassification of signs.
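The matching step can be illustrated with a small Python sketch of the normalized correlation coefficient between a captured frame and stored templates; this is a generic stand-in, not the study's MATLAB/Kinect implementation, and the arrays are made up.

```python
import numpy as np

def normalized_correlation(image, template):
    """Pearson-style normalized correlation between two equal-sized grayscale patches."""
    a = image.astype(float).ravel() - image.mean()
    b = template.astype(float).ravel() - template.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify(frame, templates):
    """Return the label of the best-matching template and its correlation score."""
    scores = {label: normalized_correlation(frame, tpl) for label, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

rng = np.random.default_rng(0)
templates = {letter: rng.random((64, 64)) for letter in "ABC"}   # stored gesture templates
frame = templates["B"] + 0.1 * rng.random((64, 64))              # noisy captured frame
print(classify(frame, templates))
```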
Ruggeri, Marco; Uhlhorn, Stephen R.; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie
2012-01-01
Abstract: An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time. PMID:22808424
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
[Segment analysis of the target market of physiotherapeutic services].
Babaskin, D V
2010-01-01
The objective of the present study was to demonstrate the possibilities of analysing selected segments of the target market of physiotherapeutic services provided by medical and preventive facilities of two major types. The main features of a target segment, such as the provision of therapeutic massage, are illustrated in terms of two characteristics, namely attractiveness to the users and the ability of a given medical facility to satisfy their requirements. Based on the analysis of the portfolio of available target segments, the most promising ones (winner segments) were selected for further marketing studies. This choice does not exclude the possibility of involving other segments of medical services in marketing activities.
Pooler, B Dustin; Hernando, Diego; Ruby, Jeannine A; Ishii, Hiroshi; Shimakawa, Ann; Reeder, Scott B
2018-04-17
Current chemical-shift-encoded (CSE) MRI techniques for measuring hepatic proton density fat fraction (PDFF) are sensitive to motion artifacts. Initial validation of a motion-robust 2D-sequential CSE-MRI technique for quantification of hepatic PDFF. Phantom study and prospective in vivo cohort. Fifty adult patients (27 women, 23 men, mean age 57.2 years). 3D, 2D-interleaved, and 2D-sequential CSE-MRI acquisitions at 1.5T. Three CSE-MRI techniques (3D, 2D-interleaved, 2D-sequential) were performed in a PDFF phantom and in vivo. Reference standards were 3D CSE-MRI PDFF measurements for the phantom study and single-voxel MR spectroscopy hepatic PDFF measurements (MRS-PDFF) in vivo. In vivo hepatic MRI-PDFF measurements were performed during a single breath-hold (BH) and free breathing (FB), and were repeated by a second reader for the FB 2D-sequential sequence to assess interreader variability. Correlation plots to validate the 2D-sequential CSE-MRI against the phantom and in vivo reference standards. Bland-Altman analysis of FB versus BH CSE-MRI acquisitions to evaluate robustness to motion. Bland-Altman analysis to assess interreader variability. Phantom 2D-sequential CSE-MRI PDFF measurements demonstrated excellent agreement and correlation (R 2 > 0.99) with 3D CSE-MRI. In vivo, the mean (±SD) hepatic PDFF was 8.8 ± 8.7% (range 0.6-28.5%). Compared with BH acquisitions, FB hepatic PDFF measurements demonstrated bias of +0.15% for 2D-sequential compared with + 0.53% for 3D and +0.94% for 2D-interleaved. 95% limits of agreement (LOA) were narrower for 2D-sequential (±0.99%), compared with 3D (±3.72%) and 2D-interleaved (±3.10%). All CSE-MRI techniques had excellent correlation with MRS (R 2 > 0.97). The FB 2D-sequential acquisition demonstrated little interreader variability, with mean bias of +0.07% and 95% LOA of ± 1.53%. This motion-robust 2D-sequential CSE-MRI can accurately measure hepatic PDFF during free breathing in a patient population with a range of PDFF values of 0.6-28.5%, permitting accurate quantification of liver fat content without the need for suspended respiration. 1 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
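The Bland-Altman summary used above (bias and 95% limits of agreement between free-breathing and breath-hold measurements) reduces to a few lines; the PDFF values below are invented, not the study data.

```python
import numpy as np

pdff_bh = np.array([2.1, 5.4, 9.8, 14.3, 21.7, 27.9])   # breath-hold PDFF (%)
pdff_fb = np.array([2.3, 5.3, 10.1, 14.5, 21.9, 28.2])  # free-breathing PDFF (%)

diff = pdff_fb - pdff_bh
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                            # half-width of the 95% limits
print(f"bias {bias:+.2f}%, 95% limits of agreement {bias - loa:+.2f}% to {bias + loa:+.2f}%")
```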
Analysis of Regional Effects on Market Segment Production
2016-06-01
Master's thesis by James D. Moffitt, June 2016. Thesis Advisor: Lyn R. Whitaker; Co-Advisor: Jonathan K. Alt. ... accessions in Potential Rating Index Zip Code Market New Evolution (PRIZM NE) market segments. This model will aid USAREC G2 analysts involved in ...
Alvine, Gregory F; Swain, James M; Asher, Marc A; Burton, Douglas C
2004-08-01
The controversy of burst fracture surgical management is addressed in this retrospective case study and literature review. The series consisted of 40 consecutive patients, index included, with 41 fractures treated with stiff, limited segment transpedicular bone-anchored instrumentation and arthrodesis from 1987 through 1994. No major acute complications such as death, paralysis, or infection occurred. For the 30 fractures with pre- and postoperative computed tomography studies, spinal canal compromise was 61% and 32%, respectively. Neurologic function improved in 7 of 14 patients (50%) and did not worsen in any. The principal problem encountered was screw breakage, which occurred in 16 of the 41 (39%) instrumented fractures. As we have previously reported, transpedicular anterior bone graft augmentation significantly decreased variable screw placement (VSP) implant breakage. However, it did not prevent Isola implant breakage in two-motion segment constructs. Compared with VSP, Isola provided better sagittal plane realignment and constructs that have been found to be significantly stiffer. Unplanned reoperation was necessary in 9 of the 40 patients (23%). At 1- and 2-year follow-up, 95% and 79% of patients were available for study, and a satisfactory outcome was achieved in 84% and 79%, respectively. These satisfaction and reoperation rates are consistent with the literature of the time. Based on these observations and the loads to which implant constructs are exposed following posterior realignment and stabilization of burst fractures, we recommend that three- or four-motion segment constructs, rather than two motion, be used. To save valuable motion segments, planned construct shortening can be used. An alternative is sequential or staged anterior corpectomy and structural grafting.
NASA Astrophysics Data System (ADS)
Suter, Max
2015-01-01
During the 3 May 1887 Mw 7.5 Sonora earthquake (surface rupture end-to-end length: 101.8 km), an array of three north-south striking Basin-and-Range Province faults (from north to south Pitáycachi, Teras, and Otates) slipped sequentially along the western margin of the Sierra Madre Occidental Plateau. This detailed field survey of the 1887 earthquake rupture zone along the Pitáycachi fault includes mapping the rupture scarp and measurements of surface deformation. The surface rupture has an endpoint-to-endpoint length of ≥41.0 km, dips 70°W, and is characterized by normal left-lateral extension. The maximum surface offset is 487 cm and the mean offset 260 cm. The rupture trace shows a complex pattern of second-order segmentation. However, this segmentation is not expressed in the 1887 along-rupture surface offset profile, which indicates that the secondary segments are linked at depth into a single coherent fault surface. The Pitáycachi surface rupture shows a well-developed bipolar branching pattern suggesting that the rupture originated in its central part, where the polarity of the rupture bifurcations changes. Most likely the rupture first propagated bilaterally along the Pitáycachi fault. The southern rupture front likely jumped across a step over to the Teras fault and from there across a major relay zone to the Otates fault. Branching probably resulted from the lateral propagation of the rupture after breaching the seismogenic part of the crust, given that the much shorter ruptures of the Otates and Teras segments did not develop branches.
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin
2018-02-01
The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to frequency characteristics of the ROIs was initially computed from the discrete cosine transform and spatial domain of the images. Third, a support vector machine based machine learning classifier was used to optimally classify the selected optimal image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70+/-0.04, which was significantly higher than that obtained by extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63+/-0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.
Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A
2011-04-01
Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
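The final artery/vein separation step, scoring each voxel's feature vector by its Mahalanobis distance to artery and vein ROI statistics under a pooled sample covariance, can be sketched as follows; the two-dimensional feature vectors are made up for illustration.

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled sample covariance of several groups of feature vectors."""
    dof = sum(len(g) - 1 for g in groups)
    return sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups) / dof

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
artery = rng.normal([1.0, 0.2], 0.1, size=(50, 2))   # ROI samples: (correlation, arrival time)
vein = rng.normal([0.6, 0.8], 0.1, size=(50, 2))
cov_inv = np.linalg.inv(pooled_covariance([artery, vein]))

voxel = np.array([0.95, 0.25])                       # candidate voxel feature vector
d_art = mahalanobis(voxel, artery.mean(axis=0), cov_inv)
d_vein = mahalanobis(voxel, vein.mean(axis=0), cov_inv)
print("artery" if d_art < d_vein else "vein", round(d_art, 2), round(d_vein, 2))
```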
Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.
Lin, Jane-Ming; Tsai, Yi-Yu
2005-01-01
To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months post-operatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P < .01). The mean distance between the first and second eyes was 0.31 +/- 0.25 mm in the sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 +/- 48.3 degrees in the sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.
A Sequential Analysis of Parent-Child Interactions in Anxious and Nonanxious Families
ERIC Educational Resources Information Center
Williams, Sarah R.; Kertz, Sarah J.; Schrock, Matthew D.; Woodruff-Borden, Janet
2012-01-01
Although theoretical work has suggested that reciprocal behavior patterns between parent and child may be important in the development of childhood anxiety, most empirical work has failed to consider the bidirectional nature of interactions. The current study sought to address this limitation by utilizing a sequential approach to exploring…
ERIC Educational Resources Information Center
Chen, Bodong; Resendes, Monica; Chai, Ching Sing; Hong, Huang-Yao
2017-01-01
As collaborative learning is actualized through evolving dialogues, temporality inevitably matters for the analysis of collaborative learning. This study attempts to uncover sequential patterns that distinguish "productive" threads of knowledge-building discourse. A database of Grade 1-6 knowledge-building discourse was first coded for…
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...
ERIC Educational Resources Information Center
Bain, Sherry K.
1993-01-01
Analysis of Kaufman Assessment Battery for Children (K-ABC) Sequential and Simultaneous Processing scores of 94 children (ages 6-12) with learning disabilities produced factor patterns generally supportive of the traditional K-ABC Mental Processing structure with the exception of Spatial Memory. The sample exhibited relative processing strengths…
NASA Astrophysics Data System (ADS)
Schweizer, Steffen; Schlueter, Steffen; Hoeschen, Carmen; Koegel-Knabner, Ingrid; Mueller, Carsten W.
2017-04-01
Soil organic matter (SOM) is distributed on mineral surfaces depending on physicochemical soil properties that vary at the submicron scale. Nanoscale secondary ion mass spectrometry (NanoSIMS) can be used to visualize the spatial distribution of up to seven elements simultaneously at a lateral resolution of approximately 100 nm, from which patterns of SOM coatings can be derived. Existing computational methods are mostly confined to visualization and lack spatial quantification measures of coverage and connectivity of organic matter coatings. This study proposes a methodology for the spatial analysis of SOM coatings based on supervised pixel classification and automatic image analysis of the ¹²C, ¹²C¹⁴N (indicative of SOM) and ¹⁶O (indicative of mineral surfaces) secondary ion distributions. The image segmentation of the secondary ion distributions into mineral particle surface and organic coating was done with a machine learning algorithm, which accounts for multiple features like size, color, intensity, edge and texture in all three ion distributions simultaneously. Our workflow allowed the spatial analysis of differences in the SOM coverage during soil development in the Damma glacier forefield (Switzerland) based on NanoSIMS measurements (n=121; containing ca. 4000 particles). The Damma chronosequence comprises several stages of soil development with increasing ice-free period (from ca. 15 to >700 years). To investigate mineral-associated SOM in the developing soil we obtained clay fractions (<2 μm) from two density fractions: light mineral (1.6 to 2.2 g cm⁻³) and heavy mineral (>2.2 g cm⁻³). We found increased coverage and a simultaneous development from patchily distributed organic coatings to more connected coatings with increasing time after glacial retreat. The normalized N:C ratio (¹²C¹⁴N : (¹²C¹⁴N + ¹²C)) on the organic matter coatings was higher in the medium-aged soils than in the young and mature ones in both the heavy and light mineral fractions. This reflects the sequential accumulation of proteinaceous SOM in the medium-aged soils and C-rich compounds in the mature soils. The results of our microscale image analysis correlated well with the SOM concentration of the fractions measured by an elemental analyzer. Image analysis in combination with secondary ion distributions provides a powerful tool at the required microscale and enhances our mechanistic understanding of SOM stabilization in soil.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with that of a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at the early stages of a trial. Finally, we show that adding a stopping-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
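For readers unfamiliar with alpha spending, the sketch below evaluates the O'Brien-Fleming-type spending function referenced above, alpha(t) = 2 - 2*Phi(z_{alpha/2}/sqrt(t)) at information fraction t, and reports the incremental alpha spent at each interim look. It does not reproduce the paper's Bayesian critical-value algorithms; names and look times are illustrative.

    import numpy as np
    from scipy.stats import norm

    def obrien_fleming_spending(t, alpha=0.05):
        """Lan-DeMets O'Brien-Fleming-type spending at information fraction(s) t."""
        t = np.asarray(t, dtype=float)
        return 2.0 - 2.0 * norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t))

    looks = np.array([0.25, 0.5, 0.75, 1.0])        # information fractions of the interim analyses
    cumulative = obrien_fleming_spending(looks)
    incremental = np.diff(np.r_[0.0, cumulative])   # alpha spent at each look
    for t, c, i in zip(looks, cumulative, incremental):
        print(f"t={t:.2f}  cumulative alpha={c:.5f}  incremental={i:.5f}")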
Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P
2003-01-01
Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino
2018-01-01
Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Colony image acquisition and genetic segmentation algorithm and colony analyses
NASA Astrophysics Data System (ADS)
Wang, W. X.
2012-01-01
Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows the segmentation problem to be treated as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied in different applications.
Microstructural Organization of Elastomeric Polyurethanes with Siloxane-Containing Soft Segments
NASA Astrophysics Data System (ADS)
Choi, Taeyi; Weklser, Jadwiga; Padsalgikar, Ajay; Runt, James
2011-03-01
In the present study, we investigate the microstructure of two series of segmented polyurethanes (PUs) containing siloxane-based soft segments and the same hard segments, the latter synthesized from diphenylmethane diisocyanate and butanediol. The first series is synthesized using a hydroxy-terminated polydimethylsiloxane macrodiol and varying hard segment contents. The second series are derived from an oligomeric diol containing both siloxane and aliphatic carbonate species. Hard domain morphologies were characterized using tapping mode atomic force microscopy and quantitative analysis of hard/soft segment demixing was conducted using small-angle X-ray scattering. The phase transitions of all materials were investigated using DSC and dynamic mechanical analysis, and hydrogen bonding by FTIR spectroscopy.
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information captured by a Markov random field model was used in image segmentation; it can effectively remove noise and yield more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the cluster centers of the different tissues and the background of the medical image using the fuzzy c-means clustering method. Then we determine the threshold points for multi-threshold segmentation using a two-dimensional histogram method and segment the image. Multivariate information is fused based on Dempster-Shafer evidence theory to obtain the final image fusion and segmentation. This paper combines the above three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more in line with human vision and is of vital significance for accurate analysis and application of tissues.
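A compact fuzzy c-means sketch over pixel gray levels, illustrating only the clustering step described above; the two-dimensional histogram thresholding and Dempster-Shafer fusion steps are not reproduced, and the code is illustrative rather than the authors' implementation.

    import numpy as np

    def fcm_1d(values, c=3, m=2.0, n_iter=100, seed=0):
        """Fuzzy c-means on a 1-D array of gray levels; returns centers and memberships."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(values), c))
        u /= u.sum(axis=1, keepdims=True)                  # random initial fuzzy memberships
        for _ in range(n_iter):
            um = u ** m
            centers = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
            dist = np.abs(values[:, None] - centers[None, :]) + 1e-12
            u = 1.0 / dist ** (2.0 / (m - 1.0))            # standard FCM membership update
            u /= u.sum(axis=1, keepdims=True)
        return centers, u

    rng = np.random.default_rng(7)
    pixels = np.r_[rng.normal(40, 5, 500),    # background
                   rng.normal(110, 8, 500),   # one tissue class
                   rng.normal(180, 6, 500)]   # another tissue class
    centers, _ = fcm_1d(pixels, c=3)
    print("cluster centers:", np.sort(centers))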
NASA Astrophysics Data System (ADS)
Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.
2014-12-01
Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
ChIP-re-ChIP: Co-occupancy Analysis by Sequential Chromatin Immunoprecipitation.
Beischlag, Timothy V; Prefontaine, Gratien G; Hankinson, Oliver
2018-01-01
Chromatin immunoprecipitation (ChIP) exploits the specific interactions between DNA and DNA-associated proteins. It can be used to examine a wide range of experimental parameters. A number of proteins bound at the same genomic location can identify a multi-protein chromatin complex where several proteins work together to regulate gene transcription or chromatin configuration. In many instances, this can be achieved using sequential ChIP; or simply, ChIP-re-ChIP. Whether it is for the examination of specific transcriptional or epigenetic regulators, or for the identification of cistromes, the ability to perform a sequential ChIP adds a higher level of power and definition to these analyses. In this chapter, we describe a simple and reliable method for the sequential ChIP assay.
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
Axonal transports of tripeptidyl peptidase II in rat sciatic nerves.
Chikuma, Toshiyuki; Shimizu, Maki; Tsuchiya, Yukihiro; Kato, Takeshi; Hojo, Hiroshi
2007-01-01
Axonal transport of tripeptidyl peptidase II, a putative cholecystokinin inactivating serine peptidase, was examined in the proximal, middle, and distal segments of rat sciatic nerves using a double ligation technique. Enzyme activity significantly increased not only in the proximal segment but also in the distal segment 12-72h after ligation, and the maximal enzyme activity was found in the proximal and distal segments at 72h. Western blot analysis of tripeptidyl peptidase II showed that its immunoreactivities in the proximal and distal segments were 3.1- and 1.7-fold higher than that in the middle segment. The immunohistochemical analysis of the segments also showed an increase in immunoreactive tripeptidyl peptidase II level in the proximal and distal segments in comparison with that in the middle segment, indicating that tripeptidyl peptidase II is transported by anterograde and retrograde axonal flow. The results suggest that tripeptidyl peptidase II may be involved in the metabolism of neuropeptides in nerve terminals or synaptic clefts.
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, in clinical trials, it is still important to minimize the sample size when the null hypothesis is not rejected; in post-market drug and vaccine safety surveillance, however, that is not the priority. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
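The contrast between convex and concave spending shapes discussed above can be illustrated with the power family alpha(t) = alpha * t**rho, where rho > 1 gives a convex shape (little alpha spent early, as in clinical trials) and rho < 1 a concave shape (more alpha spent early). This is a generic illustration, not the paper's specific functions.

    import numpy as np

    def power_spending(t, alpha=0.05, rho=2.0):
        """Power-family Type I error spending: alpha * t**rho for information fraction t."""
        return alpha * np.asarray(t, dtype=float) ** rho

    t = np.linspace(0.1, 1.0, 10)
    convex = power_spending(t, rho=3.0)     # conservative early on, clinical-trial-like
    concave = power_spending(t, rho=0.5)    # spends more alpha early, surveillance-oriented
    for ti, cv, cc in zip(t, convex, concave):
        print(f"t={ti:.1f}  convex={cv:.4f}  concave={cc:.4f}")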
Comparison of human embryo morphokinetic parameters in sequential or global culture media.
Kazdar, Nadia; Brugnon, Florence; Bouche, Cyril; Jouve, Guilhem; Veau, Ségolène; Drapier, Hortense; Rousseau, Chloé; Pimentel, Céline; Viard, Patricia; Belaud-Rotureau, Marc-Antoine; Ravel, Célia
2017-08-01
A prospective study on randomized patients was conducted to determine how morphokinetic parameters are altered in embryos grown in sequential versus global culture media. Eleven morphokinetic parameters of 160 single embryos transferred were analyzed by time-lapse imaging involving two University-affiliated in vitro fertilization (IVF) centers. We found that the fading of the two pronuclei occurred earlier in global (22.56±2.15 hpi) versus sequential media (23.63±2.71 hpi; p=0.0297). Likewise, the first cleavage started earlier at 24.52±2.33 hpi vs 25.76±2.95 hpi (p=0.0158). Also, the first cytokinesis was shorter in global medium, lasting 18±10.2 minutes in global versus 36±37.8 minutes in sequential culture medium (p < 0.0001). We also observed a significant shortening in the duration of the 2-cell stage in sequential medium: 10.64±2.75 h versus 11.66±1.11 h in global medium (p=0.0225), which suggested a faster progression of the embryos through their first mitotic cell cycle. In conclusion, morphokinetic analysis of human embryos by time-lapse imaging reveals significant differences in five kinetic variables according to culture medium. Our study highlights the need to adapt morphokinetic analysis according to the type of media used to best support human early embryo development.
Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.
Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena
2017-06-01
Temporal changes in the magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that a meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
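A simplified sketch of a retrospective CUSUM-type check on a sequence of study effect sizes with bootstrap critical values is shown below. It ignores the random-effects weighting and the estimation of τ² handled in the paper, so it is purely illustrative of the CUSUM-plus-bootstrap idea.

    import numpy as np

    def cusum_stat(effects):
        """Maximum absolute standardized cumulative deviation from the overall mean."""
        d = effects - effects.mean()
        return np.max(np.abs(np.cumsum(d))) / (np.std(effects, ddof=1) * np.sqrt(len(effects)))

    def bootstrap_critical_value(effects, alpha=0.05, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        stats = [cusum_stat(rng.choice(effects, size=len(effects), replace=True))
                 for _ in range(n_boot)]                   # resampling breaks any temporal trend
        return np.quantile(stats, 1.0 - alpha)

    rng = np.random.default_rng(3)
    effects = np.r_[rng.normal(0.6, 0.2, 15), rng.normal(0.2, 0.2, 15)]   # drift after study 15
    stat, crit = cusum_stat(effects), bootstrap_critical_value(effects)
    print(f"CUSUM = {stat:.2f}, critical value = {crit:.2f}, change detected: {stat > crit}")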
Protein classification using sequential pattern mining.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2006-01-01
Protein classification in terms of fold recognition can be employed to determine the structural and functional properties of a newly discovered protein. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. One of the most efficient SPM algorithms, cSPADE, is employed for protein primary structure analysis. Then a classifier uses the extracted sequential patterns for classifying proteins of unknown structure in the appropriate fold category. The proposed methodology exhibited an overall accuracy of 36% in a multi-class problem of 17 candidate categories. The classification performance reaches up to 65% when the three most probable protein folds are considered.
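One way to picture the feature construction is sketched below: each mined sequential pattern becomes a binary feature indicating whether it occurs, in order but with gaps allowed, within a protein's primary structure. The cSPADE mining step and the classifier are not reproduced; the sequences and patterns are toy examples.

    def contains_pattern(sequence, pattern):
        """True if `pattern` occurs in order (gaps allowed) within `sequence`."""
        it = iter(sequence)
        return all(symbol in it for symbol in pattern)   # each `in` consumes the iterator

    def pattern_features(sequences, patterns):
        """Binary feature matrix: rows = sequences, columns = sequential patterns."""
        return [[int(contains_pattern(s, p)) for p in patterns] for s in sequences]

    proteins = ["MKVLAAGIV", "MKAALGV", "GGHHLLK"]   # toy primary structures
    patterns = ["MKV", "AAG", "HLK"]                 # toy sequential patterns
    for row in pattern_features(proteins, patterns):
        print(row)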
Pennington, A J; Pentreath, V W
1988-01-01
The isolated segmental ganglia of the horse leech Haemopis sanguisuga were used as a model system to study the utilization and control of glycogen stores within nervous tissue. The glycogen in the ganglia was extracted and assayed fluorimetrically and its cellular localization and turnover studied by autoradiography in conjunction with [(3)H]glucose. We measured the glycogen after various periods of electrical stimulation and after incubation with K(+), Ca(2+), ouabain and glucose. The results for each experimental ganglion were compared to a paired control ganglion and the results analysed by paired t-tests. Electrical stimulation caused sequential changes in glycogen levels: a reduction of up to 67% (5-10 min); followed by an increase of up to 124% (between 15-50 min); followed by a reduction of up to 63% (60-90 min). Values were calculated for glucose utilization (e.g. 0.53 μmol glucose/g wet weight/min after 90 min) and estimates derived for glucose consumption per action potential per neuron (e.g. 0.12 fmol at 90 min). Glucose (1.5-10 mM) increased the amount of glycogen (1.5 mM by 30% at 60 min) and attenuated the effects of electrical stimulation. Ouabain (1 mM) blocked the effect of 5 min electrical stimulation. Nine millimolar K(+) increased glycogen by 27% after 10 min and decreased glycogen by 34% after 60 min; 3 mM Ca(2+) had no effect after 10 or 20 min and decreased glycogen by 29% after 60 min. Other concentrations of K(+) and Ca(2+) reduced glycogen after 60 min. Autoradiographic analysis demonstrated that the effects of elevated K(+) were principally within the glial cells. We conclude that (i) the glycogen stores in the glial cells of leech segmental ganglia provide an endogenous energy source which can support sustained neuronal activity, (ii) both electrical stimulation and elevated K(+) can induce gluconeogenesis within the ganglia, and (iii) electrical activation of neurons produces changes in the glycogen in the glial cells which are controlled in part by changes in K(+).
Sequential Pattern Analysis: Method and Application in Exploring How Students Develop Concept Maps
ERIC Educational Resources Information Center
Chiu, Chiung-Hui; Lin, Chien-Liang
2012-01-01
Concept mapping is a technique that represents knowledge in graphs. It has been widely adopted in science education and cognitive psychology to aid learning and assessment. To realize the sequential manner in which students develop concept maps, most research relies upon human-dependent, qualitative approaches. This article proposes a method for…
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kapland, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
ERIC Educational Resources Information Center
Kaufman, Alan S.; Kamphaus, Randy W.
1984-01-01
The construct validity of the Sequential Processing, Simultaneous Processing and Achievement scales of the Kaufman Assessment Battery for Children was supported by factor-analytic investigations of a representative national stratified sample of 2,000 children. Correlations provided insight into the relationship of sequential/simultaneous…
ROC and Loss Function Analysis in Sequential Testing
ERIC Educational Resources Information Center
Muijtjens, Arno M. M.; Van Luijk, Scheltus J.; Van Der Vleuten, Cees P. M.
2006-01-01
Sequential testing is applied to reduce costs in SP-based tests (OSCEs). Initially, all candidates take a screening test consisting of a part of the OSCE. Candidates who fail the screen sit the complete test, whereas those who pass the screen are qualified as a pass of the complete test. The procedure may result in a reduction of testing…
Economic Analysis. Volume V. Course Segments 65-79.
ERIC Educational Resources Information Center
Sterling Inst., Washington, DC. Educational Technology Center.
The fifth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 65-79 of the course. Included in the volume are discussions of monopoly markets, monopolistic competition, oligopoly markets, and the theory of factor demand and supply. Other segments of the course, the…
The concept of double inlet-double outlet right ventricle: a distinct congenital heart disease.
Spadotto, Veronica; Frescura, Carla; Ho, Siew Yen; Thiene, Gaetano
The aim of this study was to estimate the incidence and to analyze the anatomy of double inlet-double outlet right ventricle complex and its associated cardiac anomalies in our autopsy series. Among the 1640 hearts with congenital heart disease of our Anatomical Collection, we reviewed the specimens with double inlet-double outlet right ventricle, according to the sequential-segmental analysis, identifying associated cardiac anomalies and examining lung histology to assess the presence of pulmonary vascular disease. We identified 14 hearts with double inlet-double outlet right ventricle (0.85%). Right atrial isomerism was observed in 10 hearts, situs solitus in 3 and left atrial isomerism in one. Regarding the mode of atrioventricular connection, all hearts but one had a common atrioventricular valve. Systemic or pulmonary venous abnormalities were noted in all patients with atrial isomerism. In nine patients a valvular or subvalvular pulmonary stenosis was present. Among the functionally "univentricular hearts", double inlet- double outlet right ventricle represents a peculiar entity, mostly in association with right atrial isomerism. Multiple cardiac anomalies are associated and may complicate surgical repair. Copyright © 2016 Elsevier Inc. All rights reserved.
Fast approximate delivery of fluence maps for IMRT and VMAT
NASA Astrophysics Data System (ADS)
Balvert, Marleen; Craft, David
2017-02-01
In this article we provide a method to generate the trade-off between delivery time and fluence map matching quality for dynamically delivered fluence maps. At the heart of our method lies a mathematical programming model that, for a given duration of delivery, optimizes leaf trajectories and dose rates such that the desired fluence map is reproduced as well as possible. We begin with the single fluence map case and then generalize the model and the solution technique to the delivery of sequential fluence maps. The resulting large-scale, non-convex optimization problem was solved using a heuristic approach. We test our method using a prostate case and a head and neck case, and present the resulting trade-off curves. Analysis of the leaf trajectories reveals that short time plans have larger leaf openings in general than longer delivery time plans. Our method allows one to explore the continuum of possibilities between coarse, large segment plans characteristic of direct aperture approaches and narrow field plans produced by sliding window approaches. Exposing this trade-off will allow for an informed choice between plan quality and solution time. Further research is required to speed up the optimization process to make this method clinically implementable.
Payen, Celia; Di Rienzi, Sara C; Ong, Giang T; Pogachar, Jamie L; Sanchez, Joseph C; Sunshine, Anna B; Raghuraman, M K; Brewer, Bonita J; Dunham, Maitreya J
2014-03-20
Population adaptation to strong selection can occur through the sequential or parallel accumulation of competing beneficial mutations. The dynamics, diversity, and rate of fixation of beneficial mutations within and between populations are still poorly understood. To study how the mutational landscape varies across populations during adaptation, we performed experimental evolution on seven parallel populations of Saccharomyces cerevisiae continuously cultured in limiting sulfate medium. By combining quantitative polymerase chain reaction, array comparative genomic hybridization, restriction digestion and contour-clamped homogeneous electric field gel electrophoresis, and whole-genome sequencing, we followed the trajectory of evolution to determine the identity and fate of beneficial mutations. During a period of 200 generations, the yeast populations displayed parallel evolutionary dynamics that were driven by the coexistence of independent beneficial mutations. Selective amplifications rapidly evolved under this selection pressure, in particular common inverted amplifications containing the sulfate transporter gene SUL1. Compared with single clones, detailed analysis of the populations uncovers a greater complexity whereby multiple subpopulations arise and compete despite a strong selection. The most common evolutionary adaptation to strong selection in these populations grown in sulfate limitation is determined by clonal interference, with adaptive variants both persisting and replacing one another.
Unger, Miriam; Pfeifer, Frank; Siesler, Heinz W
2016-07-01
The main objective of this communication is to compare the performance of a miniaturized handheld near-infrared (NIR) spectrometer with a benchtop Fourier transform near-infrared (FT-NIR) spectrometer. Generally, NIR spectroscopy is an extremely powerful analytical tool to study hydrogen-bonding changes of amide functionalities in solid and liquid materials and therefore variable temperature NIR measurements of polyamide II (PAII) have been selected as a case study. The information content of the measurement data has been further enhanced by exploiting the potential of two-dimensional correlation spectroscopy (2D-COS) and the perturbation correlation moving window two-dimensional (PCMW2D) evaluation technique. The data provide valuable insights not only into the changes of the hydrogen-bonding structure and the recrystallization of the hydrocarbon segments of the investigated PAII but also in their sequential order. Furthermore, it has been demonstrated that the 2D-COS and PCMW2D results derived from the spectra measured with the miniaturized NIR instrument are equivalent to the information extracted from the data obtained with the high-performance FT-NIR instrument. © The Author(s) 2016.
Christenson, Stuart D; Chareonthaitawee, Panithaya; Burnes, John E; Hill, Michael R S; Kemp, Brad J; Khandheria, Bijoy K; Hayes, David L; Gibbons, Raymond J
2008-02-01
Cardiac resynchronization therapy (CRT) can improve left ventricular (LV) hemodynamics and function. Recent data suggest the energy cost of such improvement is favorable. The effects of sequential CRT on myocardial oxidative metabolism (MVO(2)) and efficiency have not been previously assessed. Eight patients with NYHA class III heart failure were studied 196 +/- 180 days after CRT implant. Dynamic [(11)C]acetate positron emission tomography (PET) and echocardiography were performed after 1 hour of: 1) AAI pacing, 2) simultaneous CRT, and 3) sequential CRT. MVO(2) was calculated using the monoexponential clearance rate of [(11)C]acetate (k(mono)). Myocardial efficiency was expressed in terms of the work metabolic index (WMI). P values represent overall significance from repeated measures analysis. Global LV and right ventricular (RV) MVO(2) were not significantly different between pacing modes, but the septal/lateral MVO(2) ratio differed significantly with the change in pacing mode (AAI pacing = 0.696 +/- 0.094 min(-1), simultaneous CRT = 0.975 +/- 0.143 min(-1), and sequential CRT = 0.938 +/- 0.189 min(-1); overall P = 0.001). Stroke volume index (SVI) (AAI pacing = 26.7 +/- 10.4 mL/m(2), simultaneous CRT = 30.6 +/- 11.2 mL/m(2), sequential CRT = 33.5 +/- 12.2 mL/m(2); overall P < 0.001) and WMI (AAI pacing = 3.29 +/- 1.34 mmHg*mL/m(2)*10(6), simultaneous CRT = 4.29 +/- 1.72 mmHg*mL/m(2)*10(6), sequential CRT = 4.79 +/- 1.92 mmHg*mL/m(2)*10(6); overall P = 0.002) also differed between pacing modes. Compared with simultaneous CRT, additional changes in septal/lateral MVO(2), SVI, and WMI with sequential CRT were not statistically significant on post hoc analysis. In this small selected population, CRT increases LV SVI without increasing MVO(2), resulting in improved myocardial efficiency. Additional improvements in LV work, oxidative metabolism, and efficiency from simultaneous to sequential CRT were not significant.
Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.
2015-01-01
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and between researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper. Key points: Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
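A hedged sketch of the reliability statistics quoted above: the coefficient of variation and a two-way consistency, single-measure intraclass correlation coefficient (Shrout and Fleiss ICC(3,1)) computed from repeated analyses of the same scans. The toy numbers stand in for segmental lean-mass estimates; this is not the study's analysis code.

    import numpy as np

    def cv_percent(ratings):
        """Mean within-subject coefficient of variation (%); ratings: (subjects, repeats)."""
        return float(np.mean(ratings.std(axis=1, ddof=1) / ratings.mean(axis=1)) * 100)

    def icc_3_1(ratings):
        """Two-way mixed, consistency, single-measure ICC (Shrout & Fleiss ICC(3,1))."""
        n, k = ratings.shape
        grand = ratings.mean()
        ss_subjects = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
        ss_raters = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
        ss_total = np.sum((ratings - grand) ** 2)
        ms_subjects = ss_subjects / (n - 1)
        ms_error = (ss_total - ss_subjects - ss_raters) / ((n - 1) * (k - 1))
        return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

    rng = np.random.default_rng(4)
    true_mass = rng.normal(3000, 400, size=30)                       # toy segment lean mass (g)
    ratings = true_mass[:, None] + rng.normal(0, 30, size=(30, 3))   # three repeated analyses
    print(f"CV = {cv_percent(ratings):.2f}%  ICC(3,1) = {icc_3_1(ratings):.3f}")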
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wörmann, Xenia; Lesch, Markus; Steinbeis Innovation gGmbH, Center for Systems Biomedicine, Falkensee
The 2009 influenza pandemic originated from a swine-origin H1N1 virus, which, although less pathogenic than anticipated, may acquire additional virulence-associated mutations in the future. To estimate the potential risk, we sequentially passaged the isolate A/Hamburg/04/2009 in A549 human lung epithelial cells. After passage 6, we observed a 100-fold increased replication rate. High-throughput sequencing of viral gene segments identified five dominant mutations, whose contribution to the enhanced growth was analyzed by reverse genetics. The increased replication rate was pinpointed to two mutations within the hemagglutinin (HA) gene segment (HA1 D130E, HA2 I91L), near the receptor binding site and the stem domain. The adapted virus also replicated more efficiently in mice in vivo. Enhanced replication rate correlated with increased fusion pH of the HA protein and a decrease in receptor affinity. Our data might be relevant for surveillance of pre-pandemic strains and development of high titer cell culture strains for vaccine production. - Highlights: • We observed a spontaneous mutation of a 2009-pandemic H1N1 influenza virus in vitro. • The adaptation led to a 100-fold rise in replication rate in human A549 cells. • Adaptation was caused by two mutations in the HA gene segment. • Adaptation correlates with increased fusion pH and decreased receptor affinity.
Park, Bo-Yong; Lee, Mi Ji; Lee, Seung-Hak; Cha, Jihoon; Chung, Chin-Sang; Kim, Sung Tae; Park, Hyunjin
2018-01-01
Migraineurs show an increased load of white matter hyperintensities (WMHs) and more rapid deep WMH progression. Previous methods for WMH segmentation have limited efficacy to detect small deep WMHs. We developed a new fully automated detection pipeline, DEWS (DEep White matter hyperintensity Segmentation framework), for small and superficially-located deep WMHs. A total of 148 non-elderly subjects with migraine were included in this study. The pipeline consists of three components: 1) white matter (WM) extraction, 2) WMH detection, and 3) false positive reduction. In WM extraction, we adjusted the WM mask to re-assign misclassified WMHs back to WM using many sequential low-level image processing steps. In WMH detection, the potential WMH clusters were detected using an intensity based threshold and region growing approach. For false positive reduction, the detected WMH clusters were classified into final WMHs and non-WMHs using the random forest (RF) classifier. Size, texture, and multi-scale deep features were used to train the RF classifier. DEWS successfully detected small deep WMHs with a high positive predictive value (PPV) of 0.98 and true positive rate (TPR) of 0.70 in the training and test sets. Similar performance of PPV (0.96) and TPR (0.68) was attained in the validation set. DEWS showed a superior performance in comparison with other methods. Our proposed pipeline is freely available online to help the research community in quantifying deep WMHs in non-elderly adults.
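The cluster-level evaluation reported above can be sketched as follows: a detected cluster counts as a true positive if it overlaps the reference WMH mask, PPV is the fraction of detected clusters that are true positives, and TPR is the fraction of reference lesions that are hit. The masks are toy data and the code is not part of DEWS.

    import numpy as np
    from scipy import ndimage

    def cluster_ppv_tpr(detected_mask, reference_mask):
        """Lesion-wise PPV and TPR from binary detection and reference masks."""
        det_lab, n_det = ndimage.label(detected_mask)
        ref_lab, n_ref = ndimage.label(reference_mask)
        tp = sum(1 for i in range(1, n_det + 1) if reference_mask[det_lab == i].any())
        hit = sum(1 for j in range(1, n_ref + 1) if detected_mask[ref_lab == j].any())
        return (tp / n_det if n_det else 0.0), (hit / n_ref if n_ref else 0.0)

    ref = np.zeros((64, 64), bool)
    ref[10:13, 10:13] = True          # reference lesion 1
    ref[40:42, 50:52] = True          # reference lesion 2
    det = np.zeros((64, 64), bool)
    det[11:14, 11:14] = True          # overlaps lesion 1 (true positive)
    det[30:32, 30:32] = True          # false positive
    print("PPV, TPR =", cluster_ppv_tpr(det, ref))   # (0.5, 0.5)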
An interactive method based on the live wire for segmentation of the breast in mammography images.
Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu
2014-01-01
In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. In this paper, Gabor filters and the FCM clustering algorithm are introduced into the Live Wire cost function definition. Using FCM analysis of the image for edge enhancement, we suppress the interference of weak edges and obtain clear segmentation results of the external contours of breast lumps by applying the improved Live Wire to two cases of breast segmentation data. Compared with the traditional image segmentation method, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.
Shakeri Yekta, Sepehr; Gustavsson, Jenny; Svensson, Bo H; Skyllberg, Ulf
2012-01-30
The effect of sequential extraction of trace metals on sulfur (S) speciation in anoxic sludge samples from two lab-scale biogas reactors augmented with Fe was investigated. Analyses of sulfur K-edge X-ray absorption near edge structure (S XANES) spectroscopy and acid volatile sulfide (AVS) were conducted on the residues from each step of the sequential extraction. The S speciation in sludge samples after AVS analysis was also determined by S XANES. Sulfur was mainly present as FeS (≈ 60% of total S) and reduced organic S (≈ 30% of total S), such as organic sulfide and thiol groups, in the anoxic solid phase. Sulfur XANES and AVS analyses showed that during first step of the extraction procedure (the removal of exchangeable cations), a part of the FeS fraction corresponding to 20% of total S was transformed to zero-valent S, whereas Fe was not released into the solution during this transformation. After the last extraction step (organic/sulfide fraction) a secondary Fe phase was formed. The change in chemical speciation of S and Fe occurring during sequential extraction procedure suggests indirect effects on trace metals associated to the FeS fraction that may lead to incorrect results. Furthermore, by S XANES it was verified that the AVS analysis effectively removed the FeS fraction. The present results identified critical limitations for the application of sequential extraction for trace metal speciation analysis outside the framework for which the methods were developed. Copyright © 2011 Elsevier B.V. All rights reserved.
Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S
2013-05-01
This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with 1.5 month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with 1.5 month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
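For orientation, the sketch below monitors a Poisson maximized sequential probability ratio test statistic of the kind used in this style of surveillance: the log-likelihood ratio of the observed versus expected event count is updated as data accrue and compared with a critical value. The critical value here is a placeholder, not one calibrated to a specific surveillance design, and the counts are toy numbers.

    import math

    def max_sprt_llr(observed, expected):
        """Poisson maxSPRT log-likelihood ratio for observed events vs. expected under H0."""
        if observed <= expected:
            return 0.0
        return observed * math.log(observed / expected) - (observed - expected)

    CRITICAL_VALUE = 3.0                     # placeholder; real values come from the design
    monthly_obs = [2, 4, 3, 6, 8, 7, 9]      # toy monthly counts in the exposed group
    monthly_exp = [2.5] * 7                  # toy expected counts from historical data
    cum_obs, cum_exp = 0, 0.0
    for month, (o, e) in enumerate(zip(monthly_obs, monthly_exp), start=1):
        cum_obs += o
        cum_exp += e
        llr = max_sprt_llr(cum_obs, cum_exp)
        print(f"month {month}: LLR = {llr:.2f}", "-> signal" if llr > CRITICAL_VALUE else "")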
Segmentation of radiographic images under topological constraints: application to the femur.
Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang
2010-09-01
A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
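The reported accuracy metric, the root-mean-squared distance between the automatically segmented contour and the manual ground truth, can be computed from nearest-neighbour distances as sketched below; the contours here are synthetic and the code is illustrative only.

    import numpy as np
    from scipy.spatial import cKDTree

    def rms_contour_distance(auto_pts, truth_pts):
        """auto_pts, truth_pts: (n, 2) arrays of contour coordinates in mm."""
        d, _ = cKDTree(truth_pts).query(auto_pts)   # nearest ground-truth point per auto point
        return float(np.sqrt(np.mean(d ** 2)))

    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    truth = np.c_[30 * np.cos(theta), 50 * np.sin(theta)]                # toy bone-like contour
    auto = truth + np.random.default_rng(5).normal(0, 1.0, truth.shape)  # ~1 mm segmentation noise
    print(f"RMS distance = {rms_contour_distance(auto, truth):.2f} mm")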
Shimansky, Y; Saling, M; Wunderlich, D A; Bracha, V; Stelmach, G E; Bloedel, J R
1997-01-01
This study addresses the issue of the role of the cerebellum in the processing of sensory information by determining the capability of cerebellar patients to acquire and use kinesthetic cues received via the active or passive tracing of an irregular shape while blindfolded. Patients with cerebellar lesions and age-matched healthy controls were tested on four tasks: (1) learning to discriminate a reference shape from three others through the repeated tracing of the reference template; (2) reproducing the reference shape from memory by drawing blindfolded; (3) performing the same task with vision; and (4) visually recognizing the reference shape. The cues used to acquire and then to recognize the reference shape were generated under four conditions: (1) "active kinesthesia," in which cues were acquired by the blindfolded subject while actively tracing a reference template; (2) "passive kinesthesia," in which the tracing was performed while the hand was guided passively through the template; (3) "sequential vision," in which the shape was visualized by the serial exposure of small segments of its outline; and (4) "full vision," in which the entire shape was visualized. The sequential vision condition was employed to emulate the sequential way in which kinesthetic information is acquired while tracing the reference shape. The results demonstrate a substantial impairment of cerebellar patients in their capability to perceive two-dimensional irregular shapes based only on kinesthetic cues. There also is evidence that this deficit in part relates to a reduced capacity to integrate temporal sequences of sensory cues into a complete image useful for shape discrimination tasks or for reproducing the shape through drawing. Consequently, the cerebellum has an important role in this type of sensory information processing even when it is not directly associated with the execution of movements.
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
NASA Astrophysics Data System (ADS)
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2012-02-01
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the points yields improved segmentation compared to independent analysis of the two time points.
Distal regulatory regions restrict the expression of cis-linked genes to the tapetal cells.
Franco, Luciana O; de O Manes, Carmem Lara; Hamdi, Said; Sachetto-Martins, Gilberto; de Oliveira, Dulce E
2002-04-24
The oleosin glycine-rich protein genes Atgrp-6, Atgrp-7, and Atgrp-8 occur in clusters in the Arabidopsis genome and are expressed specifically in the tapetum cells. The cis-regulatory regions involved in the tissue-specific gene expression were investigated by fusing different segments of the gene cluster to the uidA reporter gene. Common distal regulatory regions were identified that coordinate expression of the sequential genes. At least two of these genes were regulated spatially by proximal and distal sequences. The cis-acting elements (122 bp upstream of the transcriptional start point) drive the uidA expression to floral tissues, whereas distal 5' upstream regions restrict the gene activity to tapetal cells.
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
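As a rough illustration of velocity-vector blending between via frames, the sketch below blends the linear velocity of two adjacent straight-line segments with a cycloidal weighting profile. It is a minimal reconstruction of the general idea, not the formulation of the cited method; the function names and the 0.5 s blend window are illustrative.

```python
import numpy as np

def cycloidal_weight(tau):
    # Cycloidal blend profile: rises from 0 to 1 with zero slope at both ends.
    return tau - np.sin(2 * np.pi * tau) / (2 * np.pi)

def blended_velocity(t, t_blend_start, t_blend_end, v_prev, v_next,
                     weight=cycloidal_weight):
    """Linear velocity during the blend window between two straight-line
    segments joining consecutive via frames."""
    if t <= t_blend_start:
        return np.asarray(v_prev, float)
    if t >= t_blend_end:
        return np.asarray(v_next, float)
    tau = (t - t_blend_start) / (t_blend_end - t_blend_start)
    w = weight(tau)
    return (1.0 - w) * np.asarray(v_prev, float) + w * np.asarray(v_next, float)

# Example: blend from a segment moving in +x to one moving in +y over 0.5 s.
v = blended_velocity(0.25, 0.0, 0.5, [0.2, 0.0, 0.0], [0.0, 0.2, 0.0])
```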
Jian, Yifan; Xu, Jing; Gradowski, Martin A.; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.
2014-01-01
We present wavefront sensorless adaptive optics (WSAO) Fourier domain optical coherence tomography (FD-OCT) for in vivo small animal retinal imaging. WSAO is attractive especially for mouse retinal imaging because it simplifies optical design and eliminates the need for wavefront sensing, which is difficult in the small animal eye. GPU accelerated processing of the OCT data permitted real-time extraction of image quality metrics (intensity) for arbitrarily selected retinal layers to be optimized. Modal control of a commercially available segmented deformable mirror (IrisAO Inc.) provided rapid convergence using a sequential search algorithm. Image quality improvements with WSAO OCT are presented for both pigmented and albino mouse retinal data, acquired in vivo. PMID:24575347
Race and Older Mothers’ Differentiation: A Sequential Quantitative and Qualitative Analysis
Sechrist, Jori; Suitor, J. Jill; Riffin, Catherine; Taylor-Watson, Kadari; Pillemer, Karl
2011-01-01
The goal of this paper is to demonstrate a process by which qualitative and quantitative approaches are combined to reveal patterns in the data that are unlikely to be detected and confirmed by either method alone. Specifically, we take a sequential approach to combining qualitative and quantitative data to explore race differences in how mothers differentiate among their adult children. We began with a standard multivariate analysis examining race differences in mothers’ differentiation among their adult children regarding emotional closeness and confiding. Finding no race differences in this analysis, we conducted an in-depth comparison of the Black and White mothers’ narratives to determine whether there were underlying patterns that we had been unable to detect in our first analysis. Using this method, we found that Black mothers were substantially more likely than White mothers to emphasize interpersonal relationships within the family when describing differences among their children. In our final step, we developed a measure of familism based on the qualitative data and conducted a multivariate analysis to confirm the patterns revealed by the in-depth comparison of the mothers’ narratives. We conclude that using such a sequential mixed methods approach to data analysis has the potential to shed new light on complex family relations. PMID:21967639
Terrill, Thomas H; Wolfe, Richard M; Muir, James P
2010-12-01
Browse species containing condensed tannins (CTs) are an important source of nutrition for grazing/browsing livestock and wildlife in many parts of the world, but information on fiber concentration and CT-fiber interactions for these plants is lacking. Ten forage or browse species with a range of CT concentrations were oven dried and freeze dried and then analyzed for ash-corrected neutral detergent fiber (NDFom) and corrected acid detergent fiber (ADFom) using separate samples (ADFSEP) and sequential NDF-ADF analysis (ADFSEQ) with the ANKOM™ fiber analysis system. The ADFSEP and ADFSEQ residues were then analyzed for nitrogen (N) concentration. Oven drying increased (P < 0.05) fiber concentrations with some species, but not with others. For high-CT forage and browse species, ADFSEP concentrations were greater (P < 0.05) than NDFom values and approximately double the ADFSEQ values. Nitrogen concentration was greater (P < 0.05) in ADFSEP than ADFSEQ residues, likely due to precipitation with CTs. Sequential NDF-ADF analysis gave more realistic values and appeared to remove most of the fiber residue contaminants in CT forage samples. Freeze drying samples with sequential NDF-ADF analysis is recommended in the ANKOM™ fiber analysis system with CT-containing forage and browse species. Copyright © 2010 Society of Chemical Industry.
Role of genetic mutations in folate-related enzyme genes on Male Infertility
Liu, Kang; Zhao, Ruizhe; Shen, Min; Ye, Jiaxin; Li, Xiao; Huang, Yuan; Hua, Lixin; Wang, Zengjun; Li, Jie
2015-01-01
Several studies showed that the genetic mutations in the folate-related enzyme genes might be associated with male infertility; however, the results were still inconsistent. We performed a meta-analysis with trial sequential analysis to investigate the associations between the MTHFR C677T, MTHFR A1298C, MTR A2756G, MTRR A66G mutations and the MTHFR haplotype with the risk of male infertility. Overall, a total of 37 studies were selected. Our meta-analysis showed that the MTHFR C677T mutation was a risk factor for male infertility in both azoospermia and oligoasthenoteratozoospermia patients, especially in Asian population. Men carrying the MTHFR TC haplotype were most liable to suffer infertility while those with CC haplotype had lowest risk. On the other hand, the MTHFR A1298C mutation was not related to male infertility. MTR A2756G and MTRR A66G were potential candidates in the pathogenesis of male infertility, but more case-control studies were required to avoid false-positive outcomes. All of these results were confirmed by the trial sequential analysis. Finally, our meta-analysis with trial sequential analysis proved that the genetic mutations in the folate-related enzyme genes played a significant role in male infertility. PMID:26549413
Park, Henry S; Gross, Cary P; Makarov, Danil V; Yu, James B
2012-08-01
To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Henry S.; Gross, Cary P.; Makarov, Danil V.
2012-08-01
Purpose: To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. Methods and Materials: First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Results: Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Conclusions: Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias.
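The effect of immortal time bias, and how sequential landmarks remove it, can be reproduced with a toy simulation in which PORT has no true effect but can only be delivered to patients who survive long enough to start it. This is an illustrative sketch on synthetic data, not an analysis of SEER records.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy cohort: exponential survival times (months); PORT delivery requires the
# patient to survive until radiotherapy could begin (the immortal time).
n = 5000
start_rt = rng.uniform(1, 4, n)                  # months until PORT could begin
true_survival = rng.exponential(18, n)           # same distribution in both arms
port = (true_survival > start_rt) & (rng.random(n) < 0.5)
df = pd.DataFrame({"survival": true_survival, "port": port})

for landmark in range(0, 7):
    # Restrict to patients still alive at the landmark, then compare survival
    # beyond it; the naive (landmark 0) comparison favors PORT even though
    # treatment has no true effect, and the bias fades at later landmarks.
    alive = df[df["survival"] >= landmark]
    med_port = alive.loc[alive["port"], "survival"].median() - landmark
    med_obs = alive.loc[~alive["port"], "survival"].median() - landmark
    print(f"landmark {landmark} mo: median beyond landmark "
          f"PORT={med_port:.1f}, observation={med_obs:.1f}")
```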
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
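A generic Wald SPRT skeleton is sketched below. The thresholds are Wald's classical approximations derived from the chosen error probabilities; the per-update log likelihood ratio shown in the example is a hypothetical stand-in and not the paper's specific collision-probability form.

```python
import numpy as np

def wald_sprt(log_likelihood_ratios, alpha=0.05, beta=0.05):
    """Generic Wald SPRT: accumulate per-observation log likelihood ratios and
    stop when the running sum crosses either decision threshold.
    alpha = allowed false-alarm probability, beta = allowed missed-detection
    probability (thresholds below are Wald's classical approximations)."""
    upper = np.log((1 - beta) / alpha)   # accept H1 (e.g., "collision risk")
    lower = np.log(beta / (1 - alpha))   # accept H0 (e.g., "safe")
    llr = 0.0
    for k, step in enumerate(log_likelihood_ratios, start=1):
        llr += step
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "continue sampling", len(log_likelihood_ratios)

# Example: each conjunction update contributes a hypothetical per-step ratio.
decision, n_used = wald_sprt(np.log([2.0, 3.0, 1.5, 4.0]))
print(decision, n_used)
```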
NASA Astrophysics Data System (ADS)
Febriani, F.; Handayani, L.; Setyani, A.; Anggono, T.; Syuhada; Soedjatmiko, B.
2018-03-01
The dimensionality and regional strike of the Cimandiri Fault, West Java, Indonesia have been investigated. The Cimandiri Fault consists of six segments: the Loji, Cidadap, Nyalindung, Cibeber, Saguling and Padalarang segments. The magnetotelluric (MT) investigation was carried out in the Cibeber segment, with 42 observation points distributed along two lines. The magnetotelluric phase tensor was applied to determine the dimensionality and regional strike of the Cibeber segment, Cimandiri Fault, West Java. The dimensionality analysis shows that the skew angle values, which indicate the dimensionality of the study area, lie within -5° ≤ β ≤ 5°. These values indicate that, when building a subsurface model of the Cibeber segment from the magnetotelluric data, it is safe to assume a 2-D structure. The regional strike analysis shows that the regional strike of the Cibeber segment is about N70-80°E.
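For reference, the phase-tensor skew angle β can be computed from a 2×2 impedance tensor roughly as follows (definition assumed to follow Caldwell et al., 2004; the example impedance values are invented).

```python
import numpy as np

def phase_tensor_skew(Z):
    """Skew angle beta (degrees) of the magnetotelluric phase tensor
    Phi = X^{-1} Y, where Z = X + iY is the 2x2 impedance tensor
    (definition assumed to follow Caldwell et al., 2004)."""
    X, Y = Z.real, Z.imag
    Phi = np.linalg.inv(X) @ Y
    beta = 0.5 * np.arctan2(Phi[0, 1] - Phi[1, 0], Phi[0, 0] + Phi[1, 1])
    return np.degrees(beta)

# |beta| <= ~5 degrees is commonly taken to indicate quasi-2-D structure.
Z = np.array([[0.10 + 0.05j, 1.20 + 0.90j],
              [-1.10 - 0.80j, -0.08 - 0.04j]])
print(phase_tensor_skew(Z))
```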
Assignment of simian rotavirus SA11 temperature-sensitive mutant groups B and E to genome segments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombold, J.L.; Estes, M.K.; Ramig, R.F.
1985-05-01
Recombinant (reassortant) viruses were selected from crosses between temperature-sensitive (ts) mutants of simian rotavirus SA11 and wild-type human rotavirus Wa. The double-stranded genome RNAs of the reassortants were examined by electrophoresis in Tris-glycine-buffered polyacrylamide gels and by dot hybridization with a cloned DNA probe for genome segment 2. Analysis of replacements of genome segments in the reassortants allowed construction of a map correlating genome segments providing functions interchangeable between SA11 and Wa. The reassortants revealed a functional correspondence in order of increasing electrophoretic mobility of genome segments. Analysis of the parental origin of genome segments in ts+ SA11/Wa reassortants derived from the crosses SA11 tsB(339) X Wa and SA11 tsE(1400) X Wa revealed that the group B lesion of tsB(339) was located on genome segment 3 and the group E lesion of tsE(1400) was on segment 8.
Identification of uncommon objects in containers
Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.
2017-09-12
A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.
Observations on germ band development in the cellar spider Pholcus phalangioides.
Turetzek, Natascha; Prpic, Nikola-Michael
2016-11-01
Most recent studies of spider embryonic development have focused on representatives of the species-rich group of entelegyne spiders (over 80 % of all extant species). Embryogenesis in the smaller spider groups, however, is less well studied. Here, we describe the development of the germ band in the spider species Pholcus phalangioides, a representative of the haplogyne spiders that are phylogenetically the sister group of the entelegyne spiders. We show that the transition from radially symmetric embryonic anlage to the bilaterally symmetric germ band involves the accumulation of cells in the centre of the embryonic anlage (primary thickening). These cells then disperse all across the embryonic anlage. A secondary thickening of cells then appears in the centre of the embryonic anlage, and this thickening expands and forms the segment addition zone. We also confirm that the major part of the opisthosoma initially develops as a tube shaped structure, and its segments are then sequentially folded down on the yolk during inversion. This special mode of opisthosoma formation has not been reported for entelegyne spiders, but a more comprehensive sampling of this diverse group is necessary to decide whether this peculiarity is indeed lacking in the entelegyne spiders.
Coaxially gated in-wire thin-film transistors made by template assembly.
Kovtyukhova, Nina I; Kelley, Brian K; Mallouk, Thomas E
2004-10-13
Nanowire field effect transistors were prepared by a wet chemical template replication method using anodic aluminum oxide membranes. The membrane pores were first lined with a thin SiO2 layer by the surface sol-gel method. Au, CdS (or CdSe), and Au wire segments were then sequentially electrodeposited within the pores, and the resulting nanowires were released by dissolution of the membrane. Electrofluidic alignment of these nanowires between source and drain leads and evaporation of gold over the central CdS (CdSe) stripe affords a "wrap-around gate" structure. At VDS = -2 V, the Au/CdS/Au devices had an ON/OFF current ratio of 103, a threshold voltage of 2.4 V, and a subthreshold slope of 2.2 V/decade. A 3-fold decrease in the subthreshold slope relative to that of planar nanocrystalline CdSe devices can be attributed to coaxial gating. The control of dimensions afforded by template synthesis should make it possible to reduce the gate dielectric thickness, channel length, and diameter of the semiconductor segment to sublithographic dimensions while retaining the simplicity of the wet chemical synthetic method.
Development of targeted messages to promote smoking cessation among construction trade workers
Strickland, J. R.; Smock, N.; Casey, C.; Poor, T.; Kreuter, M. W.; Evanoff, B. A.
2015-01-01
Blue-collar workers, particularly those in the construction trades, are more likely to smoke and have less success in quitting when compared with white-collar workers. Little is known about health communication strategies that might influence this priority population. This article describes our formative work to develop targeted messages to increase participation in an existing smoking cessation program among construction workers. Using an iterative and sequential mixed-methods approach, we explored the culture, health attitudes and smoking behaviors of unionized construction workers. We used focus group and survey data to inform message development, and applied audience segmentation methods to identify potential subgroups. Among 144 current smokers, 65% reported wanting to quit smoking in the next 6 months and only 15% had heard of a union-sponsored smoking cessation program, despite widespread advertising. We tested 12 message concepts and 26 images with the target audience to evaluate perceived relevance and effectiveness. Participants responded most favorably to messages and images that emphasized family and work, although responses varied by audience segments based on age and parental status. This study is an important step towards integrating the culture of a high-risk group into targeted messages to increase participation in smoking cessation activities. PMID:25231165
NASA Astrophysics Data System (ADS)
Loftfield, Nina; Kästner, Markus; Reithmeier, Eduard
2018-06-01
Local and global liquid transport properties correlate strongly with the morphology of porous materials. Therefore, by characterizing the porous network, information is indirectly gained about the material's properties. Properties like open porosity are easily accessible with techniques like mercury porosimetry. However, 3D image reconstruction, destructive or non-destructive, holds advantages like an accurate spatially resolved representation of the investigated material. Common 3D data acquisition is done by x-ray microtomography or by a combination of focused-ion-beam milling and scanning electron microscopy. In this work a reconstruction approach similar to the latter one is implemented. The porous network is reconstructed through an alternating process of milling the surface by fly cutting and measuring the surface data with a confocal laser scanning microscope. This has the benefit of reconstructing the pore network on the basis of surface height data, measuring the structure boundaries directly. The stack of milled surface height data must be registered and the pore structure segmented. The segmented pore structure is connected throughout the height layers and afterwards meshed. The investigated materials are porous aluminum oxide surface coatings for use in tribological pairings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennemur, Justin G.; Bates, Frank S.; Hillmyer, Marc A.
Synthesis of poly(methyl ethacrylate), (PMEA), in tetrahydrofuran at -78 °C using anionic polymerization techniques results in high molar mass (>30 kg mol-1), low dispersity (1.3), and high conversion (>81%). The molar masses of a series of samples are consistent with values anticipated by the monomer-to-initiator ratio and conversion. These results represent a significant improvement to earlier reported attempts to prepare PMEA using anionic methods. Successful diblock polymerization of polystyrene-block-PMEA, (PS-PMEA), and poly(4-tert-butylstyrene)-block-PMEA, (PtBS-PMEA), is achieved through sequential anionic polymerization techniques with dispersities as low as 1.06 and segment molar fractions close to those targeted. Broad principal scattering peaks observed by small-angle X-ray scattering (SAXS) for symmetric PS-PMEA at relatively high molar mass (39 kg mol-1) suggests an effective interaction parameter (χeff) that is smaller than for PS-block-poly(methyl methacrylate). On the other hand, PtBS-PMEA block polymers form a well-ordered morphology based on SAXS measurements and is attributable to the more hydrophobic PtBS segment. These results confirm the viability of PMEA as a new constituent in the expanding suite of polymers suitable for preparing nanostructured block polymers.
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
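The dynamic-programming half of such a segmentation/clustering scheme can be illustrated with a minimal 1-D example that finds the optimal partition into K contiguous segments by minimizing the within-segment sum of squares; the mixture/EM labelling of segments described in the abstract is omitted in this sketch.

```python
import numpy as np

def dp_segment(y, K):
    """Optimal partition of the 1-D profile y into K contiguous segments,
    minimizing the within-segment sum of squared errors (the DP half of a
    DP-EM scheme; the mixture/EM labelling step is omitted here)."""
    n = len(y)
    s, s2 = np.cumsum(np.r_[0.0, y]), np.cumsum(np.r_[0.0, np.square(y)])
    def sse(i, j):  # SSE of the segment covering indices i..j (inclusive)
        m = j - i + 1
        return (s2[j + 1] - s2[i]) - (s[j + 1] - s[i]) ** 2 / m
    D = np.full((K + 1, n + 1), np.inf)   # D[k, j]: best cost of y[:j] in k segments
    D[0, 0] = 0.0
    back = np.zeros((K + 1, n + 1), dtype=int)
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            costs = [D[k - 1, i] + sse(i, j - 1) for i in range(k - 1, j)]
            i_best = int(np.argmin(costs)) + (k - 1)
            D[k, j], back[k, j] = costs[i_best - (k - 1)], i_best
    # Recover the breakpoints (half-open index ranges).
    bps, j = [], n
    for k in range(K, 0, -1):
        bps.append((back[k, j], j))
        j = back[k, j]
    return list(reversed(bps)), D[K, n]

# Toy CGH-like profile: a gained region of 15 probes inside a normal background.
segments, cost = dp_segment(np.r_[np.zeros(20), np.ones(15) * 2, np.zeros(25)], 3)
print(segments)
```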
ERIC Educational Resources Information Center
Chen, Chin-Chih; McComas, Jennifer J.; Hartman, Ellie; Symons, Frank J.
2011-01-01
Research Findings: In early childhood education, the social ecology of the child is considered critical for healthy behavioral development. There is, however, relatively little information based on directly observing what children do that describes the moment-by-moment (i.e., sequential) relation between physical aggression and peer rejection acts…
ERIC Educational Resources Information Center
Jacobson, Peggy F.; Walden, Patrick R.
2013-01-01
Purpose: This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Method: Narrative samples were…
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
Satínský, Dalibor; Huclová, Jitka; Ferreira, Raquel L C; Montenegro, Maria Conceição B S M; Solich, Petr
2006-02-13
Porous monolithic columns show high performance at relatively low pressure. Coupling short monolithic columns with the sequential injection analysis (SIA) technique offers a new way to introduce a separation step into an otherwise non-separation low-pressure method. In this contribution, a new separation method for the simultaneous determination of ambroxol, methylparaben and benzoic acid was developed based on a novel reversed-phase sequential injection chromatography (SIC) technique with UV detection. A Chromolith SpeedROD RP-18e 50 × 4.6 mm column with a 10 mm precolumn and a FIAlab 3000 system with a six-port selection valve and a 5 ml syringe were used for the sequential injection chromatographic separations in our study. The mobile phase was acetonitrile-tetrahydrofuran-0.05 M acetic acid (10:10:90, v/v/v), pH 3.75 adjusted with triethylamine, at a flow rate of 0.48 ml min(-1); UV detection was at 245 nm. The analysis time was <11 min. The new SIC method was validated and compared with HPLC. The method was found to be useful for the routine analysis of the active compound ambroxol and the preservatives (methylparaben or benzoic acid) in various pharmaceutical syrups and drops.
Mining sequential patterns for protein fold recognition.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2008-02-01
Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequence. A classifier uses the extracted sequential patterns to classify proteins in the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.
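A brute-force stand-in for sequential pattern support counting is sketched below; it is not cSPADE, but it illustrates the ordered, gapped-subsequence containment notion on which such patterns are based. The toy sequences and thresholds are invented.

```python
from itertools import combinations

def contains_subsequence(sequence, pattern):
    """True if `pattern` occurs in `sequence` as an ordered (possibly gapped)
    subsequence -- the containment notion used by sequential pattern mining."""
    it = iter(sequence)
    return all(symbol in it for symbol in pattern)

def frequent_patterns(sequences, length, min_support):
    """Brute-force stand-in for an SPM algorithm such as cSPADE: candidate
    patterns are drawn from the first sequence, and a pattern is kept when the
    fraction of sequences containing it reaches min_support."""
    candidates = set(combinations(sequences[0], length))
    n = len(sequences)
    support = {p: sum(contains_subsequence(s, p) for s in sequences) / n
               for p in candidates}
    return {p: sup for p, sup in support.items() if sup >= min_support}

# Toy amino-acid-like sequences; mined patterns could then feed a fold classifier.
seqs = ["GAVLIG", "GAILVG", "GAVKIG"]
print(frequent_patterns(seqs, 3, min_support=1.0))
```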
Sequential Injection Analysis for Optimization of Molecular Biology Reactions
Allen, Peter B.; Ellington, Andrew D.
2011-01-01
In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059
NASA Astrophysics Data System (ADS)
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform was created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The elements affecting the 3D reconstruction are discussed, and the overall result of the analysis is concluded according to the prototype of the imaging platform.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
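A paired two one-sided t-test (TOST) of the kind used to test equivalence of manual and automatic dosimetric parameters can be sketched as follows; the equivalence margin `delta` and the example EUD values are hypothetical.

```python
import numpy as np
from scipy import stats

def paired_tost(x_manual, x_auto, delta):
    """Two one-sided t-tests (TOST) for equivalence of paired measurements
    within the margin +/- delta; equivalence is claimed when both one-sided
    p-values fall below the chosen alpha."""
    d = np.asarray(x_auto, float) - np.asarray(x_manual, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + delta) / se          # H0: mean difference <= -delta
    t_upper = (d.mean() - delta) / se          # H0: mean difference >= +delta
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return p_lower, p_upper

# Hypothetical EUD values (Gy) for a handful of ribs, manual vs automatic.
manual = [32.1, 18.4, 25.0, 41.2, 12.7, 28.9]
auto   = [31.8, 18.9, 24.6, 41.5, 13.1, 28.4]
print(paired_tost(manual, auto, delta=2.0))
```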
Tooth segmentation system with intelligent editing for cephalometric analysis
NASA Astrophysics Data System (ADS)
Chen, Shoupu
2015-03-01
Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
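The DICE score used to evaluate the user segmentations is straightforward to compute; a minimal sketch with a synthetic reference and user mask follows.

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """DICE overlap between two binary segmentations:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: a user's GrowCut result vs a reference tooth mask.
ref = np.zeros((64, 64), bool); ref[20:44, 20:44] = True
usr = np.zeros((64, 64), bool); usr[22:46, 21:45] = True
print(round(dice_score(ref, usr), 3))
```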
Bilingualism modulates the white matter structure of language-related pathways.
Hämäläinen, Sini; Sairanen, Viljami; Leminen, Alina; Lehtonen, Minna
2017-05-15
Learning and speaking a second language (L2) may result in profound changes in the human brain. Here, we investigated local structural differences along two language-related white matter trajectories, the arcuate fasciculus and the inferior fronto-occipital fasciculus (IFOF), between early simultaneous bilinguals and late sequential bilinguals. We also examined whether early exposure to two languages might lead to a more bilateral structural organization of the arcuate fasciculus. Fractional anisotropy, mean and radial diffusivities (FA, MD, and RD respectively) were extracted to analyse tract-specific changes. Additionally, global voxel-wise effects were investigated with Tract-Based Spatial Statistics (TBSS). We found that relative to late exposure, early exposure to L2 leads to increased FA along a phonology-related segment of the arcuate fasciculus, but induces no modulations along the IFOF, associated to semantic processing. Late sequential bilingualism, however, was associated with decreased MD along the bilateral IFOF. Our results suggest that early vs. late bilingualism may lead to qualitatively different kind of changes in the structural language-related network. Furthermore, we show that early bilingualism contributes to the structural laterality of the arcuate fasciculus, leading to a more bilateral organization of these perisylvian language-related tracts. Copyright © 2017 Elsevier Inc. All rights reserved.
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting benign and malignant tumours efficiently, accurately and with less computational time, especially for tumour regions of smaller dimensions in computed tomography (CT) images. Region-based segmentation of tumours from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumours from CT images using combined grey and texture features, together with new edge features, and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracies are evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure, the dice metric. From the analysis and performance measures such as segmentation accuracy and dice metric, it is inferred that better segmentation accuracy and a higher dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
Segmental Analysis of Chlorprothixene and Desmethylchlorprothixene in Postmortem Hair.
Günther, Kamilla Nyborg; Johansen, Sys Stybe; Wicktor, Petra; Banner, Jytte; Linnet, Kristian
2018-06-26
Analysis of drugs in hair differs from their analysis in other tissues due to the extended detection window, as well as the opportunity that segmental hair analysis offers for the detection of changes in drug intake over time. The antipsychotic drug chlorprothixene is widely used, but few reports exist on chlorprothixene concentrations in hair. In this study, we analyzed hair segments from 20 deceased psychiatric patients who had undergone chronic chlorprothixene treatment, and we report hair concentrations of chlorprothixene and its metabolite desmethylchlorprothixene. Three to six 1-cm long segments were analyzed per individual, corresponding to ~3-6 months of hair growth before death, depending on the length of the hair. We used a previously published and fully validated liquid chromatography-tandem mass spectrometry method for the hair analysis. The 10th-90th percentiles of chlorprothixene and desmethylchlorprothixene concentrations in all hair segments were 0.05-0.84 ng/mg and 0.06-0.89 ng/mg, respectively, with medians of 0.21 and 0.24 ng/mg, and means of 0.38 and 0.43 ng/mg. The estimated daily dosages ranged from 28 mg/day to 417 mg/day. We found a significant positive correlation between the concentration in hair and the estimated daily doses for both chlorprothixene (P = 0.0016, slope = 0.0044 [ng/mg hair]/[mg/day]) and the metabolite desmethylchlorprothixene (P = 0.0074). Concentrations generally decreased throughout the hair shaft from proximal to distal segments, with an average reduction in concentration from segment 1 to segment 3 of 24% for all cases, indicating that most of the individuals had been compliant with their treatment. We have provided some guidance regarding reference levels for chlorprothixene and desmethylchlorprothixene concentrations in hair from patients undergoing long-term chlorprothixene treatment.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations. This allows for sequential accumulation of matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
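The sequential accumulation idea can be illustrated with a small recursive least-squares sketch in which each new data block is folded into a compact triangular factor by a Householder-based QR update; this is a generic reconstruction, not the paper's specific modal-identification code.

```python
import numpy as np

class SequentialLeastSquares:
    """Sequentially accumulated least squares: new observation blocks are
    folded into a compact triangular factor via QR (Householder) updates,
    so the full data matrix never needs to be stored."""
    def __init__(self, n_params):
        self.R = np.zeros((0, n_params))
        self.qtb = np.zeros(0)

    def update(self, A_new, b_new):
        stacked_A = np.vstack([self.R, A_new])
        stacked_b = np.concatenate([self.qtb, b_new])
        Q, self.R = np.linalg.qr(stacked_A)        # Householder-based QR
        self.qtb = Q.T @ stacked_b
        return self

    def estimate(self):
        return np.linalg.solve(self.R, self.qtb)

# Example: identify two parameters from data arriving in blocks.
rng = np.random.default_rng(1)
theta_true = np.array([1.5, -0.7])
sls = SequentialLeastSquares(2)
for _ in range(5):
    A = rng.standard_normal((20, 2))
    b = A @ theta_true + 0.01 * rng.standard_normal(20)
    sls.update(A, b)
print(sls.estimate())
```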
Bookwalter, Candice A; Venkatesh, Sudhakar K; Eaton, John E; Smyrk, Thomas D; Ehman, Richard L
2018-04-07
To determine correlation of liver stiffness measured by MR Elastography (MRE) with biliary abnormalities on MR Cholangiopancreatography (MRCP) and MRI parenchymal features in patients with primary sclerosing cholangitis (PSC). Fifty-five patients with PSC who underwent MRI of the liver with MRCP and MRE were retrospectively evaluated. Two board-certified abdominal radiologists in agreement reviewed the MRI, MRCP, and MRE images. The biliary tree was evaluated for stricture, dilatation, wall enhancement, and thickening at segmental duct, right main duct, left main duct, and common bile duct levels. Liver parenchyma features including signal intensity on T2W and DWI, and hyperenhancement in arterial, portal venous, and delayed phase were evaluated in nine Couinaud liver segments. Atrophy or hypertrophy of segments, cirrhotic morphology, varices, and splenomegaly were scored as present or absent. Regions of interest were placed in each of the nine segments on stiffness maps wherever available and liver stiffness (LS) was recorded. Mean segmental LS, right lobar (V-VIII), left lobar (I-III, and IVA, IVB), and global LS (average of all segments) were calculated. Spearman rank correlation analysis was performed for significant correlation. Features with significant correlation were then analyzed for significant differences in mean LS. Multiple regression analysis of MRI and MRCP features was performed for significant correlation with elevated LS. A total of 439/495 segments were evaluated and 56 segments not included in MRE slices were excluded for correlation analysis. Mean segmental LS correlated with the presence of strictures (r = 0.18, p < 0.001), T2W hyperintensity (r = 0.38, p < 0.001), DWI hyperintensity (r = 0.30, p < 0.001), and hyperenhancement of segment in all three phases. Mean LS of atrophic and hypertrophic segments were significantly higher than normal segments (7.07 ± 3.6 and 6.67 ± 3.26 vs. 5.1 ± 3.6 kPa, p < 0.001). In multiple regression analysis, only the presence of segmental strictures (p < 0.001), T2W hyperintensity (p = 0.01), and segmental hypertrophy (p < 0.001) were significantly associated with elevated segmental LS. Only left ductal stricture correlated with left lobe LS (r = 0.41, p = 0.018). Global LS correlated significantly with CBD stricture (r = 0.31, p = 0.02), number of segmental strictures (r = 0.28, p = 0.04), splenomegaly (r = 0.56, p < 0.001), and varices (r = 0.58, p < 0.001). In PSC, there is low but positive correlation between segmental LS and segmental duct strictures. Segments with increased LS show T2 hyperintensity, DWI hyperintensity, and post-contrast hyperenhancement. Global liver stiffness shows a moderate correlation with number of segmental strictures and significantly correlates with spleen stiffness, splenomegaly, and varices.
Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao
2018-01-01
After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significant negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
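A sequential (serial) mediation of the form conflict → negative affect → perceived stress → mental health can be estimated with three ordinary regressions and a bootstrap confidence interval for the product of the path coefficients. The sketch below uses simulated data and simple OLS, not the authors' structural equation model.

```python
import numpy as np

def ols_slope(y, X):
    # OLS with intercept; returns the slope coefficients for the columns of X.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

def sequential_indirect_effect(x, m1, m2, y):
    """a*b*c indirect effect for the chain x -> m1 -> m2 -> y, with each
    downstream regression controlling for the upstream variables."""
    a = ols_slope(m1, x)[0]                              # x -> m1
    b = ols_slope(m2, np.column_stack([m1, x]))[0]       # m1 -> m2 | x
    c = ols_slope(y, np.column_stack([m2, m1, x]))[0]    # m2 -> y | m1, x
    return a * b * c

def bootstrap_ci(x, m1, m2, y, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, est = len(x), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        est.append(sequential_indirect_effect(x[idx], m1[idx], m2[idx], y[idx]))
    return np.percentile(est, [2.5, 97.5])

# Simulated data with a true sequential mediation chain.
rng = np.random.default_rng(42)
n = 351
x = rng.standard_normal(n)                 # work-family conflict
m1 = 0.5 * x + rng.standard_normal(n)      # negative affect
m2 = 0.6 * m1 + rng.standard_normal(n)     # perceived stress
y = -0.4 * m2 + rng.standard_normal(n)     # mental health
print(sequential_indirect_effect(x, m1, m2, y), bootstrap_ci(x, m1, m2, y))
```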
2003-09-11
KENNEDY SPACE CENTER, FLA. - Seen from below and through a solid rocket booster segment mockup, Jeff Thon, an SRB mechanic with United Space Alliance, tests the feasibility of a vertical solid rocket booster propellant grain inspection technique. The inspection of segments is required as part of safety analysis.
Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.
Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A
2011-01-01
Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.
Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.
ERIC Educational Resources Information Center
Lay, Robert S.; Maguire, John J.
1983-01-01
Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied are able to outperform with statistical significance the statistical segmentation algorithm, although they perform reasonably well considering their simplicity.
Applications of magnetic resonance image segmentation in neurology
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu
1999-05-01
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of a shorter verification time when analyzing spectra with low total counts, especially for complex radionuclide mixtures. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays consistent with the true parameters of a LaBr3(Ce) detector were generated with an event-sequence generator using Monte Carlo sampling to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance of this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
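A drastically simplified single-channel version of a sequential Bayesian detector is sketched below: the posterior probability that a source is present is updated after every photon arrival, treating inter-arrival times as exponential under background-only and background-plus-source rates. This illustrates only the sequential update, not Candy's full event-mode processor; the rates are invented.

```python
import numpy as np

def sequential_source_posterior(interarrivals, lam_b, lam_s, prior=0.5):
    """Sequentially updated posterior probability that a source is present,
    treating photon inter-arrival times as exponential with rate lam_b
    (background only) or lam_b + lam_s (background + source)."""
    def exp_pdf(t, lam):
        return lam * np.exp(-lam * t)

    p = prior
    history = []
    for dt in interarrivals:
        l1 = exp_pdf(dt, lam_b + lam_s)   # likelihood under "source present"
        l0 = exp_pdf(dt, lam_b)           # likelihood under "background only"
        p = p * l1 / (p * l1 + (1 - p) * l0)
        history.append(p)
    return np.array(history)

# Simulated event stream: true rate includes a weak source on top of background.
rng = np.random.default_rng(3)
dts = rng.exponential(1.0 / (5.0 + 2.0), size=200)   # lam_b=5, lam_s=2 (counts/s)
posterior = sequential_source_posterior(dts, lam_b=5.0, lam_s=2.0)
print(posterior[-1])
```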
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
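The flavour of a smoothness-regularized, power-method-like rank-one dictionary update can be conveyed with the following sketch, in which the temporal atom is obtained by solving a ridge-type system with a roughness penalty at every iteration. It is a simplified variant for illustration, not either of the two algorithms proposed in the paper.

```python
import numpy as np

def second_difference_matrix(n):
    # Discrete second-difference (roughness) operator used as the smoothing penalty.
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = -1.0, 2.0, -1.0
    return L

def smooth_rank_one_update(E, lam=1.0, n_iter=20, seed=0):
    """Power-method-style alternating update for a rank-one approximation
    E ~ d x^T in which the atom d (a temporal course) is smoothed by solving
    a ridge-type system with a roughness penalty at every iteration."""
    n, m = E.shape
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(n)
    L = second_difference_matrix(n)
    A = np.eye(n) + lam * L.T @ L
    for _ in range(n_iter):
        x = E.T @ d                         # sparse-code / loading update
        d = np.linalg.solve(A, E @ x)       # regularized (smoothed) atom update
        d /= np.linalg.norm(d)              # keep the atom unit-norm
    return d, E.T @ d

# Toy fMRI-like residual: one smooth temporal component plus noise.
t = np.linspace(0, 2 * np.pi, 120)
E = np.outer(np.sin(t), np.random.default_rng(1).standard_normal(50)) \
    + 0.1 * np.random.default_rng(2).standard_normal((120, 50))
atom, codes = smooth_rank_one_update(E, lam=5.0)
```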
An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.
Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong
2014-08-01
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic image of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
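A stripped-down version of such a pipeline (global Otsu threshold followed by a distance-transform watershed to split touching nuclei) can be written with scikit-image as below; the paper's local Otsu threshold, Bayesian-network screening and two-step refinement are omitted, and the synthetic test image is invented.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, measure, segmentation

def segment_nuclei(image):
    """Simplified nuclei pipeline: Otsu thresholding to separate nuclei from
    background, then a distance-transform watershed to split touching
    (clustered) nuclei."""
    mask = image > filters.threshold_otsu(image)
    distance = ndi.distance_transform_edt(mask)
    # One marker per local maximum of the distance map.
    coords = feature.peak_local_max(distance, min_distance=5,
                                    labels=measure.label(mask))
    markers = np.zeros_like(image, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return segmentation.watershed(-distance, markers, mask=mask)

# Usage on a synthetic image with two overlapping blobs.
yy, xx = np.mgrid[0:100, 0:100]
img = np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0) \
    + np.exp(-((yy - 60) ** 2 + (xx - 58) ** 2) / 200.0)
labels = segment_nuclei(img)
print(labels.max(), "nuclei found")
```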
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation
1992-01-30
A Real Time System for Multi-sensor Image Analysis through Pyramidal Segmentation / L. Rudin, S. Osher, G. Koepfler, J.-M. Morel. ... Experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown a great practical potential ...
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes, in the long run, only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to resource savings of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371
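The kind of saving summarized above can be illustrated with a toy simulation of a two-look design; the single interim analysis at half the sample and the Pocock-style nominal alpha of 0.0294 per look (for an overall two-sided alpha of 0.05 with two equally spaced looks) are illustrative assumptions, not the authors' simulation protocol.

```python
# Hedged sketch: average number of experimental units consumed by a
# two-look group sequential design versus the fixed (block) design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
d, n, look_alpha = 1.0, 18, 0.0294     # effect size, per-group n, Pocock-style bound
used = []

for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)        # control group
    b = rng.normal(d, 1.0, n)          # treated group
    h = n // 2                         # interim look after half the units per group
    if stats.ttest_ind(a[:h], b[:h]).pvalue < look_alpha:
        used.append(2 * h)             # stop early for efficacy
    else:
        used.append(2 * n)             # continue to the planned end

print(f"mean units consumed: {np.mean(used):.1f} of {2 * n}")
```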
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models, and decision functions are discussed along with the first results of our research.
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
The aim of this study is to examine the influence of finite surface size and finite simulation time on the packing fraction estimated from random sequential adsorption simulations. Of particular interest is providing guidance on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous, flat surfaces of different sizes.
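A minimal random sequential adsorption sketch for disks on a periodic square surface, useful for reproducing the kind of finite-size/finite-time study described above; the surface size, disk radius, and number of attempts are illustrative, not the study's settings.

```python
# Hedged sketch: RSA of equal disks on an L x L periodic surface; the
# packing fraction grows toward saturation as attempts accumulate.
import numpy as np

def periodic_dist(p, q, L):
    d = np.abs(p - q)
    d = np.minimum(d, L - d)               # minimum-image convention
    return np.hypot(d[0], d[1])

def rsa_disks(L=30.0, r=1.0, attempts=20_000, seed=0):
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(attempts):
        p = rng.uniform(0.0, L, size=2)
        if all(periodic_dist(p, q, L) >= 2.0 * r for q in centers):
            centers.append(p)              # accepted: no overlap with placed disks
    packing_fraction = len(centers) * np.pi * r**2 / L**2
    return len(centers), packing_fraction

print(rsa_disks())
```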
SSME propellant path leak detection real-time
NASA Technical Reports Server (NTRS)
Crawford, R. A.; Smith, L. M.
1994-01-01
Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.
Ganapathy, Kavina; Sowmithra, Sowmithra; Bhonde, Ramesh; Datta, Indrani
2016-07-16
The neuron-glia ratio is of prime importance for maintaining the physiological homeostasis of neuronal and glial cells, and especially crucial for dopaminergic neurons because a reduction in glial density has been reported in postmortem reports of brains affected by Parkinson's disease. We thus aimed at developing an in vitro midbrain culture which would replicate a similar neuron-glia ratio to that in in vivo adult midbrain while containing a similar number of dopaminergic neurons. A sequential culture technique was adopted to achieve this. Neural progenitors (NPs) were generated by the hanging-drop method and propagated as 3D neurospheres followed by the derivation of outgrowth from these neurospheres on a chosen extracellular matrix. The highest proliferation was observed in neurospheres from day in vitro (DIV) 5 through MTT and FACS analysis of Ki67 expression. FACS analysis using annexin/propidium iodide showed an increase in the apoptotic population from DIV 8. DIV 5 neurospheres were therefore selected for deriving the differentiated outgrowth of midbrain on a poly-L-lysine-coated surface. Quantitative RT-PCR showed comparable gene expressions of the mature neuronal marker β-tubulin III, glial marker GFAP and dopaminergic marker tyrosine hydroxylase (TH) as compared to in vivo adult rat midbrain. The FACS analysis showed a similar neuron-glia ratio obtained by the sequential culture in comparison to adult rat midbrain. The yield of β-tubulin III and TH was distinctly higher in the sequential culture in comparison to 2D culture, which showed a higher yield of GFAP immunopositive cells. Functional characterization indicated that both the constitutive and inducible (KCl and ATP) release of dopamine was distinctly higher in the sequential culture than the 2D culture. Thus, the sequential culture technique succeeded in the initial enrichment of NPs in 3D neurospheres, which in turn resulted in an optimal attainment of the neuron-glia ratio on outgrowth culture from these neurospheres. © 2016 S. Karger AG, Basel.
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Morrish, S.; Marshall, J. S.
2013-12-01
The Nicoya Peninsula lies within the Costa Rican forearc where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50yr recurrence interval) and pronounced crustal deformation (0.1-2.0m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10m contour digital topographic data (CENIGA- Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/ geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean; the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario which range in area from 200 km2 to 350 km2. The trunk rivers follow major lineaments that define morphotectonic segment boundaries and in turn their drainage basins are bisected by them. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation. Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to differential uplift. Conducting this morphotectonic analysis of the Nicoya Peninsula will provide further constraints on rates of segment uplift, location of segment boundaries, and advance the understanding of the long term deformation of the region in relation to subduction.
Signorelli, Mauro; Lissoni, Andrea Alberto; De Ponti, Elena; Grassi, Tommaso; Ponti, Serena
2015-01-01
Objective To evaluate the impact of sequential chemoradiotherapy in high-risk endometrial cancer (EC). Methods Two hundred fifty-four women with stage IB grade 3, II, and III EC (2009 FIGO staging) were included in this retrospective study. Results Stage I, II, and III disease accounted for 24%, 28.7%, and 47.3% of cases, respectively. Grade 3 tumors accounted for 53.2%, and 71.3% of cases had deep myometrial invasion. One hundred sixty-five women (65%) underwent pelvic (+/- aortic) lymphadenectomy and 58 (22.8%) had nodal metastases. Ninety-eight women (38.6%) underwent radiotherapy, 59 (23.2%) chemotherapy, 42 (16.5%) sequential chemoradiotherapy, and 55 (21.7%) were only observed. After a median follow-up of 101 months, 78 women (30.7%) relapsed and 91 women (35.8%) died. Sequential chemoradiotherapy improved survival rates in women who did not undergo nodal evaluation (disease-free survival [DFS], p=0.040; overall survival [OS], p=0.024) or pelvic (+/- aortic) lymphadenectomy (DFS, p=0.008; OS, p=0.021). Sequential chemoradiotherapy improved both DFS (p=0.015) and OS (p=0.014) in stage III, while only a trend was found for DFS (p=0.210) and OS (p=0.102) in stage I-II EC. In the multivariate analysis, only age (≤65 years) and sequential chemoradiotherapy were statistically related to prognosis. Conclusion Sequential chemoradiotherapy improves survival rates in high-risk EC compared with chemotherapy or radiotherapy alone, in particular in stage III. PMID:26197768
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time-prohibitive, given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
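As a rough illustration of the vessel-enhancement step (not the authors' validated pipeline), a multiscale Frangi vesselness filter followed by hysteresis thresholding preserves weak but connected segments; the sigma range and thresholds below are illustrative assumptions.

```python
# Hedged sketch: multiscale vesselness enhancement plus hysteresis
# thresholding, applicable to 2D slices or 3D angiographic volumes.
import numpy as np
from skimage import filters

def enhance_vessels(image, sigmas=(1, 2, 3, 4), low=0.01, high=0.05):
    vesselness = filters.frangi(image, sigmas=sigmas, black_ridges=False)
    mask = filters.apply_hysteresis_threshold(vesselness, low, high)
    return vesselness, mask
```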
Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki
2017-02-01
This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for each of the whole body and the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated to the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children including overweight individuals, although the application for estimating arm FFM in overweight individuals requires a certain modification.
Individual bone structure segmentation and labeling from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Xie, Yiting; Reeves, Anthony P.
2017-03-01
The segmentation and labeling of the individual bones serve as the first step toward fully automated measurement of skeletal characteristics and detection of abnormalities such as skeletal deformities, osteoporosis, and vertebral fractures. Moreover, the identified landmarks on the segmented bone structures can potentially provide a relatively reliable location reference for other non-rigid human organs, such as the breast, heart and lung, thereby facilitating the corresponding image analysis and registration. A fully automated anatomy-directed framework for the segmentation and labeling of the individual bone structures from low-dose chest CT is presented in this paper. The proposed system consists of four main stages: First, both clavicles are segmented and labeled by fitting a piecewise cylindrical envelope. Second, the sternum is segmented under the spatial constraints provided by the segmented clavicles. Third, all ribs are segmented and labeled based on 3D region growing within the volume of interest defined with reference to the spinal canal centerline and lungs. Fourth, the individual thoracic vertebrae are segmented and labeled by image-intensity-based analysis in the spatial region constrained by the previously segmented bone structures. The system performance was validated with 1270 low-dose chest CT scans through visual evaluation. Satisfactory performance was obtained in 97.1% of cases for the clavicle segmentation and labeling, 97.3% for the sternum segmentation, 97.2% for the rib segmentation, 94.2% for the rib labeling, 92.4% for the vertebra segmentation, and 89.9% for the vertebra labeling.
Long, Xiangbao; Miró, Manuel; Jensen, Rikard; Hansen, Elo Harald
2006-10-01
A highly selective procedure is proposed for the determination of ultra-trace level concentrations of nickel in saline aqueous matrices, exploiting a micro-sequential injection Lab-On-Valve (µSI-LOV) sample pretreatment protocol comprising bead-injection separation/pre-concentration and detection by electrothermal atomic absorption spectrometry (ETAAS). Based on the dimethylglyoxime (DMG) reaction used for nickel analysis, the sample, as contained in a pH 9.0 buffer, is, after on-line merging with the chelating reagent, transported to a reaction coil attached to one of the external ports of the LOV to assure sufficient reaction time for the formation of the Ni(DMG)2 chelate. The non-ionic coordination compound is then collected in a renewable micro-column packed with a reversed-phase copolymeric sorbent [namely, poly(divinylbenzene-co-N-vinylpyrrolidone)] containing a balanced ratio of hydrophilic and lipophilic monomers. Following elution by a 50-µL methanol plug in an air-segmented modality, the nickel is finally quantified by ETAAS. Under the optimized conditions and for a sample volume of 1.8 mL, a retention efficiency of 70% and an enrichment factor of 25 were obtained. The proposed methodology showed a high tolerance to the alkaline earth matrix elements commonly encountered in environmental waters, that is, calcium and magnesium, and was successfully applied to the determination of nickel in an NIST standard reference material (NIST 1640-Trace elements in natural water), household tap water of high hardness, and local seawater. Satisfactory recoveries were achieved for all spiked environmental water samples, with maximum deviations of 6%. The experimental results for the standard reference material were not statistically different from the certified value at a significance level of 0.05.
NASA Astrophysics Data System (ADS)
Ikeda, M.; Toda, S.; Nishizaka, N.; Onishi, K.; Suzuki, S.
2015-12-01
Rupture patterns of a long fault system are controlled by spatial heterogeneity of fault strength and stress associated with geometrical characteristics and stress perturbation history. The mechanical process behind sequential ruptures and multiple simultaneous ruptures, characteristic of long faults such as the North Anatolian fault, governs the size and frequency of large earthquakes. Here we introduce a case from southwest Japan and explore what controls rupture initiation, sequential ruptures and fault branching on a long fault system. The Median Tectonic Line active fault zone (hereinafter MTL) is the longest and most active fault in Japan. Based on historical accounts, a series of M ≥ 7 earthquakes occurred on at least a 300-km-long portion of the MTL in 1596. On September 1, the first event occurred on the Kawakami fault segment in central Shikoku, and the subsequent events occurred further west. Then on September 5, another rupture initiated between central and east Shikoku and propagated toward the Rokko-Awaji fault zone to Kobe, a northern branch of the MTL, instead of the eastern main extent of the MTL. Another rupture eventually extended to near Kyoto. To reproduce this progressive failure, we applied two numerical models: one based on Coulomb stress transfer, the other a slip-tendency analysis under the tectonic stress. We found that Coulomb stress imparted by historical ruptures has triggered the subsequent ruptures nearby. However, stress transfer does not explain the beginning of the sequence or the rupture directivities. Instead, the calculated slip-tendency values are highly variable along the MTL: high and low seismic potential in west and east Shikoku, respectively. The initiation point of the 1596 progressive failure is located near the boundary in the slip-tendency values. Furthermore, the slip-tendency on the Rokko-Awaji fault zone is far higher than that of the MTL in Wakayama, which may explain the rupture directivity toward Kobe-Kyoto.
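For reference, the Coulomb stress transfer calculations mentioned above conventionally evaluate the change in Coulomb failure stress resolved on a receiver fault; the standard definition (quoted from the general literature, not from this abstract) is

```latex
\Delta \mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n
```

where \Delta\tau is the change in shear stress in the slip direction, \Delta\sigma_n is the change in normal stress (positive for unclamping), and \mu' is the effective friction coefficient; a positive \Delta\mathrm{CFS} brings the receiver fault closer to failure.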
Novel dehydrins lacking complete K-segments in Pinaceae. The exception rather than the rule
Perdiguero, Pedro; Collada, Carmen; Soto, Álvaro
2014-01-01
Dehydrins are thought to play an essential role in plant response, acclimation and tolerance to different abiotic stresses, such as cold and drought. These proteins contain conserved and repeated segments in their amino acid sequence, which are used for their classification. Thus, dehydrins from angiosperms present different repetitions of the segments Y, S, and K, while gymnosperm dehydrins show A, E, S, and K segments. The only fragment present in all the dehydrins described to date is the K-segment. Several studies suggest the K-segment is involved in key protective functions during dehydration stress, mainly stabilizing membranes. In this work, we describe for the first time two Pinus pinaster proteins with truncated K-segments and a third one completely lacking K-segments, but whose sequence homology leads us to consider them still as dehydrins. qRT-PCR expression analysis shows a significant induction of these dehydrins during a severe and prolonged drought stress. By in silico analysis we confirmed the presence of these dehydrins in other Pinaceae species, breaking the convention regarding the compulsory presence of K-segments in these proteins. The mode of action of these unusual dehydrins remains unknown. PMID:25520734
The Segmentation Problem in the Study of Impromptu Speech.
ERIC Educational Resources Information Center
Loman, Bengt
A fundamental problem in the study of spontaneous speech is how to segment it for analysis. The segments should be relevant for the study of linguistic structures, speech planning, speech production, or communication strategies. Operational rules for segmentation should consider a wide variety of criteria and be hierarchically ordered. This is…
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin
2015-03-01
We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to the micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants from micro-CT analysis using a region-based segmentation method.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis utilizes re-use of commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Technical Reports Server (NTRS)
Bebis, George
2013-01-01
Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
Proportional crosstalk correction for the segmented clover at iThemba LABS
NASA Astrophysics Data System (ADS)
Bucher, T. D.; Noncolela, S. P.; Lawrie, E. A.; Dinoko, T. R. S.; Easton, J. L.; Erasmus, N.; Lawrie, J. J.; Mthembu, S. H.; Mtshali, W. X.; Shirinda, O.; Orce, J. N.
2017-11-01
Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. Modern γ-ray spectrometers, like AGATA and GRETINA, are now built from new-generation segmented germanium detectors. These most advanced detectors are able to reconstruct the trajectory of a γ-ray inside the detector. They are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured energies on the segments for two- and higher-fold events. It affects all experiments that rely on the recorded segment energies. Furthermore, incorrectly recorded energies on the segments cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies, independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained.
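A minimal sketch of how a proportional crosstalk correction of this kind can be applied: measured segment energies are modeled as a linear mix of the true deposited energies, so correction amounts to solving a small linear system with the characterized crosstalk matrix. The matrix values and energies below are illustrative assumptions, not the iThemba LABS characterization.

```python
# Hedged sketch: E_meas = C @ E_true, so E_true = solve(C, E_meas).
import numpy as np

n_seg = 4
C = np.eye(n_seg)
C[~np.eye(n_seg, dtype=bool)] = -0.005   # assumed proportional crosstalk coefficients

e_true = np.array([500.0, 162.0, 0.0, 0.0])   # keV, a two-fold event (illustrative)
e_meas = C @ e_true                            # what the segments would record
e_corr = np.linalg.solve(C, e_meas)            # crosstalk-corrected segment energies

print(e_meas)          # non-interacting segments show small negative energies
print(e_corr, e_corr.sum())
```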
NASA Astrophysics Data System (ADS)
Varga, T.; McKinney, A. L.; Bingham, E.; Handakumbura, P. P.; Jansson, C.
2017-12-01
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications to farming and thus human food supply. X-ray computed tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. Selected Brachypodium distachyon phenotypes were grown in both natural and artificial soil mixes. The specimens were imaged by XCT, and the root architectures were extracted from the data using three different software-based methods: RooTrak, ImageJ-based WEKA segmentation, and the segmentation feature in VG Studio MAX. The 3D root image was successfully segmented at 30 µm resolution by all three methods. In this presentation, ease of segmentation and the accuracy of the extracted quantitative information (root volume and surface area) will be compared between soil types and segmentation methods. The best route to easy and accurate segmentation and root analysis will be highlighted.
Gruber, Ranit; Levitt, Michael; Horovitz, Amnon
2017-01-01
Knowing the mechanism of allosteric switching is important for understanding how molecular machines work. The CCT/TRiC chaperonin nanomachine undergoes ATP-driven conformational changes that are crucial for its folding function. Here, we demonstrate that insight into its allosteric mechanism of ATP hydrolysis can be achieved by Arrhenius analysis. Our results show that ATP hydrolysis triggers sequential "conformational waves." They also suggest that these waves start from subunits CCT6 and CCT8 (or CCT3 and CCT6) and proceed clockwise and counterclockwise, respectively. PMID:28461478
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis, but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation-based analysis. We therefore investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel-level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two-point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims about the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
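A hedged sketch of the greyscale-to-porosity mapping described above: fit a two-component Gaussian mixture to the grey levels (pore and solid modes) and interpolate each voxel linearly between the two component means; the two-component choice and linear mapping are an illustrative reading of the approach, not the authors' code.

```python
# Hedged sketch: per-voxel porosity estimate from grey levels via a
# two-component Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

def voxel_porosity(volume):
    g = volume.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(g)
    lo, hi = np.sort(gmm.means_.ravel())          # pore mode, solid mode
    return np.clip((hi - volume) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(4)                    # synthetic test volume
vol = np.concatenate([rng.normal(60, 8, 5000),
                      rng.normal(160, 10, 15000)]).reshape(20, 25, 40)
print(voxel_porosity(vol).mean())                 # roughly 0.25 for this mix
```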
Validity and reliability of naturalistic driving scene categorization Judgments from crowdsourcing.
Cabrall, Christopher D D; Lu, Zhenji; Kyriakidis, Miltos; Manca, Laura; Dijksterhuis, Chris; Happee, Riender; de Winter, Joost
2018-05-01
A common challenge with processing naturalistic driving data is that humans may need to categorize great volumes of recorded visual information. By means of the online platform CrowdFlower, we investigated the potential of crowdsourcing to categorize driving scene features (i.e., presence of other road users, straight road segments, etc.) at greater scale than a single person or a small team of researchers would be capable of. In total, 200 workers from 46 different countries participated in 1.5 days. Validity and reliability were examined, both with and without embedding researcher-generated control questions via the CrowdFlower mechanism known as Gold Test Questions (GTQs). By employing GTQs, we found significantly more valid (accurate) and reliable (consistent) identification of driving scene items from external workers. Specifically, in a small-scale CrowdFlower Job of 48 three-second video segments, an accuracy (i.e., relative to the ratings of a confederate researcher) of 91% on items was found with GTQs compared to 78% without. A difference in bias was found, where without GTQs, external workers returned more false positives than with GTQs. In a larger-scale CrowdFlower Job making exclusive use of GTQs, 12,862 three-second video segments were released for annotation. Because it was infeasible (and self-defeating) to check the accuracy of each categorization at this scale, a random subset of 1012 categorizations was validated and returned similar levels of accuracy (95%). In the small-scale Job, where full video segments were repeated in triplicate, the percentage of unanimous agreement on the items was found to be significantly more consistent when using GTQs (90%) than without them (65%). Additionally, in the larger-scale Job (where a single second of a video segment was overlapped by ratings of three sequentially neighboring segments), a mean unanimity of 94% was obtained with validated-as-correct ratings and 91% with non-validated ratings. Because the video segments overlapped in full for the small-scale Job, and in part for the larger-scale Job, it should be noted that the reliability reported here may not be directly comparable. Nonetheless, these results are indicative of high levels of obtained rating reliability. Overall, our results provide compelling evidence that CrowdFlower, via use of GTQs, is able to yield more accurate and consistent crowdsourced categorizations of naturalistic driving scene contents than when used without such a control mechanism. Such annotations in such short periods of time present a potentially powerful resource in driving research and driving automation development. Copyright © 2017 Elsevier Ltd. All rights reserved.
2003-09-11
KENNEDY SPACE CENTER, FLA. - Jeff Thon, an SRB mechanic with United Space Alliance, is fitted with a harness to test a vertical solid rocket booster propellant grain inspection technique. Thon will be lowered inside a mockup of two segments of the SRBs. The inspection of segments is required as part of safety analysis.
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias
2017-04-01
Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. Therefore the purposes of the study were to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR)-images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR-images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly comparing the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimation of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
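For reference, the ellipsoid approximation referred to above is usually computed as pi/6 x length x width x depth from the three measured kidney diameters, which the study then scales by 0.85; the exact form of the underlying ellipsoid formula and the dimensions below are illustrative assumptions, not study data.

```python
# Hedged sketch: modified ellipsoid estimate of kidney volume.
import math

def modified_ellipsoid_volume(length_cm, width_cm, depth_cm, factor=0.85):
    ellipsoid_ml = math.pi / 6.0 * length_cm * width_cm * depth_cm  # cm^3 == ml
    return factor * ellipsoid_ml

print(round(modified_ellipsoid_volume(11.0, 5.0, 4.5), 1))  # ~110 ml for one kidney
```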
2014-01-01
Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
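A hedged sketch of the colour-plus-intensity idea described above: convert the H&E image to L*a*b*, mix a colour channel with lightness, threshold, and clean up morphologically. The channel weighting, threshold choice, and structuring-element size are illustrative assumptions rather than the paper's tuned values.

```python
# Hedged sketch: Lab-based enhancement, Otsu threshold and morphological
# clean-up for epidermis-like regions in an RGB H&E image.
import numpy as np
from skimage import color, filters, morphology

def segment_epidermis(rgb):
    lab = color.rgb2lab(rgb)
    enhanced = lab[..., 1] - 0.5 * lab[..., 0]     # a* minus weighted L* (illustrative mix)
    mask = enhanced > filters.threshold_otsu(enhanced)
    mask = morphology.remove_small_objects(mask, min_size=500)
    mask = morphology.binary_closing(mask, morphology.disk(5))
    return mask
```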
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.
Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data
NASA Astrophysics Data System (ADS)
Engel, Karin; Brechmann, Andr'e.; Toennies, Klaus
The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.
Analysis of radially cracked ring segments subject to forces and couples
NASA Technical Reports Server (NTRS)
Gross, B.; Srawley, J. E.
1977-01-01
Results of planar boundary collocation analysis are given for ring segment (C-shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5 and ratios of crack length to segment width in the range 0.1 to 0.8.
Analysis of radially cracked ring segments subject to forces and couples
NASA Technical Reports Server (NTRS)
Gross, B.; Srawley, J. E.
1975-01-01
Results of planar boundary collocation analysis are given for ring segment (C shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
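For context, the regressor formulation referred to above writes the equations of motion as linear in the stacked vector of segment inertial parameters; a standard identification form (stated for illustration, not quoted from the paper) is

```latex
\boldsymbol{\tau} \;=\; \mathbf{Y}\!\left(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\right)\boldsymbol{\phi}
```

where \boldsymbol{\tau} collects the ground reaction and joint moments, \mathbf{Y} is the regressor matrix built from the measured kinematics, and \boldsymbol{\phi} stacks each segment's mass, first moments of mass, and inertia tensor components (10 parameters per segment, hence 150 for a 15-segment model). Because the model is linear in \boldsymbol{\phi}, the sensitivity of any computed moment to a given parameter reduces to the corresponding regressor column.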
Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin
2017-12-01
Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
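As a point of reference for the segment-length measure described above, frame-to-frame lengths are typically converted to strain as the fractional change relative to a reference (for example end-diastolic) frame; a minimal sketch under that assumption, not the authors' exact post-processing:

```python
# Hedged sketch: Lagrangian-style strain from per-frame segment lengths,
# relative to the first (reference) frame; negative values mean shortening.
import numpy as np

def segment_strain(lengths_mm):
    L = np.asarray(lengths_mm, dtype=float)
    return (L - L[0]) / L[0]

print(segment_strain([60.0, 57.0, 52.5, 51.0]))   # illustrative lengths
```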
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. The purpose of this work was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72 respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan
2010-01-07
Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used and to consider using more than one map to ensure that different aspects of the process are captured.
Cooperativity and modularity in protein folding
Sasai, Masaki; Chikenji, George; Terada, Tomoki P.
2016-01-01
A simple statistical mechanical model proposed by Wako and Saitô has explained aspects of protein folding surprisingly well. This model was systematically applied to multiple proteins by Muñoz and Eaton and has since been referred to as the Wako-Saitô-Muñoz-Eaton (WSME) model. The success of the WSME model in explaining the folding of many proteins has verified the hypothesis that folding is dominated by native interactions, which make the energy landscape globally biased toward the native conformation. Using the WSME and other related models, Saitô emphasized the importance of the hierarchical pathway in protein folding; folding starts with the creation of contiguous segments having a native-like configuration and proceeds by growth and coalescence of these segments. The Φ-values calculated for barnase with the WSME model suggested that the segments contributing to the folding nucleus are similar to the structural modules defined by the pattern of native atomic contacts. The WSME model was extended to explain the folding of multi-domain proteins having a complex topology, which opened the way to a comprehensive understanding of the folding process of multi-domain proteins. The WSME model was also extended to describe allosteric transitions, indicating that allosteric structural movement does not occur as a deterministic sequential change between two conformations but as a stochastic diffusive motion over a dynamically changing energy landscape. The statistical mechanical viewpoint on folding, as highlighted by the WSME model, has been revisited in the context of modern methods and ideas, and will continue to provide insights into the equilibrium and dynamical features of proteins. PMID:28409080
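To make the segment-based energetics of the WSME model concrete, here is a toy brute-force sketch for a very short chain: a native contact contributes energy only when every residue between the contacting pair is natively ordered. The chain length, contact list, energies, and entropy cost are invented for illustration, and practical applications use transfer-matrix or recursive evaluations rather than enumeration; this is not the authors' code.

import itertools
import numpy as np

# Toy WSME-style model: binary variables m_k (1 = natively ordered residue).
# A native contact (i, j) contributes eps only if m_i ... m_j are all 1.
N = 6                                    # chain length (toy size)
contacts = [(0, 2), (1, 4), (3, 5)]      # hypothetical native contacts
eps = -2.0                               # contact energy (arbitrary units)
ds = 1.0                                 # entropic cost per ordered residue
kT = 1.0

Z = 0.0
p_native = 0.0
for m in itertools.product([0, 1], repeat=N):
    energy = sum(eps for (i, j) in contacts if all(m[i:j + 1]))
    free_energy = energy + kT * ds * sum(m)   # entropic cost T*dS of ordering
    w = np.exp(-free_energy / kT)
    Z += w
    if all(m):                                # fully native microstate
        p_native += w

print(f"Population of the fully native state: {p_native / Z:.3f}")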
ERIC Educational Resources Information Center
Prinzie, P.; Onghena, P.; Hellinckx, W.
2005-01-01
Cohort-sequential latent growth modeling was used to analyze longitudinal data for children's externalizing behavior from four overlapping age cohorts (4, 5, 6, and 7 years at first assessment) measured at three annual time points. The data included mother and father ratings on the Child Behavior Checklist and the Five-Factor Personality Inventory…
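As a schematic of the cohort-sequential (accelerated longitudinal) design described above, the sketch below lays out how four overlapping cohorts measured at three annual waves jointly cover a wider age range than any single cohort. The layout mirrors the ages given in the abstract, but the code is purely illustrative and not part of the study.

# Cohort-sequential layout: four cohorts aged 4-7 at wave 1, each measured
# at three annual waves, jointly spanning ages 4-9.
cohort_start_ages = [4, 5, 6, 7]
n_waves = 3

coverage = {}
for start_age in cohort_start_ages:
    coverage[f"cohort starting at {start_age}"] = [start_age + wave for wave in range(n_waves)]

for cohort, ages in coverage.items():
    print(f"{cohort}: observed at ages {ages}")

all_ages = sorted({age for ages in coverage.values() for age in ages})
print(f"Combined age span covered by the design: {all_ages}")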
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
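The pipeline above combines per-pixel texture features with a soft clustering step. As a rough, simplified stand-in (not the authors' rough-fuzzy method), the sketch below clusters pixels of a synthetic image with a minimal fuzzy c-means loop, using intensity plus local standard deviation in place of the dyadic wavelet scale-space features; all names and parameters are hypothetical.

import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means on feature vectors X (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))   # membership matrix
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Synthetic "MR slice": intensity plus a simple local-texture feature
# (local standard deviation) as a stand-in for wavelet scale-space features.
image = np.random.rand(64, 64)
local_mean = uniform_filter(image, size=5)
local_sq_mean = uniform_filter(image ** 2, size=5)
local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0))

features = np.stack([image.ravel(), local_std.ravel()], axis=1)
memberships, _ = fuzzy_c_means(features, n_clusters=3)
labels = memberships.argmax(axis=1).reshape(image.shape)   # crisp tissue map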
NASA Technical Reports Server (NTRS)
Trenchard, M. H. (Principal Investigator)
1980-01-01
Procedures and techniques for analyzing meteorological conditions at segments during the growing season were developed for the U.S./Canada Wheat and Barley Exploratory Experiment. The main product and analysis tool is the segment-level climagraph, which depicts the temporal evolution of meteorological variables for the current year compared with climatological normals. The variable values for the segment are estimates derived through objective analysis of values obtained at first-order stations in the region. The procedures and products documented represent a baseline for future Foreign Commodity Production Forecasting experiments.
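The abstract describes estimating segment-level variables by objective analysis of surrounding station observations. Below is a minimal sketch of one common interpolation scheme of this general kind, inverse-distance weighting with hypothetical station coordinates and temperatures; it is illustrative only and not the documented procedure.

import numpy as np

def idw_estimate(station_xy, station_values, target_xy, power=2.0):
    """Inverse-distance-weighted estimate at a target location."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):
        return float(station_values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * station_values) / np.sum(w))

# Hypothetical first-order stations (x, y in km) and observed daily max temps (C).
stations = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 40.0]])
temps = np.array([27.5, 25.1, 26.3])
segment_center = np.array([25.0, 20.0])

print(f"Estimated segment temperature: {idw_estimate(stations, temps, segment_center):.1f} C")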
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
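The second algorithm described above extracts regions adaptively from local histograms. The sketch below illustrates the general idea under simplifying assumptions: each window is thresholded using Otsu's criterion computed from its own histogram. The implementation, window size, and bin count are hypothetical and not taken from the paper.

import numpy as np

def otsu_threshold(values, n_bins=64):
    """Otsu's threshold computed from a histogram of the given values."""
    hist, edges = np.histogram(values, bins=n_bins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * centers[:k]).sum() / w0
        mu1 = (hist[k:] * centers[k:]).sum() / w1
        between = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, centers[k]
    return best_t

def windowed_segmentation(image, window=32):
    """Threshold each window using its own local histogram."""
    out = np.zeros_like(image, dtype=bool)
    for r in range(0, image.shape[0], window):
        for c in range(0, image.shape[1], window):
            patch = image[r:r + window, c:c + window]
            out[r:r + window, c:c + window] = patch > otsu_threshold(patch.ravel())
    return out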
Jung, Yoon Suk; Park, Chan Hyuk; Park, Jung Ho; Nam, Eunwoo; Lee, Hang Lak
2017-08-01
The efficacy of Helicobacter pylori eradication regimens may depend on the country where the studies were performed because of differences in antibiotic resistance. We aimed to analyze the efficacy of H. pylori eradication regimens in Korea, where the clarithromycin resistance rate is high. We searched for all relevant randomized controlled trials published until November 2016 that investigated the efficacy of H. pylori eradication therapies in Korea. A network meta-analysis was performed to calculate the direct and indirect estimates of efficacy among the eradication regimens. Forty-three studies were identified through a systematic review, of which 34 studies, published since 2005, were included in the meta-analysis. Among the 21 included regimens, quinolone-containing sequential therapy for 14 days (ST-Q-14) showed the highest eradication rate (91.4% [95% confidence interval [CI], 86.9%-94.4%] in the intention-to-treat [ITT] analysis). The eradication rates of the conventional triple therapy for 7 days, standard sequential therapy for 10 days, hybrid therapy for 10-14 days, and concomitant therapy for 10-14 days were 71.1% (95% CI, 68.3%-73.7%), 76.2% (95% CI, 72.8%-79.3%), 79.4% (95% CI, 75.5%-82.8%), and 78.3% (95% CI, 75.3%-80.9%), respectively, in the ITT analysis. In the network meta-analysis, ST-Q-14 showed better comparative efficacy than the conventional triple therapy, standard sequential therapy, hybrid therapy, and concomitant therapy, and its tolerability was comparable to those regimens. In Korea, ST-Q-14 showed the highest eradication efficacy and comparable tolerability relative to the conventional triple therapy, standard sequential therapy, hybrid therapy, and concomitant therapy. © 2017 John Wiley & Sons Ltd.
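Network meta-analysis combines direct and indirect evidence. As a simplified illustration of the indirect part only, the sketch below performs a Bucher-style anchored comparison of two regimens through a common comparator on the log odds-ratio scale; the trial counts are invented and this is not the authors' statistical model.

import numpy as np

def log_or_and_se(events_a, total_a, events_b, total_b):
    """Log odds ratio (A vs B) and its standard error from 2x2 counts."""
    a, b = events_a, total_a - events_a
    c, d = events_b, total_b - events_b
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

# Hypothetical trials: regimen A vs control C, and regimen B vs control C.
log_or_ac, se_ac = log_or_and_se(130, 150, 100, 150)   # A vs C
log_or_bc, se_bc = log_or_and_se(120, 150, 105, 150)   # B vs C

# Anchored indirect comparison of A vs B through the common comparator C.
log_or_ab = log_or_ac - log_or_bc
se_ab = np.sqrt(se_ac ** 2 + se_bc ** 2)
ci = np.exp([log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab])
print(f"Indirect OR (A vs B) = {np.exp(log_or_ab):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")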
Schwendicke, Falk; Göstemeyer, Gerd
2017-01-01
Objectives: Single-visit root canal treatment has some advantages over conventional multivisit treatment but might increase the risk of complications. We systematically evaluated the risk of complications after single-visit or multiple-visit root canal treatment using meta-analysis and trial-sequential analysis. Data: Controlled trials comparing single-visit versus multiple-visit root canal treatment of permanent teeth were included. Trials needed to assess the risk of long-term complications (pain, infection, or new, persisting, or increasing periapical lesions ≥1 year after treatment), short-term pain, or flare-up (acute exacerbation after initiation or continuation of root canal treatment). Sources: Electronic databases (PubMed, EMBASE, Cochrane Central) were screened, random-effects meta-analyses were performed, and trial-sequential analysis was used to control for the risk of random errors. Evidence was graded according to GRADE. Study selection: 29 trials (4341 patients) were included, all but 6 showing high risk of bias. Based on 10 trials (1257 teeth), the risk of complications was not significantly different in single-visit versus multiple-visit treatment (risk ratio (RR) 1.00 (95% CI 0.75 to 1.35); weak evidence). Based on 20 studies (3008 teeth), the risk of pain did not differ significantly between treatments (RR 0.99 (95% CI 0.76 to 1.30); moderate evidence). The risk of flare-up was recorded by 8 studies (1110 teeth) and was significantly higher after single-visit versus multiple-visit treatment (RR 2.13 (95% CI 1.16 to 3.89); very weak evidence). Trial-sequential analysis revealed that firm evidence for benefit, harm or futility was not reached for any of the outcomes. Conclusions: There is insufficient evidence to rule out whether important differences between the two strategies exist. Clinical significance: Dentists can provide root canal treatment in one or multiple visits. Given the possibly increased risk of flare-ups, multiple-visit treatment might be preferred for certain teeth (eg, those with periapical lesions). PMID:28148534
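The pooled effects above are reported as risk ratios with 95% confidence intervals. The sketch below shows how a single trial's risk ratio and interval follow from its 2x2 counts, using invented numbers; it is a minimal illustration, not the meta-analytic or trial-sequential model used in the review.

import numpy as np

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio and 95% CI for a single two-arm trial."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR) via the usual large-sample approximation.
    se = np.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lo, hi = np.exp(np.log(rr) + np.array([-z, z]) * se)
    return rr, lo, hi

# Hypothetical trial: flare-ups after single-visit vs multiple-visit treatment.
rr, lo, hi = risk_ratio(events_tx=12, n_tx=70, events_ctrl=6, n_ctrl=72)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")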
Double-bond-containing polyallene-based triblock copolymers via phenoxyallene and (meth)acrylate
NASA Astrophysics Data System (ADS)
Ding, Aishun; Lu, Guolin; Guo, Hao; Huang, Xiaoyu
2017-03-01
A series of ABA triblock copolymers, consisting of double-bond-containing poly(phenoxyallene) (PPOA), poly(methyl methacrylate) (PMMA), or poly(butyl acrylate) (PBA) segments, was synthesized by sequential free radical polymerization and atom transfer radical polymerization (ATRP). A new bifunctional initiator bearing an azo group and halogen-containing ATRP initiating groups was first prepared and then used to initiate conventional free radical homopolymerization of phenoxyallene, a monomer with a cumulated double bond, giving a PPOA-based macroinitiator with ATRP initiating groups at both ends. Next, PMMA-b-PPOA-b-PMMA and PBA-b-PPOA-b-PBA triblock copolymers were synthesized by ATRP of methyl methacrylate and n-butyl acrylate initiated by the PPOA-based macroinitiator through the site transformation strategy. These double-bond-containing triblock copolymers are stable under UV irradiation and in the presence of free radicals.
Scalable Creation of Long-Lived Multipartite Entanglement
NASA Astrophysics Data System (ADS)
Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.
2017-10-01
We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+ ions stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ⟩ = (1/√2)(|0000⟩ + |1111⟩), and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamical decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 s.
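The reported state fidelity compares the reconstructed density matrix with the ideal GHZ state. The sketch below evaluates F = <psi|rho|psi> for a hypothetical noisy (depolarized) four-qubit GHZ state in NumPy; the noise model and numbers are illustrative, not the experimental data.

import numpy as np

# Ideal four-qubit GHZ state |psi> = (|0000> + |1111>)/sqrt(2).
dim = 2 ** 4
psi = np.zeros(dim)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho_ideal = np.outer(psi, psi)

# Hypothetical measured state: ideal GHZ mixed with white noise (depolarization).
p = 0.92
rho_measured = p * rho_ideal + (1 - p) * np.eye(dim) / dim

# For a pure target state the fidelity reduces to F = <psi| rho |psi>.
fidelity = float(psi @ rho_measured @ psi)
print(f"GHZ state fidelity: {fidelity:.3f}")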
He, Chunmao; Kulkarni, Sameer S; Thuaud, Frédéric; Bode, Jeffrey W
2015-10-26
The chemical synthesis of the 184-residue ferric heme-binding protein nitrophorin 4 was accomplished by sequential couplings of five unprotected peptide segments using α-ketoacid-hydroxylamine (KAHA) ligation reactions. The fully assembled protein was folded to its native structure and coordinated to the ferric heme b cofactor. The synthetic holoprotein, despite four homoserine residues at the ligation sites, showed identical properties to the wild-type protein in nitric oxide binding and nitrite dismutase reactivity. This work establishes the KAHA ligation as a valuable and viable approach for the chemical synthesis of proteins up to 20 kDa and demonstrates that it is well-suited for the preparation of hydrophobic protein targets. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic extraction of building boundaries using aerial LiDAR data
NASA Astrophysics Data System (ADS)
Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian
2016-01-01
Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, height information generated from the LiDAR data is segmented, and the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis-verification paradigm. These algorithms include constrained searching in Hough space, an enhanced Hough transformation, and a sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
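The reconstruction step above searches Hough space for the line structure of candidate boundaries. As a minimal NumPy-only illustration (not the paper's constrained-search or enhanced Hough algorithms), the sketch below accumulates boundary pixels in (theta, rho) space and reports the dominant orientations of a hypothetical rectangular footprint.

import numpy as np

def hough_dominant_angles(boundary_pixels, n_theta=180, top_k=2):
    """Accumulate boundary pixels in (theta, rho) space and return the
    angles of the strongest lines (candidate rectangle orientations)."""
    ys, xs = boundary_pixels[:, 0], boundary_pixels[:, 1]
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.outer(xs, np.cos(thetas)) + np.outer(ys, np.sin(thetas))
    rho_bins = np.linspace(rhos.min(), rhos.max(), 200)
    accumulator = np.zeros((len(rho_bins), n_theta), dtype=int)
    rho_idx = np.digitize(rhos, rho_bins) - 1
    for t in range(n_theta):
        np.add.at(accumulator[:, t], rho_idx[:, t], 1)
    peak_scores = accumulator.max(axis=0)            # strongest line per angle
    best = np.argsort(peak_scores)[-top_k:]
    return np.degrees(thetas[best])

# Hypothetical boundary chain of an axis-aligned rectangular building footprint.
top = [(10, x) for x in range(10, 40)]
bottom = [(30, x) for x in range(10, 40)]
left = [(y, 10) for y in range(10, 30)]
right = [(y, 40) for y in range(10, 30)]
pixels = np.array(top + bottom + left + right)
print("Dominant boundary orientations (deg):", hough_dominant_angles(pixels))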
NASA Astrophysics Data System (ADS)
Cizdziel, James V.; Tolbert, Candice; Brown, Garry
2010-02-01
A Direct Mercury Analyzer (DMA) based on sample combustion, concentration of mercury by amalgamation with gold, and cold vapor atomic absorption spectrometry (CVAAS) was coupled to a mercury-specific cold vapor atomic fluorescence spectrometer (CVAFS). The purpose was to evaluate combustion-AFS, a technique which is not commercially available, for low-level analysis of mercury in environmental and biological samples. The experimental setup allowed for comparison of dual measurements of mercury (AAS followed by AFS) for a single combustion event. The AFS instrument control program was modified to properly time the capture of mercury from the DMA, preventing deleterious combustion products from reaching its gold traps. Calibration was carried out using both aqueous solutions and solid reference materials. The absolute detection limits for mercury were 0.002 ng for AFS and 0.016 ng for AAS. Recoveries for reference materials ranged from 89% to 111%, and the precision was generally found to be <10% relative standard deviation (RSD). The two methods produced similar results for samples of hair, fingernails, coal, soil, leaves and foodstuffs. However, for samples with mercury near the AAS detection limit (e.g., filter paper spotted with whole blood and segments of tree rings) the signal was still quantifiable with AFS, demonstrating the lower detection limit and greater sensitivity of AFS. This study shows that combustion-AFS is feasible for the direct analysis of low levels of mercury in solid samples that would otherwise require time-consuming and contamination-prone digestion.
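The figures of merit quoted above (detection limit, recovery, %RSD) follow from simple replicate statistics. The sketch below applies common working definitions (detection limit as three times the standard deviation of replicate blanks, recovery against a certified value) to hypothetical readings; the numbers and the exact conventions are assumptions, not the paper's data.

import numpy as np

# Hypothetical replicate blank readings (ng Hg) and a reference-material check.
blank_readings = np.array([0.0004, 0.0006, 0.0005, 0.0007, 0.0005])
ref_certified = 0.100                         # ng Hg, hypothetical value
ref_measured = np.array([0.095, 0.102, 0.098, 0.104, 0.099])

detection_limit = 3 * blank_readings.std(ddof=1)          # 3-sigma convention
recovery = 100 * ref_measured.mean() / ref_certified      # percent recovery
rsd = 100 * ref_measured.std(ddof=1) / ref_measured.mean()  # precision, %RSD

print(f"Detection limit: {detection_limit:.4f} ng")
print(f"Recovery: {recovery:.1f}%  |  Precision: {rsd:.1f}% RSD")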
Small rural hospitals: an example of market segmentation analysis.
Mainous, A G; Shelby, R L
1991-01-01
In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution.
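As an illustration of the discriminant-analysis approach described above, the sketch below fits a linear discriminant model to hypothetical hospital descriptors (beds, employees, services) and reports its classification accuracy; the data are simulated and the feature set is assumed, not drawn from the study's sample.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical hospital features: [beds, employees, services offered].
rng = np.random.default_rng(0)
X_successful = rng.normal([60, 150, 20], [15, 40, 5], size=(50, 3))
X_struggling = rng.normal([35, 80, 18], [15, 40, 5], size=(50, 3))
X = np.vstack([X_successful, X_struggling])
y = np.array([1] * 50 + [0] * 50)            # 1 = high occupancy, 0 = low

model = LinearDiscriminantAnalysis().fit(X, y)
print("Classification accuracy on the training data:", model.score(X, y))
print("Discriminant coefficients (beds, employees, services):", model.coef_.round(3))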
NASA Astrophysics Data System (ADS)
Li, Shouju; Shangguan, Zichang; Cao, Lijuan
A procedure based on the finite element method (FEM) is proposed to simulate the interaction between the concrete segments of tunnel linings and the surrounding soils. The beam element named Beam3 in the ANSYS software was used to model the segments. The ground loss induced by the shield tunneling and segment installation processes is simulated in the finite element analysis. The distributions of bending moment, axial force and shear force on the segments were computed by FEM. The computed internal forces on the segments are then used to design the reinforcing bars in the shield linings. Numerically simulated ground settlements agree with observed values.
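The segments above are modeled with beam elements. As a generic illustration of what such an element contributes to the FEM system (not the paper's ANSYS Beam3 model or its soil-interaction terms), the sketch below assembles the standard Euler-Bernoulli bending stiffness matrix for a two-node element with hypothetical concrete segment properties.

import numpy as np

def beam_element_stiffness(E, I, L):
    """Stiffness matrix of a 2-node Euler-Bernoulli beam element
    (DOFs: deflection and rotation at each node, bending only)."""
    k = E * I / L ** 3
    return k * np.array([
        [ 12,      6 * L,    -12,      6 * L   ],
        [ 6 * L,   4 * L**2,  -6 * L,   2 * L**2],
        [-12,     -6 * L,     12,     -6 * L   ],
        [ 6 * L,   2 * L**2,  -6 * L,   4 * L**2],
    ])

# Hypothetical concrete segment properties (SI units).
E = 34.5e9          # Young's modulus of concrete, Pa
I = 3.6e-3          # second moment of area of the segment cross-section, m^4
L = 1.5             # element length, m
K = beam_element_stiffness(E, I, L)
print(f"Transverse stiffness term K[0,0] = {K[0, 0] / 1e6:.1f} MN/m")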
Contreras-Gutiérrez, María Angélica; Nunes, Marcio R.T.; Guzman, Hilda; Uribe, Sandra; Gómez, Juan Carlos Gallego; Vasco, Juan David Suaza; Cardoso, Jedson F.; Popov, Vsevolod L.; Widen, Steven G.; Wood, Thomas G.; Vasilakis, Nikos; Tesh, Robert B.
2016-01-01
The genome and structural organization of a novel insect-specific orthomyxovirus, designated Sinu virus, is described. Sinu virus (SINUV) was isolated in cultures of C6/36 cells from a pool of mosquitoes collected in northwestern Colombia. The virus has six negative-sense ssRNA segments. Genetic analysis of each segment demonstrated the presence of six distinct ORFs encoding the following genes: PB2 (Segment 1), PB1 (Segment 2), PA (Segment 3), the envelope GP gene (Segment 4), NP (Segment 5), and an M-like gene (Segment 6). Phylogenetically, SINUV appears to be most closely related to viruses in the genus Thogotovirus. PMID:27936462
Improving Situational Awareness on Submarines Using Augmented Reality
2008-09-01
[Abstract not available; the recovered fragments are standard report-form text and table-of-contents entries referring to a Contact Management Segment, an Ascent Segment, and a cognitive task analysis of the contact management segment.]
General Staining and Segmentation Procedures for High Content Imaging and Analysis.
Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S
2018-01-01
Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with probes that react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
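As a minimal example of nuclear-based image segmentation of the kind described above, the sketch below smooths a DNA-stain channel, thresholds it, labels connected objects, and removes small debris. It uses common scikit-image/SciPy calls with assumed parameter values and is a generic illustration, not the chapter's protocol.

import numpy as np
from scipy import ndimage
from skimage.filters import gaussian, threshold_otsu

def segment_nuclei(dna_channel, sigma=2.0, min_size=50):
    """Demarcate nuclei from a DNA-stain channel: smooth, threshold, label,
    and drop small spurious objects."""
    smoothed = gaussian(dna_channel, sigma=sigma)
    mask = smoothed > threshold_otsu(smoothed)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)
    labels, n = ndimage.label(keep)
    return labels, n

# Hypothetical usage on a 2D image of a DNA stain (e.g., a Hoechst channel):
# nuclei_labels, n_cells = segment_nuclei(hoechst_image)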