Sample records for sequential time points

  1. 49 CFR 563.8 - Data format.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the first acceleration data point; (3) The number of the last point (NLP), which is an integer that...; and (4) NLP - NFP + 1 acceleration values sequentially beginning with the acceleration at time NFP * TS and continue sampling the acceleration at TS increments in time until the time NLP * TS is reached...
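
    A minimal sketch (in Python, with illustrative placeholder values for NFP, NLP, and TS, none of which are specified in the excerpt) of how the sequentially sampled time points described above can be enumerated:

```python
# Enumerate the sample times implied by the excerpt of 49 CFR 563.8 above.
# NFP, NLP, and TS are hypothetical placeholders, not values from the regulation.
NFP = -20     # number of the first acceleration data point (hypothetical)
NLP = 250     # number of the last acceleration data point (hypothetical)
TS = 0.01     # sampling interval in seconds (hypothetical)

num_values = NLP - NFP + 1                              # "NLP - NFP + 1 acceleration values"
sample_times = [n * TS for n in range(NFP, NLP + 1)]    # NFP*TS, ..., NLP*TS in TS increments

print(num_values, sample_times[0], sample_times[-1])
```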

  2. Measuring Incompatible Observables by Exploiting Sequential Weak Values.

    PubMed

    Piacentini, F; Avella, A; Levi, M P; Gramegna, M; Brida, G; Degiovanni, I P; Cohen, E; Lussana, R; Villa, F; Tosi, A; Zappa, F; Genovese, M

    2016-10-21

    One of the most intriguing aspects of quantum mechanics is the impossibility of measuring at the same time observables corresponding to noncommuting operators, because of quantum uncertainty. This impossibility can be partially relaxed when considering joint or sequential weak value evaluation. Indeed, weak value measurements have been a real breakthrough in the quantum measurement framework that is of the utmost interest from both a fundamental and an applicative point of view. In this Letter, we show how we realized for the first time a sequential weak value evaluation of two incompatible observables using a genuine single-photon experiment. These (sometimes anomalous) sequential weak values revealed the single-operator weak values, as well as the local correlation between them.

  3. Measuring Incompatible Observables by Exploiting Sequential Weak Values

    NASA Astrophysics Data System (ADS)

    Piacentini, F.; Avella, A.; Levi, M. P.; Gramegna, M.; Brida, G.; Degiovanni, I. P.; Cohen, E.; Lussana, R.; Villa, F.; Tosi, A.; Zappa, F.; Genovese, M.

    2016-10-01

    One of the most intriguing aspects of quantum mechanics is the impossibility of measuring at the same time observables corresponding to noncommuting operators, because of quantum uncertainty. This impossibility can be partially relaxed when considering joint or sequential weak value evaluation. Indeed, weak value measurements have been a real breakthrough in the quantum measurement framework that is of the utmost interest from both a fundamental and an applicative point of view. In this Letter, we show how we realized for the first time a sequential weak value evaluation of two incompatible observables using a genuine single-photon experiment. These (sometimes anomalous) sequential weak values revealed the single-operator weak values, as well as the local correlation between them.
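
    For orientation (this standard definition is not quoted from the two entries above): for an observable \(\hat{A}\), preselected state \(|\psi\rangle\), and postselected state \(|\phi\rangle\), the weak value is

    \[ A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}, \]

    which can lie outside the eigenvalue range of \(\hat{A}\) ("anomalous" weak values); the sequential weak value of two observables is obtained by replacing \(\hat{A}\) with the ordered product \(\hat{B}\hat{A}\).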

  4. 49 CFR 563.8 - Data format.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... point (NLP), which is an integer that when multiplied by the TS equals the time relative to time zero of the last acceleration data point; and (4) NLP - NFP + 1 acceleration values sequentially beginning with... until the time NLP * TS is reached. [73 FR 2183, Jan. 14, 2008] § 563.8, Nt. Effective Date Note: At 76...

  5. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; Zeng, Donglin; Wang, Yuanjia

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116

  6. Developmental Time Course of the Acquisition of Sequential Egocentric and Allocentric Navigation Strategies

    ERIC Educational Resources Information Center

    Bullens, Jessie; Igloi, Kinga; Berthoz, Alain; Postma, Albert; Rondi-Reig, Laure

    2010-01-01

    Navigation in a complex environment can rely on the use of different spatial strategies. We have focused on the employment of "allocentric" (i.e., encoding interrelationships among environmental cues, movements, and the location of the goal) and "sequential egocentric" (i.e., sequences of body turns associated with specific choice points)…

  7. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    PubMed Central

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by over three times compared with the conventional algorithm while image quality is well preserved. PMID:23424608

  8. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    PubMed

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.
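
    For orientation, under a standard Rasch-type model (the exact parameterization used by the authors is not stated in the abstract), the probability that a subject with Ability A gives the utilitarian response to a dilemma with Difficulty D is

    \[ P(\text{utilitarian}) = \frac{e^{A-D}}{1 + e^{A-D}}, \]

    so at the point A = D both responses are equally likely (probability 1/2) and response-probability effects on RT cancel, as the abstract argues.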

  9. Parent and Child Personality Traits and Children's Externalizing Problem Behavior from Age 4 to 9 Years: A Cohort-Sequential Latent Growth Curve Analysis

    ERIC Educational Resources Information Center

    Prinzie, P.; Onghena, P.; Hellinckx, W.

    2005-01-01

    Cohort-sequential latent growth modeling was used to analyze longitudinal data for children's externalizing behavior from four overlapping age cohorts (4, 5, 6, and 7 years at first assessment) measured at three annual time points. The data included mother and father ratings on the Child Behavior Checklist and the Five-Factor Personality Inventory…

  10. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  11. Computational time reduction for sequential batch solutions in GNSS precise point positioning technique

    NASA Astrophysics Data System (ADS)

    Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando

    2017-08-01

    Precise point positioning (PPP) is a well established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those for previous GNSS epochs), as for convergence studies, obtaining it by batch adjustment becomes a very time consuming task owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch for the solution of the current epoch. Filter implementations do, however, require extra consideration of user dynamics and of parameter state variations between observation epochs, with appropriate stochastic updates of the parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
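
    The computational point above can be illustrated with a small sketch (an illustration only, not the authors' method): for a linear model with static parameters, re-solving the full batch system at every epoch repeats work that an incremental accumulation of the normal equations avoids, while both give identical sequential solutions.

```python
# Sketch (not the authors' implementation): obtaining a sequential solution for a
# linear, static-parameter model either by re-solving a growing batch system at
# every epoch, or by accumulating the normal equations incrementally.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_epochs, obs_per_epoch = 4, 50, 6
x_true = rng.normal(size=n_params)

N = np.zeros((n_params, n_params))   # accumulated normal matrix A^T A
b = np.zeros(n_params)               # accumulated right-hand side A^T y
A_all, y_all = [], []

for epoch in range(n_epochs):
    A = rng.normal(size=(obs_per_epoch, n_params))
    y = A @ x_true + 0.01 * rng.normal(size=obs_per_epoch)

    # Naive batch approach: stack all epochs observed so far and solve from scratch.
    A_all.append(A)
    y_all.append(y)
    x_batch = np.linalg.lstsq(np.vstack(A_all), np.concatenate(y_all), rcond=None)[0]

    # Incremental approach: update the normal equations and solve a fixed-size system.
    N += A.T @ A
    b += A.T @ y
    x_seq = np.linalg.solve(N, b)

    assert np.allclose(x_batch, x_seq)   # identical sequential solutions, far less work
```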

  12. Designing group sequential randomized clinical trials with time to event end points using a R function.

    PubMed

    Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew

    2012-10-01

    Designing clinical trials with a time-to-event endpoint is a major and difficult task: the number of events must be computed first and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions have been implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on to the gsDesign package, which in one run of the program calculates the number of events and required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
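
    Background sketch for the entry above (not the plansurvct.func/gsDesign code itself): Schoenfeld's approximation gives the number of events required by a fixed-sample, two-arm log-rank comparison with 1:1 allocation; a group sequential design would further inflate this event count by its design-specific inflation factor.

```python
# Schoenfeld's approximation for the required number of events (1:1 allocation).
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)     # two-sided significance level
    z_beta = norm.ppf(power)
    return 4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2

print(round(schoenfeld_events(hazard_ratio=0.67)))  # about 196 events
```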

  13. Building a Lego wall: Sequential action selection.

    PubMed

    Arnold, Amy; Wing, Alan M; Rotshtein, Pia

    2017-05-01

    The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. A geostatistical methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S

    2013-04-01

    This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, in each step, selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
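
    A minimal sketch of the kind of procedure described above (an illustration, not the authors' code): given a prior covariance matrix for the hydraulic head at candidate space-time points (e.g., derived from a fitted space-time variogram), each step performs a static Kalman (Gaussian conditioning) update for every remaining candidate and keeps the one that minimizes the total posterior variance.

```python
# Greedy sequential selection of monitoring points by variance minimization.
import numpy as np

def greedy_select(P, noise_var, n_select):
    P = P.copy()
    remaining = set(range(P.shape[0]))
    chosen = []
    for _ in range(n_select):
        best_i, best_P, best_score = None, None, np.inf
        for i in remaining:
            gain = P[:, [i]] / (P[i, i] + noise_var)   # static Kalman gain column
            P_new = P - gain @ P[[i], :]               # posterior covariance after measuring i
            score = np.trace(P_new)                    # total remaining variance
            if score < best_score:
                best_i, best_P, best_score = i, P_new, score
        chosen.append(best_i)
        remaining.remove(best_i)
        P = best_P
    return chosen

# Toy example: exponential covariance over 30 candidate points on a line.
x = np.linspace(0, 10, 30)
P0 = np.exp(-np.abs(x[:, None] - x[None, :]))
print(greedy_select(P0, noise_var=0.1, n_select=5))
```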

  15. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
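
    A schematic sketch of the general idea (not the authors' algorithm): a CUSUM-type sequential detector for a mean shift in a Gaussian damage-sensitive feature, where the unknown post-change mean is replaced by a running estimate updated as data arrive; the threshold, minimum shift, and restart rule below are illustrative choices.

```python
# Adaptive CUSUM sketch: sequential change point detection with an estimated
# post-change mean (illustrative parameters, Gaussian features assumed).
import numpy as np

def adaptive_cusum(x, mu0, sigma, threshold, min_shift=0.5):
    s, post_sum, post_n = 0.0, 0.0, 0
    for t, xt in enumerate(x):
        post_sum += xt
        post_n += 1
        mu1_hat = post_sum / post_n                       # running estimate of post-change mean
        mu1_hat = mu0 + max(mu1_hat - mu0, min_shift)     # enforce a minimal detectable shift
        llr = ((xt - mu0) ** 2 - (xt - mu1_hat) ** 2) / (2 * sigma ** 2)  # log-likelihood ratio
        s = max(0.0, s + llr)                             # CUSUM recursion
        if s == 0.0:
            post_sum, post_n = 0.0, 0                     # restart the post-change estimate
        if s > threshold:
            return t                                      # declare damage at sample t
    return None

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 50)])  # change at sample 200
print(adaptive_cusum(data, mu0=0.0, sigma=1.0, threshold=10.0))
```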

  16. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

    Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models: the first uses a sequential estimation of risk factors originally obtained 8 years prior, then improved by optimization, and can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction; it can be applied when newly observed data are acquired at a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two data sources provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. A multiple imputation strategy for sequential multiple assignment randomized trials

    PubMed Central

    Shortreed, Susan M.; Laber, Eric; Stroup, T. Scott; Pineau, Joelle; Murphy, Susan A.

    2014-01-01

    Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient’s health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges, and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trial of Intervention and Effectiveness (CATIE), one of the most well-known SMARTs to date. PMID:24919867

  18. Understanding Human Motion Skill with Peak Timing Synergy

    NASA Astrophysics Data System (ADS)

    Ueno, Ken; Furukawa, Koichi

    Careful observation of motion phenomena is important in understanding skillful human motion. However, this is a difficult task owing to the timing complexities involved in the skillful control of anatomical structures. To investigate the dexterity of human motion, we concentrate on motion timing and propose a method to extract the peak timing synergy from multivariate motion data. The peak timing synergy is defined as a frequent ordered graph with time stamps, whose nodes are turning points in motion waveforms. The proposed algorithm, PRESTO, automatically extracts the peak timing synergy. PRESTO comprises the following 3 processes: (1) detecting peak sequences with polygonal approximation; (2) generating peak-event sequences; and (3) finding frequent peak-event sequences using a sequential pattern mining method, generalized sequential patterns (GSP). Here, we measured right arm motion during the task of cello bowing and prepared a data set of right shoulder and arm motion. We successfully extracted the peak timing synergy from the cello bowing data set using the PRESTO algorithm, capturing both skills common among cellists and individual skill differences. To evaluate the sequential pattern mining algorithm GSP within PRESTO, we compared the peak timing synergy obtained with GSP to that obtained with filtering by reciprocal voting (FRV), a non-time-series method. We found that the support was 95-100% for GSP versus 83-96% for FRV, and that the GSP results reproduced human motion better than those of FRV. We therefore show that a sequential pattern mining approach is more effective for extracting the peak timing synergy than a non-time-series analysis approach.
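
    A small sketch of the first two PRESTO steps described above, assuming SciPy's find_peaks for turning-point detection; the third step (mining frequent ordered patterns with GSP) is not reproduced, and the channel names and waveforms are toy placeholders.

```python
# Turn multivariate motion waveforms into a time-ordered peak-event sequence.
import numpy as np
from scipy.signal import find_peaks

def peak_event_sequence(signals, names):
    """Return a list of (sample index, channel) events ordered by time stamp."""
    events = []
    for name, sig in zip(names, signals):
        peaks, _ = find_peaks(sig)                       # indices of local maxima (turning points)
        events.extend((int(i), name) for i in peaks)
    return sorted(events)                                # order events by time

t = np.linspace(0, 4 * np.pi, 400)
# Toy stand-ins for shoulder/elbow/wrist waveforms with fixed timing offsets.
signals = [np.sin(t), np.sin(t - 0.5), np.sin(t - 1.0)]
print(peak_event_sequence(signals, ["shoulder", "elbow", "wrist"]))
```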

  19. Sequential imaging of asymptomatic carotid atheroma using ultrasmall superparamagnetic iron oxide-enhanced magnetic resonance imaging: a feasibility study.

    PubMed

    Sadat, Umar; Howarth, Simon P S; Usman, Ammara; Tang, Tjun Y; Graves, Martin J; Gillard, Jonathan H

    2013-11-01

    Inflammation within atheromatous plaques is a known risk factor for plaque vulnerability. This can be detected in vivo on high-resolution magnetic resonance imaging (MRI) using ultrasmall superparamagnetic iron oxide (USPIO) contrast medium. The purpose of this study was to assess the feasibility of performing sequential USPIO studies over a 1-year period. Ten patients with moderate asymptomatic carotid stenosis underwent carotid MRI imaging both before and 36 hours after USPIO infusion at 0, 6, and 12 months. Images were manually segmented into quadrants, and the signal change per quadrant was calculated at these time points. A mixed repeated measures statistical model was used to determine signal change attributable to USPIO uptake over time. All patients remained asymptomatic during the study. The mixed model revealed no statistical difference in USPIO uptake between the 3 time points. Intraclass correlation coefficients revealed a good agreement of quadrant signal pre-USPIO infusion between 0 and 6 months (0.70) and 0 and 12 months (0.70). Good agreement of quadrant signal after USPIO infusion was shown between 0 and 6 months (0.68) and moderate agreement was shown between 0 and 12 months (0.33). USPIO-enhanced sequential MRI of atheromatous carotid plaques is clinically feasible. This may have important implications for future longitudinal studies involving pharmacologic intervention in large patient cohorts. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  20. Sequential roles of primary somatosensory cortex and posterior parietal cortex in tactile-visual cross-modal working memory: a single-pulse transcranial magnetic stimulation (spTMS) study.

    PubMed

    Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di

    2015-01-01

    Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Sequential Pointing in Children and Adults.

    ERIC Educational Resources Information Center

    Badan, Maryse; Hauert, Claude-Alain; Mounoud, Pierre

    2000-01-01

    Four experiments investigated the development of visuomotor control in sequential pointing in tasks varying in difficulty among 6- to 10-year-olds and adults. Comparisons across difficulty levels and ages suggest that motor development is not a uniform fine-tuning of stable strategies. Findings raise argument for stage characteristics of…

  2. Time-resolved contrast-enhanced MR angiography of the thorax in adults with congenital heart disease.

    PubMed

    Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich

    2006-10-01

    The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs with a frame rate of 1-second duration were acquired. The mask defined as the first data set was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.

  3. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
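
    A sketch of the three Box-Jenkins stages on a toy series, assuming the statsmodels library; the report's actual data (undergraduate grade point averages) and its selected model are not reproduced here.

```python
# Identification, estimation, and diagnosis of an ARIMA model on a toy series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200)) + 0.05 * np.arange(200)  # toy trending, nonstationary series

model = ARIMA(y, order=(1, 1, 1))   # identification: tentative (p, d, q) order
result = model.fit()                # estimation of the AR and MA coefficients
print(result.summary())             # diagnosis: inspect coefficients and residual statistics
```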

  4. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

    A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from earth resources satellites. The technique eliminates the need for expensive high-resolution cameras. The system might also be adaptable to monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or X-ray images.

  5. Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.

    PubMed

    Sharma, Shefali; Calixte, Rose; Finamore, Peter S

    2016-01-01

    To determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. A retrospective cohort study to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy (Canadian Task Force II-2). A university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from a patient database that has been maintained since the inception of the robotics program at Winthrop University Hospital and electronic medical records. Based on additional procedures performed, 4 groups of patients were created (A-D). Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve that revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. When the data for operative time was fit to an inverse curve, a learning rate of 11 cases was appreciated. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which is reflective of anesthesia and nursing staff, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.

  6. Sequential causal inference: Application to randomized trials of adaptive treatment strategies

    PubMed Central

    Dawson, Ree; Lavori, Philip W.

    2009-01-01

    Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714

  7. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  8. The Neural Representation of Prospective Choice during Spatial Planning and Decisions

    PubMed Central

    Kaplan, Raphael; Koster, Raphael; Penny, William D.; Burgess, Neil; Friston, Karl J.

    2017-01-01

    We are remarkably adept at inferring the consequences of our actions, yet the neuronal mechanisms that allow us to plan a sequence of novel choices remain unclear. We used functional magnetic resonance imaging (fMRI) to investigate how the human brain plans the shortest path to a goal in novel mazes with one (shallow maze) or two (deep maze) choice points. We observed two distinct anterior prefrontal responses to demanding choices at the second choice point: one in rostrodorsal medial prefrontal cortex (rd-mPFC)/superior frontal gyrus (SFG) that was also sensitive to (deactivated by) demanding initial choices and another in lateral frontopolar cortex (lFPC), which was only engaged by demanding choices at the second choice point. Furthermore, we identified hippocampal responses during planning that correlated with subsequent choice accuracy and response time, particularly in mazes affording sequential choices. Psychophysiological interaction (PPI) analyses showed that coupling between the hippocampus and rd-mPFC increases during sequential (deep versus shallow) planning and is higher before correct versus incorrect choices. In short, using a naturalistic spatial planning paradigm, we reveal how the human brain represents sequential choices during planning without extensive training. Our data highlight a network centred on the cortical midline and hippocampus that allows us to make prospective choices while maintaining initial choices during planning in novel environments. PMID:28081125

  9. Reactivation, Replay, and Preplay: How It Might All Fit Together

    PubMed Central

    Buhry, Laure; Azizi, Amir H.; Cheng, Sen

    2011-01-01

    Sequential activation of neurons that occurs during “offline” states, such as sleep or awake rest, is correlated with neural sequences recorded during preceding exploration phases. This so-called reactivation, or replay, has been observed in a number of different brain regions such as the striatum, prefrontal cortex, primary visual cortex and, most prominently, the hippocampus. Reactivation largely co-occurs together with hippocampal sharp-waves/ripples, brief high-frequency bursts in the local field potential. Here, we first review the mounting evidence for the hypothesis that reactivation is the neural mechanism for memory consolidation during sleep. We then discuss recent results that suggest that offline sequential activity in the waking state might not be simple repetitions of previously experienced sequences. Some offline sequential activity occurs before animals are exposed to a novel environment for the first time, and some sequences activated offline correspond to trajectories never experienced by the animal. We propose a conceptual framework for the dynamics of offline sequential activity that can parsimoniously describe a broad spectrum of experimental results. These results point to a potentially broader role of offline sequential activity in cognitive functions such as maintenance of spatial representation, learning, or planning. PMID:21918724

  10. North Korean Civil-Military Trends: Military-First Politics to a Point

    DTIC Science & Technology

    2006-09-01

    to the Great Leader. Yi Yong Mu. The NDC’s impact on the formal leadership lineup was made clear by 2001, when Yi Yong Mu, vice chairman of the NDC...to be restricted, as are career bonds that could come about between officers through sequential appointments to same commands. • First time

  11. Short-term memory for spatial, sequential and duration information.

    PubMed

    Manohar, Sanjay G; Pertzov, Yoni; Husain, Masud

    2017-10-01

    Space and time appear to play key roles in the way that information is organized in short-term memory (STM). Some argue that they are crucial contexts within which other stored features are embedded, allowing binding of information that belongs together within STM. Here we review recent behavioral, neurophysiological and imaging studies that have sought to investigate the nature of spatial, sequential and duration representations in STM, and how these might break down in disease. Findings from these studies point to an important role of the hippocampus and other medial temporal lobe structures in aspects of STM, challenging conventional accounts of involvement of these regions in only long-term memory.

  12. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA

    2007-07-17

    A method and system using the Sequential Probability Ratio Test (SPRT) to enhance the detection of an elevated level of radiation by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
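
    A minimal sketch of Wald's sequential probability ratio test for Poisson count data (e.g., detector counts per time bin), illustrating the mechanism the abstract describes; the rates, error probabilities, and the dynamic background-estimation machinery of the patent are not reproduced here.

```python
# Wald SPRT for Poisson counts: background rate lam0 vs. elevated rate lam1.
import math
import numpy as np

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)    # cross upward -> declare elevated radiation (H1)
    lower = math.log(beta / (1 - alpha))    # cross downward -> accept background only (H0)
    llr = 0.0
    for t, x in enumerate(counts):
        llr += x * math.log(lam1 / lam0) - (lam1 - lam0)  # Poisson log-likelihood ratio term
        if llr >= upper:
            return "elevated", t
        if llr <= lower:
            return "background", t
    return "undecided", len(counts) - 1

rng = np.random.default_rng(0)
print(sprt_poisson(rng.poisson(5.0, size=200), lam0=5.0, lam1=10.0))   # background-only counts
print(sprt_poisson(rng.poisson(10.0, size=200), lam0=5.0, lam1=10.0))  # elevated counts
```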

  13. An Expansion of Glider Observation STrategies to Systematically Transmit and Analyze Preferred Waypoints of Underwater Gliders

    DTIC Science & Technology

    2015-01-01

    ...Glider pilots do not use the temporal waypoint estimate from GOST in their control of the glider; waypoints are simply treated as a sequence of points...glider pilots’ interpretation of points as only sequential instead of temporal will eliminate this as a hindrance to the system’s use. Further

  14. Noninferiority, randomized, controlled trial comparing embryo development using media developed for sequential or undisturbed culture in a time-lapse setup.

    PubMed

    Hardarson, Thorir; Bungum, Mona; Conaghan, Joe; Meintjes, Marius; Chantilis, Samuel J; Molnar, Laszlo; Gunnarsson, Kristina; Wikland, Matts

    2015-12-01

    To study whether a culture medium that allows undisturbed culture supports human embryo development to the blastocyst stage equivalently to a well-established sequential media. Randomized, double-blinded sibling trial. Independent in vitro fertilization (IVF) clinics. One hundred twenty-eight patients, with 1,356 zygotes randomized into two study arms. Embryos randomly allocated into two study arms to compare embryo development on a time-lapse system using a single-step medium or sequential media. Percentage of good-quality blastocysts on day 5. Percentage of day 5 good-quality blastocysts was 21.1% (standard deviation [SD] ± 21.6%) and 22.2% (SD ± 22.1%) in the single-step time-lapse medium (G-TL) and the sequential media (G-1/G-2) groups, respectively. The mean difference (-1.2; 95% CI, -6.0; 3.6) between the two media systems for the primary end point was less than the noninferiority margin of -8%. There was a statistically significantly lower number of good-quality embryos on day 3 in the G-TL group [50.7% (SD ± 30.6%) vs. 60.8% (SD ± 30.7%)]. Four out of the 11 measured morphokinetic parameters were statistically significantly different for the two media used. The mean levels of ammonium concentration in the media at the end of the culture period was statistically significantly lower in the G-TL group as compared with the G-2 group. We have shown that a single-step culture medium supports blastocyst development equivalently to established sequential media. The ammonium concentrations were lower in the single-step media, and the measured morphokinetic parameters were modified somewhat. NCT01939626. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. A field trial of ethyl hexanediol against Aedes dorsalis in Sonoma County, California.

    PubMed

    Rutledge, L C; Hooper, R L; Wirtz, R A; Gupta, R K

    1989-09-01

    The repellent ethyl hexanediol (2-ethyl-1,3-hexanediol) was tested against the mosquito Aedes dorsalis in a coastal salt marsh in California. The experimental design incorporated a linear regression model, sequential treatments and a proportional end point (95%) for protection time. The protection time of 0.10 mg/cm2 ethyl hexanediol was estimated at 0.8 h. This time is shorter than that obtained previously for deet (N,N-diethyl-3-methylbenzamide) against Ae. dorsalis (4.4 h).

  16. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O' Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...

  17. Application of dynamic topic models to toxicogenomics data.

    PubMed

    Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida

    2016-10-06

    All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably across sequential time points after perturbation by environmental insults, drugs, and chemicals. Investigating the temporal behavior of molecular events is important for understanding the underlying mechanisms that govern the biological system's response to perturbations such as drug treatment. The intrinsic complexity of time series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points (4-, 8-, 15- and 29-day). We analyzed, with DTM, the topics (each consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in these time-series gene expression profiles. From the topic distribution for each compound-time condition, a number of drugs were successfully clustered by their shared mode of action, such as PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information, such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories when compared to traditional clustering algorithms. We demonstrated that DTM, a text mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with a probabilistic representation of their dynamic features along sequential time frames. The method offers an alternative way of uncovering hidden patterns embedded in time series gene expression profiles and of gaining an enhanced understanding of the dynamic behavior of gene regulation in the biological system.
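
    A minimal sketch, assuming the gensim library's LdaSeqModel implementation of dynamic topic models; the corpus, vocabulary, and time slices below are toy placeholders rather than the toxicogenomics data described above (where "documents" would be compound-dose-time conditions and "words" would be genes).

```python
# Dynamic topic model over two sequential time slices of a toy corpus.
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

docs = [["geneA", "geneB", "geneC"], ["geneA", "geneC"],   # time slice 1 (2 documents)
        ["geneB", "geneD"], ["geneC", "geneD", "geneD"]]   # time slice 2 (2 documents)
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# time_slice gives the number of documents in each sequential time frame.
dtm = LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=[2, 2], num_topics=2)
print(dtm.print_topics(time=0))   # topic-word distributions in the first time frame
```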

  18. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  19. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    PubMed Central

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104

  20. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
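
    A plain dynamic-programming sketch related to the two entries above (an illustration, not the published guided scheme): finding a minimum-cost top-to-bottom separation path through a 2-D cost image; a guided variant would additionally restrict the search to a band between automatically selected start and end points.

```python
# Minimum-cost vertical separation path through a cost image via dynamic programming.
import numpy as np

def min_cost_path(cost):
    rows, cols = cost.shape
    acc = cost.copy()
    for r in range(1, rows):
        for c in range(cols):
            left, right = max(c - 1, 0), min(c + 1, cols - 1)
            acc[r, c] += acc[r - 1, left:right + 1].min()   # best of the 3 neighbors above
    # Backtrack from the cheapest end point on the last row.
    path = [int(acc[-1].argmin())]
    for r in range(rows - 2, -1, -1):
        c = path[-1]
        left, right = max(c - 1, 0), min(c + 1, cols - 1)
        path.append(left + int(acc[r, left:right + 1].argmin()))
    return path[::-1]   # column index of the separation boundary in each row

toy = np.ones((6, 9))
toy[:, 4] = 0.1         # a cheap "gap" column between the two simulated lungs
print(min_cost_path(toy))
```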

  1. Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.

    2001-01-01

    An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.

  2. Comparing a motivational and a self-regulatory intervention to adopt an oral self-care regimen: a two-sequential randomized crossover trial.

    PubMed

    Lhakhang, Pempa; Gholami, Maryam; Knoll, Nina; Schwarzer, Ralf

    2015-01-01

    A sequential intervention to facilitate the adoption and maintenance of dental flossing was conducted among 205 students in India, aged 18-26 years. Two experimental groups received different treatment sequences and were observed at three assessment points, 34 days apart. One group first received a motivational intervention (intention, outcome expectancies, and risk perception), followed by a self-regulatory intervention (planning, self-efficacy, and action control). The second group received the same interventions in the opposite order. Both intervention sequences yielded gains in terms of flossing, planning, self-efficacy, and action control. However, at Time 2, those who had received the self-regulatory intervention first were superior to their counterparts who had received the motivational intervention first. At Time 3, differences vanished, as everyone had by then received both interventions. Thus, the findings highlight the benefits of a self-regulatory compared to a merely motivational intervention.

  3. Sequential memory: Binding dynamics

    NASA Astrophysics Data System (ADS)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.

  4. Sequential memory: Binding dynamics.

    PubMed

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories-episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
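
    For reference, the generalized Lotka-Volterra form referred to in the two entries above is, in its standard writing (the specific coupling structure of the binding model is not given in the abstracts),

    \[ \dot{x}_i = x_i \Big( \sigma_i - \sum_{j=1}^{N} \rho_{ij}\, x_j \Big), \qquad i = 1, \dots, N, \]

    where the \(x_i\) are the activities of the competing modes, \(\sigma_i\) their growth rates, and the asymmetric inhibitory couplings \(\rho_{ij}\) produce the saddle equilibria and heteroclinic trajectories mentioned above.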

  5. Williams' paradox and the role of phenotypic plasticity in sexual systems.

    PubMed

    Leonard, Janet L

    2013-10-01

    As George Williams pointed out in 1975, although evolutionary explanations, based on selection acting on individuals, have been developed for the advantages of simultaneous hermaphroditism, sequential hermaphroditism and gonochorism, none of these evolutionary explanations adequately explains the current distribution of these sexual systems within the Metazoa (Williams' Paradox). As Williams further pointed out, the current distribution of sexual systems is explained largely by phylogeny. Since 1975, we have made a great deal of empirical and theoretical progress in understanding sexual systems. However, we still lack a theory that explains the current distribution of sexual systems in animals and we do not understand the evolutionary transitions between hermaphroditism and gonochorism. Empirical data, collected over the past 40 years, demonstrate that gender may have more phenotypic plasticity than was previously realized. We know that not only sequential hermaphrodites, but also simultaneous hermaphrodites have phenotypic plasticity that alters sex allocation in response to social and environmental conditions. A focus on phenotypic plasticity suggests that one sees a continuum in animals between genetically determined gonochorism on the one hand and simultaneous hermaphroditism on the other, with various types of sequential hermaphroditism and environmental sex determination as points along the spectrum. Here I suggest that perhaps the reason we have been unable to resolve Williams' Paradox is that the problem was not correctly framed. First, the fact that, for example, simultaneous hermaphroditism provides reproductive assurance or that dioecy ensures outcrossing does not mean that there are no other evolutionary paths that can provide adaptive responses to those selective pressures. Second, perhaps the question we need to ask is: What selective forces favor increased versus reduced phenotypic plasticity in gender expression? It is time to begin to look at the question of sexual system as one of understanding the timing and degree of phenotypic plasticity in gender expression in the life history in terms of selection acting on a continuum, rather than on a set of discrete sexual systems.

  6. Anomalous weak values and the violation of a multiple-measurement Leggett-Garg inequality

    NASA Astrophysics Data System (ADS)

    Avella, Alessio; Piacentini, Fabrizio; Borsarelli, Michelangelo; Barbieri, Marco; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Degiovanni, Ivo Pietro; Genovese, Marco

    2017-11-01

    Quantum mechanics presents peculiar properties that, on the one hand, have been the subject of several theoretical and experimental studies about its very foundations and, on the other hand, provide tools for developing new technologies, the so-called quantum technologies. The nonclassicality pointed out by Leggett-Garg inequalities has represented, with Bell inequalities, one of the most investigated subjects. In this article we study the connection of Leggett-Garg inequalities with a new emerging field of quantum measurement, the weak values in the case of a series of sequential measurements on a single object. In detail, we perform an experimental study of the four-time-correlator Leggett-Garg test, by exploiting single and sequential weak measurements performed on heralded single photons.

  7. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    PubMed

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with a sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index (R ⇒ L) (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between simultaneous t0 and sequential technique was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients, it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.

  8. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
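
    The classification perspective described above can be illustrated with a toy, single-decision-point example: an estimated treatment contrast supplies both the label (its sign) and the weight (its magnitude) for an off-the-shelf classifier. This is a minimal sketch of that general idea on simulated data, not the authors' C-learning algorithm; the linear outcome models, the decision-tree classifier, and the data-generating process are all illustrative assumptions.

    ```python
    # Sketch: recasting a single-stage treatment rule as weighted classification.
    # Illustrative only -- simulated data, plug-in outcome regressions, simple classifier.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 2))                    # patient characteristics
    A = rng.integers(0, 2, size=n)                 # randomized treatment (0/1)
    # true optimal rule: treat (A = 1) when X[:, 0] > 0
    Y = 1.0 + X[:, 1] + A * np.where(X[:, 0] > 0, 1.0, -1.0) + rng.normal(scale=0.5, size=n)

    # Outcome regressions within each arm give a plug-in estimate of the treatment contrast.
    q0 = LinearRegression().fit(X[A == 0], Y[A == 0])
    q1 = LinearRegression().fit(X[A == 1], Y[A == 1])
    contrast = q1.predict(X) - q0.predict(X)       # estimated benefit of treating

    # Weighted classification: label = sign of the contrast, weight = its magnitude.
    labels = (contrast > 0).astype(int)
    clf = DecisionTreeClassifier(max_depth=2).fit(X, labels, sample_weight=np.abs(contrast))

    rule = clf.predict(X)                          # estimated treatment rule
    print("agreement with the true optimal rule:", np.mean(rule == (X[:, 0] > 0)))
    ```

    With multiple decision points, the same construction would be applied backward from the last stage to the first, which is the setting the abstract describes.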

  9. A mathematical programming approach for sequential clustering of dynamic networks

    NASA Astrophysics Data System (ADS)

    Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia

    2016-02-01

    A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state and previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes to which these students belong more accurately than partitioning each time step individually or partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.

  10. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodel and minimum points of a density function. In this way, increasingly accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
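
    A rough sketch of this kind of sequential enrichment is shown below: an RBF surrogate is rebuilt after every new sample, and each iteration adds the surrogate's current minimizer as the next sample point. It is a simplified illustration only (the paper's method also uses extremum points of the metamodel and minimum points of a density function, which are omitted here); the test function and kernel choice are assumptions.

    ```python
    # Sketch of sequential sampling for an RBF metamodel on a 1-D test function.
    # Simplified: each iteration adds only the surrogate's minimizer as a new sample.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize_scalar

    def expensive(x):                      # stand-in for a costly simulation
        return np.sin(3 * x) + 0.5 * x ** 2

    X = np.linspace(-2.0, 2.0, 5)          # initial space-filling samples
    y = expensive(X)

    for _ in range(10):
        rbf = RBFInterpolator(X[:, None], y, kernel="thin_plate_spline")
        res = minimize_scalar(lambda t: rbf(np.array([[t]]))[0], bounds=(-2, 2), method="bounded")
        x_new = res.x
        if np.min(np.abs(X - x_new)) < 1e-6:
            break                          # the surrogate has stopped proposing new points
        X = np.append(X, x_new)            # enrich the sample and rebuild the metamodel
        y = np.append(y, expensive(x_new))

    print("best sampled point:", X[np.argmin(y)], "value:", y.min())
    ```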

  11. On the origin of reproducible sequential activity in neural circuits

    NASA Astrophysics Data System (ADS)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
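
    The kind of reproducible sequential switching described here can be reproduced qualitatively by integrating a small generalized Lotka-Volterra network with asymmetric (May-Leonard-type) inhibition. The sketch below is purely illustrative: the three-unit network, the parameter values, and the switching diagnostic are choices made for the example, not the system analysed in the paper.

    ```python
    # Sketch: sequential switching of activity in a 3-unit generalized Lotka-Volterra
    # network with asymmetric inhibition (illustrative May-Leonard-type parameters).
    import numpy as np
    from scipy.integrate import solve_ivp

    sigma = np.ones(3)                       # growth rates of the three units
    rho = np.array([[1.0, 0.8, 1.3],         # cyclic, asymmetric competition matrix
                    [1.3, 1.0, 0.8],
                    [0.8, 1.3, 1.0]])

    def glv(t, x):
        # dx_i/dt = x_i * (sigma_i - sum_j rho_ij * x_j)
        return x * (sigma - rho @ x)

    x0 = np.array([0.6, 0.2, 0.1])
    sol = solve_ivp(glv, (0.0, 200.0), x0, max_step=0.1)

    # The trajectory passes sequentially near the saddle points, so the identity of
    # the dominant (most active) unit switches in a fixed cyclic order.
    dominant = np.argmax(sol.y, axis=0)
    n_switches = int(np.count_nonzero(np.diff(dominant)))
    print("switches between dominant units:", n_switches)
    ```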

  12. On the origin of reproducible sequential activity in neural circuits.

    PubMed

    Afraimovich, V S; Zhigulin, V P; Rabinovich, M I

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.

  13. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speedup over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
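
    The partition-then-match idea generalizes readily to ordinary multicore CPUs: cluster the source landmarks with K-means into one chunk per worker, then match each chunk against the target point set in a separate process. The sketch below is a schematic illustration of that pattern using nearest-neighbour matching on random points; it is not the Cell/B.E. implementation or the registration pipeline described in the paper.

    ```python
    # Sketch: K-means data partition + parallel nearest-neighbour point matching.
    # Schematic multicore illustration with random points, not the paper's Cell/B.E. pipeline.
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial import cKDTree
    from concurrent.futures import ProcessPoolExecutor

    def match_chunk(args):
        source_chunk, target = args
        tree = cKDTree(target)                    # spatial index over the target landmarks
        dist, idx = tree.query(source_chunk)      # nearest target point for each source point
        return dist, idx

    def parallel_match(source, target, n_workers=4):
        # Partition the source landmarks into one chunk per available worker.
        labels = KMeans(n_clusters=n_workers, n_init=10, random_state=0).fit_predict(source)
        chunks = [(source[labels == k], target) for k in range(n_workers)]
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(match_chunk, chunks))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        source = rng.uniform(size=(10_000, 3))    # e.g. landmarks from the moving image
        target = rng.uniform(size=(10_000, 3))    # landmarks from the fixed image
        results = parallel_match(source, target)
        print("matched", sum(len(d) for d, _ in results), "points")
    ```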

  14. On Fixed Points of Strictly Causal Functions

    DTIC Science & Technology

    2013-04-08

    … were defined to be the functions that are strictly contracting with respect to the Cantor metric (also called the Baire distance) on signals …

  15. United Kingdom national paediatric bilateral project: Demographics and results of localization and speech perception testing.

    PubMed

    Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K

    2017-01-01

    To assess longitudinal outcomes in a large and varied population of children receiving bilateral cochlear implants both simultaneously and sequentially. This observational non-randomized service evaluation collected localization and speech recognition in noise data from simultaneously and sequentially implanted children at four time points: before bilateral cochlear implants or before the sequential implant, 1 year, 2 years, and 3 years after bilateral implants. No inclusion criteria were applied, so children with additional difficulties, cochleovestibular anomalies, varying educational placements, 23 different home languages, a full range of outcomes and varying device use were included. 1001 children were included: 465 implanted simultaneously and 536 sequentially, representing just over 50% of children receiving bilateral implants in the UK in this period. In simultaneously implanted children the median age at implant was 2.1 years; 7% were implanted at less than 1 year of age. In sequentially implanted children the interval between implants ranged from 0.1 to 14.5 years. Children with simultaneous bilateral implants localized better than those with one implant. On average children receiving a second (sequential) cochlear implant showed improvement in localization and listening in background noise after 1 year of bilateral listening. The interval between sequential implants had no effect on localization improvement although a smaller interval gave more improvement in speech recognition in noise. Children with sequential implants on average were able to use their second device to obtain spatial release from masking after 2 years of bilateral listening. Although ranges were large, bilateral cochlear implants on average offered an improvement in localization and speech perception in noise over unilateral implants. These data represent the diverse population of children with bilateral cochlear implants in the UK from 2010 to 2012. Predictions of outcomes for individual patients are not possible from these data. However, there are no indications to preclude children with long inter-implant interval having the chance of a second cochlear implant.

  16. Random covering of the circle: the configuration-space of the free deposition process

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2003-12-01

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
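
    These configurations are easy to explore numerically: throw n points on the unit circle, look at the circular spacings between consecutive points, and classify the outcome from those spacings. The Monte Carlo sketch below uses one natural gap-based reading of the four configurations named above (the paper itself works analytically with large-deviation rate functions); the values of n, ρ and the trial count are arbitrary illustrations.

    ```python
    # Monte Carlo sketch of free deposition of n rods of length s on the unit circle.
    # Gap-based classification under one natural reading of the four configurations;
    # n, rho and the trial count are illustrative choices.
    import numpy as np

    def config_flags(n, s, rng):
        pts = np.sort(rng.uniform(size=n))
        gaps = np.diff(pts, append=pts[0] + 1.0)   # circular spacings between consecutive points
        hard_rod = np.all(gaps >= s)               # no two rods overlap
        packing = np.all(gaps < 2 * s)             # no uncovered gap can hold a further rod
        parking = hard_rod and packing             # Renyi-type parking configuration
        covering = np.all(gaps <= s)               # the rods cover the whole circle
        return np.array([hard_rod, packing, parking, covering])

    def estimate(n=5, rho=0.8, trials=200_000, seed=0):
        s = rho / n                                # rod length at point density rho = n*s
        rng = np.random.default_rng(seed)
        counts = sum(config_flags(n, s, rng) for _ in range(trials))
        names = ("hard-rod", "packing", "parking", "covering")
        print(f"rho = {rho}:", ", ".join(f"P({m}) ~ {c / trials:.4g}" for m, c in zip(names, counts)))

    for rho in (0.6, 0.9, 1.2):
        estimate(rho=rho)
    ```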

  17. Application of a multi-beam vibrometer on industrial components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bendel, Karl

    2014-05-27

    Laser Doppler vibrometry is a well-proven tool for the non-contact measurement of vibration. Scanning several measurement points allows the deflection shape of the component to be visualized, ideally a 3D operating deflection shape if a 3-D scanner is applied. Measuring the points sequentially, however, requires stationary behavior during the measurement time, which cannot be guaranteed for many real objects. Therefore, a multipoint laser Doppler vibrometer has been developed by Polytec and the University of Stuttgart with Bosch as industrial partner. A short description of the measurement system is given. Applications for the parallel measurement of the vibration of several points are shown for non-stationary vibrating Bosch components such as power tools or valves.

  18. Cost-effectiveness of the sequential application of tyrosine kinase inhibitors for the treatment of chronic myeloid leukemia.

    PubMed

    Rochau, Ursula; Sroczynski, Gaby; Wolf, Dominik; Schmidt, Stefan; Jahn, Beate; Kluibenschaedl, Martina; Conrads-Frank, Annette; Stenehjem, David; Brixner, Diana; Radich, Jerald; Gastl, Günther; Siebert, Uwe

    2015-01-01

    Several tyrosine kinase inhibitors (TKIs) are approved for chronic myeloid leukemia (CML) therapy. We evaluated the long-term cost-effectiveness of seven sequential therapy regimens for CML in Austria. A cost-effectiveness analysis was performed using a state-transition Markov model. As model parameters, we used published trial data, clinical, epidemiological and economic data from the Austrian CML registry and national databases. We performed a cohort simulation over a life-long time-horizon from a societal perspective. Nilotinib without second-line TKI yielded an incremental cost-utility ratio of 121,400 €/quality-adjusted life year (QALY) compared to imatinib without second-line TKI after imatinib failure. Imatinib followed by nilotinib after failure resulted in 131,100 €/QALY compared to nilotinib without second-line TKI. Nilotinib followed by dasatinib yielded 152,400 €/QALY compared to imatinib followed by nilotinib after failure. Remaining strategies were dominated. The sequential application of TKIs is standard-of-care, and thus, our analysis points toward imatinib followed by nilotinib as the most cost-effective strategy.
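
    The machinery behind such an analysis, a state-transition (Markov) cohort model evaluated over a long time horizon with discounting, can be sketched compactly. The states, transition probabilities, costs, and utilities below are hypothetical placeholders rather than the Austrian registry inputs, and only two strategies are compared, purely to show how an incremental cost-utility ratio is computed.

    ```python
    # Sketch of a state-transition Markov cohort model for a cost-utility comparison.
    # All states, transition probabilities, costs and utilities are hypothetical placeholders.
    import numpy as np

    def run_cohort(P, annual_cost, utility, horizon=40, discount=0.03):
        """Discounted total cost and QALYs per patient for one treatment strategy."""
        dist = np.array([1.0, 0.0, 0.0])             # cohort starts in the chronic phase
        cost = qaly = 0.0
        for year in range(horizon):
            d = 1.0 / (1.0 + discount) ** year       # discount factor for this cycle
            cost += d * dist @ annual_cost
            qaly += d * dist @ utility
            dist = dist @ P                          # advance the cohort by one cycle
        return cost, qaly

    # States: chronic phase, progression, death (hypothetical strategies A and B).
    P_A = np.array([[0.90, 0.07, 0.03], [0.00, 0.80, 0.20], [0.00, 0.00, 1.00]])
    P_B = np.array([[0.93, 0.05, 0.02], [0.00, 0.85, 0.15], [0.00, 0.00, 1.00]])
    costs = {"A": np.array([40_000.0, 60_000.0, 0.0]), "B": np.array([55_000.0, 70_000.0, 0.0])}
    utils = np.array([0.85, 0.60, 0.0])

    cost_A, qaly_A = run_cohort(P_A, costs["A"], utils)
    cost_B, qaly_B = run_cohort(P_B, costs["B"], utils)
    print(f"ICUR of B vs A: {(cost_B - cost_A) / (qaly_B - qaly_A):,.0f} EUR/QALY")
    ```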

  19. Effect of Annealing on Microstructures and Hardening of Helium-Hydrogen-Implanted Sequentially Vanadium Alloys

    NASA Astrophysics Data System (ADS)

    Jiang, Shaoning; Wang, Zhiming

    2018-03-01

    The effect of post-irradiation annealing on the microstructures and mechanical properties of V-4Cr-4Ti alloys was studied. V-4Cr-4Ti alloys irradiated sequentially with helium and hydrogen at room temperature (RT) underwent post-irradiation annealing at 450 °C over periods of up to 30 h. The samples were examined by high-resolution transmission electron microscopy (HRTEM) and nanoindentation testing. With increasing holding time, the large number of point defects produced during irradiation at RT accumulated into large dislocation loops and then dislocation networks, which promoted irradiation hardening. Meanwhile, bubbles appeared. As the annealing time was extended, these bubbles grew, merged, and finally broke up; in the process, the size of the bubbles increased and their number density decreased. Microstructural changes due to post-irradiation annealing corresponded to the change in hardening: dislocations and bubbles both contribute to irradiation hardening. Even with holding times up to 30 h, the recovery of hardening is not pronounced. The phenomenon is discussed in terms of the dispersed barrier hardening model and the Friedel-Kroupa-Hirsch relationship.

  20. The Fixed-Point Theory of Strictly Causal Functions

    DTIC Science & Technology

    2013-06-09

    … strictly causal functions were defined to be the functions that are strictly contracting with respect to the Cantor metric (also called the Baire distance) on signals …

  1. Precise determination of time to reach viral load set point after acute HIV-1 infection.

    PubMed

    Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao

    2012-12-01

    The HIV viral load set point has long been used as a prognostic marker of disease progression and more recently as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we discovered that the time to reach the set point may vary from 21 to 119 days, depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventive vaccine studies.

  2. The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-04-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded; under linear elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., the back wall or large scatterers) and allow damage to be detected at an early stage.

  3. The impact of uncertainty on optimal emission policies

    NASA Astrophysics Data System (ADS)

    Botta, Nicola; Jansson, Patrik; Ionescu, Cezar

    2018-05-01

    We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of crossing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.
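
    The qualitative setting can be mimicked with a toy finite-horizon dynamic program: at each step the planner decides whether to reduce emissions, the decision is implemented only with some probability, and exceeding a cumulative-emission threshold incurs a recurring penalty. Everything below (state space, rewards, probabilities) is an illustrative stand-in, not the computational framework or the numbers used in the paper.

    ```python
    # Toy backward-induction sketch: sequential emission decisions with uncertain
    # implementability and a cumulative-emission threshold (illustrative numbers only).
    import numpy as np

    T = 5                  # number of decision steps
    MAX_E = 10             # cumulative-emission levels 0..MAX_E
    THRESHOLD = 7          # exceeding this level incurs a damage penalty each step
    P_IMPLEMENT = 0.7      # probability that a decided reduction is actually implemented
    ECON_BENEFIT = 1.0     # per-step benefit of emitting at the high rate
    DAMAGE = 5.0           # per-step penalty once the threshold is exceeded

    def step_reward(e):
        return -DAMAGE if e > THRESHOLD else 0.0

    V = np.zeros(MAX_E + 1)                        # terminal value function
    policy = np.zeros((T, MAX_E + 1), dtype=int)   # 1 = decide to reduce emissions

    for t in reversed(range(T)):
        V_new = np.empty_like(V)
        for e in range(MAX_E + 1):
            hi = min(e + 2, MAX_E)                 # next state if emissions stay high
            lo = min(e + 1, MAX_E)                 # next state if the reduction is implemented
            v_keep = ECON_BENEFIT + step_reward(hi) + V[hi]
            v_reduce = (P_IMPLEMENT * (step_reward(lo) + V[lo])
                        + (1.0 - P_IMPLEMENT) * (ECON_BENEFIT + step_reward(hi) + V[hi]))
            policy[t, e] = int(v_reduce > v_keep)
            V_new[e] = max(v_keep, v_reduce)
        V = V_new

    print("optimal decision (1 = reduce) by step (rows) and cumulative emission level (columns):")
    print(policy)
    ```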

  4. On-line sequential injection-capillary electrophoresis for near-real-time monitoring of extracellular lactate in cell culture flasks.

    PubMed

    Alhusban, Ala A; Gaudry, Adam J; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M

    2014-01-03

    Cell culture has replaced many in vivo studies because of ethical and regulatory measures as well as the possibility of increased throughput. Analytical assays to determine (bio)chemical changes are often based on end-point measurements rather than on a series of sequential determinations. The purpose of this work is to develop an analytical system for monitoring cell culture based on sequential injection-capillary electrophoresis (SI-CE) with capacitively coupled contactless conductivity detection (C(4)D). The system was applied for monitoring lactate production, an important metabolic indicator, during mammalian cell culture. Using a background electrolyte consisting of 25 mM tris(hydroxymethyl)aminomethane, 35 mM cyclohexyl-2-aminoethanesulfonic acid with 0.02% poly(ethyleneimine) (PEI) at pH 8.65 and a multilayer polymer coated capillary, lactate could be resolved from other compounds present in media with relative standard deviations of 0.07% for intraday electrophoretic mobility and an analysis time of less than 10 min. Using the human embryonic kidney cell line HEK293, lactate concentrations in the cell culture medium were measured every 20 min over 3 days, requiring only 8.73 μL of sample per run. Combining simplicity, portability, automation, high sample throughput, low limits of detection, low sample consumption and the ability to scale up and out, this new methodology represents a promising technique for near real-time monitoring of chemical changes in diverse cell culture applications. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Automated Registration of Sequential Breath-Hold Dynamic Contrast-Enhanced MRI Images: a Comparison of 3 Techniques

    PubMed Central

    Rajaraman, Sivaramakrishnan; Rodriguez, Jeffery J.; Graff, Christian; Altbach, Maria I.; Dragovich, Tomislav; Sirlin, Claude B.; Korn, Ronald L.; Raghunand, Natarajan

    2011-01-01

    Dynamic Contrast-Enhanced MRI (DCE-MRI) is increasingly in use as an investigational biomarker of response in cancer clinical studies. Proper registration of images acquired at different time-points is essential for deriving diagnostic information from quantitative pharmacokinetic analysis of these data. Motion artifacts in the presence of time-varying intensity due to contrast-enhancement make this registration problem challenging. DCE-MRI of chest and abdominal lesions is typically performed during sequential breath-holds, which introduces misregistration due to inconsistent diaphragm positions, and also places constraints on temporal resolution vis-à-vis free-breathing. In this work, we have employed a computer-generated DCE-MRI phantom to compare the performance of two published methods, Progressive Principal Component Registration and Pharmacokinetic Model-Driven Registration, with Sequential Elastic Registration (SER) to register adjacent time-sample images using a published general-purpose elastic registration algorithm. In all 3 methods, a 3-D rigid-body registration scheme with a mutual information similarity measure was used as a pre-processing step. The DCE-MRI phantom images were mathematically deformed to simulate misregistration which was corrected using the 3 schemes. All 3 schemes were comparably successful in registering large regions of interest (ROIs) such as muscle, liver, and spleen. SER was superior in retaining tumor volume and shape, and in registering smaller but important ROIs such as tumor core and tumor rim. The performance of SER on clinical DCE-MRI datasets is also presented. PMID:21531108

  6. Effectiveness of sequential automatic-manual home respiratory polygraphy scoring.

    PubMed

    Masa, Juan F; Corral, Jaime; Pereira, Ricardo; Duran-Cantolla, Joaquin; Cabello, Marta; Hernández-Blasco, Luis; Monasterio, Carmen; Alonso-Fernandez, Alberto; Chiner, Eusebi; Vázquez-Polo, Francisco-José; Montserrat, Jose M

    2013-04-01

    Automatic home respiratory polygraphy (HRP) scoring functions can potentially confirm the diagnosis of sleep apnoea-hypopnoea syndrome (SAHS) (obviating technician scoring) in a substantial number of patients. The result would have important management and cost implications. The aim of this study was to determine the diagnostic cost-effectiveness of a sequential HRP scoring protocol (automatic and then manual for residual cases) compared with manual HRP scoring, and with in-hospital polysomnography. We included suspected SAHS patients in a multicentre study and assigned them to home and hospital protocols at random. We constructed receiver operating characteristic (ROC) curves for manual and automatic scoring. Diagnostic agreement for several cut-off points was explored and costs for two equally effective alternatives were calculated. Of 366 randomised patients, 348 completed the protocol. Manual scoring produced better ROC curves than automatic scoring. No automatic or subsequent manual HRP apnoea-hypopnoea index (AHI) cut-off point was sufficiently sensitive. The specific cut-off points for automatic and subsequent manual HRP scoring (AHI >25 and >20, respectively) had a specificity of 93% for automatic and 94% for manual scoring. The cost of the manual protocol was 9% higher than that of the sequential HRP protocol; these were 69% and 64%, respectively, of the cost of polysomnography. A sequential HRP scoring protocol is a cost-effective alternative to polysomnography, although with limited cost savings compared to HRP manual scoring.

  7. Representative locations from time series of soil water content using time stability and wavelet analysis.

    PubMed

    Rivera, Diego; Lillo, Mario; Granda, Stalin

    2014-12-01

    The concept of time stability has been widely used in the design and assessment of soil moisture monitoring networks, as well as in hydrological studies, because it is a technique that allows the identification of particular locations that represent the mean soil moisture of the field. In this work, we assess how time stability calculations change as new information is added and how they are affected over shorter periods, subsampled from the original time series, that contain different amounts of precipitation. In doing so, we defined two experiments to explore the time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points equipped with four sensors each in a 2-ha blueberry field) show that as information is added to the time series, there are changes in the location of the most stable point (MSP), and that, within the moving 21-day windows, most of the variability in soil water content is associated with both the amount and the intensity of rainfall. The changes of the MSP over each window depend on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This situation must be taken into account when water management depends on soil water content values from fixed locations.
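
    The time-stability calculation itself is simple: for every location, compute its relative difference from the spatial mean at each sampling time, then rank locations by the mean and spread of those relative differences; the most stable point (MSP) is the location whose mean relative difference is closest to zero with a small standard deviation. A minimal sketch follows, with synthetic data standing in for the sensor network and a simple combined ranking criterion as an assumption; the windowed analysis just repeats the calculation over 21-day subsamples.

    ```python
    # Sketch of temporal-stability (mean relative difference) analysis for a small
    # soil-moisture network; synthetic data stand in for the real observations.
    import numpy as np

    rng = np.random.default_rng(0)
    n_times, n_locations = 365, 7
    base = 0.25 + 0.05 * np.sin(np.linspace(0, 4 * np.pi, n_times))      # shared seasonal dynamics
    offsets = 0.04 * rng.normal(size=n_locations)                        # persistent location biases
    theta = base[:, None] + offsets[None, :] + 0.01 * rng.normal(size=(n_times, n_locations))

    def most_stable_point(theta):
        spatial_mean = theta.mean(axis=1, keepdims=True)
        rel_diff = (theta - spatial_mean) / spatial_mean      # relative difference at each time step
        mrd = rel_diff.mean(axis=0)                           # mean relative difference per location
        sdrd = rel_diff.std(axis=0)                           # and its standard deviation
        return int(np.argmin(np.abs(mrd) + sdrd)), mrd, sdrd  # simple combined ranking

    msp, mrd, sdrd = most_stable_point(theta)
    print("most stable location over the full record:", msp)

    # Windowed analysis: recompute the MSP over consecutive 21-day windows.
    window = 21
    msp_by_window = [most_stable_point(theta[i:i + window])[0]
                     for i in range(0, n_times - window + 1, window)]
    print("MSP per 21-day window:", msp_by_window)
    ```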

  8. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ are reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.

  9. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more carbon capture and storage (CCS) studies have focused on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different kinds of seismic information (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, continuously recorded time series. Because the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), identifies effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event changes because of the seismic event occurrence. In other words, SDAR aims to find statistical irregularities in the record through CPD. SDAR has three advantages. (1) Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, so it is an appropriate technique for low-SNR data. (2) Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. (3) Discounting property: SDAR introduces a discounting parameter to gradually decrease the influence of present statistics on future updates, which makes SDAR a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
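
    The discounting idea at the core of SDAR can be sketched with a recursive least-squares AR fit that uses a forgetting factor, so that older samples are progressively down-weighted and the normalized one-step prediction error serves as the change-point score. The code below is a simplified stand-in for SDAR with synthetic data and illustrative parameters, not the authors' monitoring pipeline.

    ```python
    # Simplified SDAR-style change scoring: on-line AR(k) fit via recursive least
    # squares with a forgetting (discounting) factor; the normalized prediction
    # error is the anomaly/change-point score. Synthetic data, illustrative parameters.
    import numpy as np

    def discounted_ar_scores(x, order=5, lam=0.99):
        n = len(x)
        w = np.zeros(order)                         # AR coefficients
        P = np.eye(order) * 1e3                     # inverse-covariance estimate
        var = 1.0                                   # running (discounted) error variance
        scores = np.zeros(n)
        for t in range(order, n):
            phi = x[t - order:t][::-1]              # lagged regressor vector
            err = x[t] - w @ phi                    # one-step prediction error
            k = P @ phi / (lam + phi @ P @ phi)     # RLS gain
            w = w + k * err                         # discounted coefficient update
            P = (P - np.outer(k, phi @ P)) / lam
            var = lam * var + (1 - lam) * err ** 2
            scores[t] = err ** 2 / (var + 1e-12)    # large values flag change points
        return scores

    # Synthetic record: background noise with a short burst ("event") in the middle.
    rng = np.random.default_rng(0)
    x = rng.normal(scale=1.0, size=3000)
    x[1500:1520] += 6.0 * np.sin(np.linspace(0, 6 * np.pi, 20))

    scores = discounted_ar_scores(x)
    print("score peak near sample:", int(np.argmax(scores)))   # expected near 1500
    ```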

  10. Sequential recognition of the pre-mRNA branch point by U2AF65 and a novel spliceosome-associated 28-kDa protein.

    PubMed Central

    Gaur, R K; Valcárcel, J; Green, M R

    1995-01-01

    Splicing of pre-mRNAs occurs via a lariat intermediate in which an intronic adenosine, embedded within a branch point sequence, forms a 2',5'-phosphodiester bond (RNA branch) with the 5' end of the intron. How the branch point is recognized and activated remains largely unknown. Using site-specific photochemical cross-linking, we have identified two proteins that specifically interact with the branch point during the splicing reaction. U2AF65, an essential splicing factor that binds to the adjacent polypyrimidine tract, crosslinks to the branch point at the earliest stage of spliceosome formation in an ATP-independent manner. A novel 28-kDa protein, which is a constituent of the mature spliceosome, contacts the branch point after the first catalytic step. Our results indicate that the branch point is sequentially recognized by distinct splicing factors in the course of the splicing reaction. PMID:7493318

  11. Tracking Time Evolution of Collective Attention Clusters in Twitter: Time Evolving Nonnegative Matrix Factorisation.

    PubMed

    Saito, Shota; Hirata, Yoshito; Sasahara, Kazutoshi; Suzuki, Hideyuki

    2015-01-01

    Micro-blogging services, such as Twitter, offer opportunities to analyse user behaviour. Discovering and distinguishing behavioural patterns in micro-blogging services is valuable. However, it is difficult and challenging to distinguish users, and to track the temporal development of collective attention within distinct user groups in Twitter. In this paper, we formulate this problem as tracking matrices decomposed by Nonnegative Matrix Factorisation for time-sequential matrix data, and propose a novel extension of Nonnegative Matrix Factorisation, which we refer to as Time Evolving Nonnegative Matrix Factorisation (TENMF). In our method, we describe users and words posted in some time interval by a matrix, and use several matrices as time-sequential data. Subsequently, we apply Time Evolving Nonnegative Matrix Factorisation to these time-sequential matrices. TENMF can decompose time-sequential matrices, and can track the connection among decomposed matrices, whereas previous NMF decomposes a matrix into two lower dimension matrices arbitrarily, which might lose the time-sequential connection. Our proposed method has an adequately good performance on artificial data. Moreover, we present several results and insights from experiments using real data from Twitter.
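
    One lightweight way to approximate this kind of temporal tracking, short of reimplementing the TENMF update rules, is to warm-start the factorisation of each time slice with the factors obtained for the previous slice, so that component indices stay aligned across time. The sketch below does exactly that with scikit-learn's NMF and synthetic, slowly drifting user-word matrices; it is an illustrative approximation, not the authors' algorithm.

    ```python
    # Sketch: tracking NMF factors over time-sequential matrices by warm-starting
    # each factorisation with the previous factors (an approximation of temporal
    # tracking, not the TENMF update rules from the paper).
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_users, n_words, k, n_steps = 200, 300, 5, 6

    # Synthetic, slowly drifting topics and fixed user memberships.
    H_true = rng.random((k, n_words))
    W_true = rng.random((n_users, k))
    matrices = []
    for t in range(n_steps):
        H_true += 0.05 * rng.random((k, n_words))          # topics drift slowly over time
        matrices.append(W_true @ H_true + 0.01 * rng.random((n_users, n_words)))

    W_prev = H_prev = None
    for t, X in enumerate(matrices):
        if W_prev is None:
            model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
            W = model.fit_transform(X)
        else:
            model = NMF(n_components=k, init="custom", max_iter=500)
            W = model.fit_transform(X, W=W_prev.copy(), H=H_prev.copy())
        H = model.components_
        if H_prev is not None:
            drift = np.linalg.norm(H - H_prev) / np.linalg.norm(H_prev)
            print(f"step {t}: relative topic drift = {drift:.3f}")
        W_prev, H_prev = W, H
    ```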

  12. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    PubMed

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  13. Improving the Sequential Time Perception of Teenagers with Mild to Moderate Mental Retardation with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David

    2009-01-01

    Children with mental retardation have pronounced difficulties in using cognitive strategies and comprehending abstract concepts--among them, the concept of sequential time (Van-Handel, Swaab, De-Vries, & Jongmans, 2007). The perception of sequential time is generally tested by using scenarios presenting a continuum of actions. The goal of this…

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besemer, A; Marsh, I; Bednarz, B

    Purpose: The calculation of 3D internal doses in targeted radionuclide therapy requires the acquisition and temporal coregistration of serial PET/CT or SPECT/CT images. This work investigates the dosimetric impact of different temporal coregistration methods commonly used for 3D internal dosimetry. Methods: PET/CT images of four mice were acquired at 1, 24, 48, 72, 96, 144 hrs post-injection of {sup 124}I-CLR1404. The therapeutic {sup 131}I-CLR1404 absorbed dose rate (ADR) was calculated at each time point using a Geant4-based Monte Carlo (MC) dosimetry platform and three temporal image coregistration methods: (1) no coregistration (NC), (2) whole-body sequential CT-CT affine coregistration (WBAC), and (3) individual sequential ROI-ROI affine coregistration (IRAC). For NC, only the ROI mean ADR was integrated to obtain ROI mean doses. For WBAC, the CT at each time point was coregistered to a single reference CT. The CT transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the whole CT volume. For IRAC, each individual ROI was isolated and sequentially coregistered to a single reference ROI. The ROI transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the ROI volumes. Results: The percent differences in the ROI mean doses were as large as 109%, 88%, and 32% when comparing the WBAC vs. IRAC, NC vs. IRAC, and NC vs. WBAC methods, respectively. The coefficient of variation (CoV) in the mean dose across the three methods ranged from 2% to 36%. The pronounced curvature of the spinal cord was not adequately coregistered using WBAC, which resulted in large differences between the WBAC and IRAC results. Conclusion: The method used for temporal image coregistration can result in large differences in 3D internal dosimetry calculations. Care must be taken to choose the most appropriate method depending on the imaging conditions, clinical site, and specific application. This work is partially funded by NIH Grant R21 CA198392-01.

  15. Effect of Casting Material on the Cast Pressure After Sequential Cast Splitting.

    PubMed

    Roberts, Aaron; Shaw, K Aaron; Boomsma, Shawn E; Cameron, Craig D

    2017-01-01

    Circumferential casting is a vital component of nonoperative fracture management. These casts are commonly valved to release pressure and decrease the risk of complications from swelling. However, little information exists regarding the effect of different casting supplies on the pressure within the cast. Seventy-five long-arm casts were performed on human volunteers, divided among 5 experimental groups with 15 casts in each group. Testing groups consisted of 2 groups with a plaster short-arm cast overwrapped with fiberglass to a long-arm cast, with either cotton or synthetic cast padding. The 3 remaining groups included fiberglass long-arm casts with cotton, synthetic, or waterproof cast padding. A pediatric blood pressure cuff bladder was placed within the cast and inflated to 100 mm Hg. After inflation, the cast was sequentially released, with pressure readings performed after each stage. The order of release consisted of cast bivalve, cast padding release, and cotton stockinet release. After release, the cast was overwrapped with a loose elastic bandage. Differences in pressure readings were compared based upon the cast material. Pressures within the cast were found to decrease with sequential release of the cast. The cast type had no effect on the change in pressure. Post hoc testing demonstrated that the type of cast padding significantly affected the cast pressures, with waterproof padding demonstrating the highest pressure readings at all time points in the study, followed by synthetic padding. Cotton padding had the lowest pressure readings at all time points. The type of cast padding significantly influences the amount of pressure within a long-arm cast, even after bivalving the cast and cutting the cast padding. Cotton cast padding allows for the greatest change in pressure. Cotton padding demonstrates the greatest change in pressure within a long-arm cast after being bivalved. Synthetic and waterproof cast padding should not be used in the setting of an acute fracture to accommodate swelling.

  16. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu

    2013-06-01

    Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between the two images. The identification of the feature points on a planar x-ray image is realized by searching for points with a high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for the four patient datasets with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
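
    The detect-describe-match pattern referred to here (keypoints at high-gradient locations, SIFT descriptors, and a ratio-test association between the two projections) can be sketched with OpenCV in a few lines. The snippet below uses synthetic images and illustrates only the generic pattern; the paper's gradient-based point selection, geometric constraints, and clinical pipeline are not reproduced.

    ```python
    # Minimal detect-describe-match sketch with SIFT keypoints and a ratio test,
    # in the spirit of the feature-association step above (not the paper's pipeline).
    # Synthetic blurred-noise images stand in for the two x-ray projections.
    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    noise = (rng.random((512, 512)) * 255).astype(np.uint8)
    img1 = cv2.GaussianBlur(noise, (0, 0), 3)            # blob-like texture -> plenty of keypoints
    img2 = np.roll(img1, shift=(5, 3), axis=(0, 1))      # shifted copy guarantees true matches

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)        # keypoints at high-gradient locations
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)       # two best candidates per descriptor

    # Lowe's ratio test keeps only distinctive correspondences between the two images.
    good = [m for m, n in candidates if m.distance < 0.7 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    print(f"kept {len(good)} matches out of {len(candidates)} candidate pairs")
    print("median shift estimate (x, y):", np.median(pts2 - pts1, axis=0))
    ```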

  17. Using timed event sequential data in nursing research.

    PubMed

    Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony

    2015-01-01

    Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.

  18. Computer graphic visualization of orbiter lower surface boundary-layer transition

    NASA Technical Reports Server (NTRS)

    Throckmorton, D. A.; Hartung, L. C.

    1984-01-01

    Computer graphic techniques are applied to the processing of Shuttle Orbiter flight data in order to create a visual presentation of the extent and movement of the boundary-layer transition front over the orbiter lower surface during entry. Flight-measured surface temperature-time histories define the onset and completion of the boundary-layer transition process at any measurement location. The locus of points which define the spatial position of the boundary-layer transition front on the orbiter planform is plotted at each discrete time for which flight data are available. Displaying these images sequentially in real-time results in an animated simulation of the in-flight boundary-layer transition process.

  19. Impact of sequential proton density fat fraction for quantification of hepatic steatosis in nonalcoholic fatty liver disease.

    PubMed

    Idilman, Ilkay S; Keskin, Onur; Elhan, Atilla Halil; Idilman, Ramazan; Karcaaltincaba, Musturay

    2014-05-01

    To determine the utility of sequential MRI-estimated proton density fat fraction (MRI-PDFF) for quantification of the longitudinal changes in liver fat content in individuals with nonalcoholic fatty liver disease (NAFLD). A total of 18 consecutive individuals (M/F: 10/8, mean age: 47.7±9.8 years) diagnosed with NAFLD, who underwent sequential PDFF calculations for the quantification of hepatic steatosis at two different time points, were included in the study. All patients underwent T1-independent volumetric multi-echo gradient-echo imaging with T2* correction and spectral fat modeling. A close correlation for quantification of hepatic steatosis between the initial MRI-PDFF and liver biopsy was observed (rs=0.758, p<0.001). The median interval between the two sequential MRI-PDFF measurements was 184 days. From baseline to the end of the follow-up period, serum GGT level and homeostasis model assessment score were significantly improved (p=0.015, p=0.006, respectively), whereas BMI, serum AST, and ALT levels were slightly decreased. MRI-PDFFs were significantly improved (p=0.004). A good correlation between the two sequential MRI-PDFF calculations was observed (rs=0.714, p=0.001). With linear regression analyses, only delta serum ALT levels had a significant effect on delta MRI-PDFF calculations (r2=38.6%, p=0.006). An improvement in MRI-PDFF of at least 5.9% was needed to normalize an abnormal ALT level. In patients whose MRI-PDFF improved, this improvement was associated with improvement of biochemical parameters (p<0.05). MRI-PDFF can be used for the quantification of the longitudinal changes of hepatic steatosis. The changes in serum ALT levels significantly reflected changes in MRI-PDFF in patients with NAFLD.

  20. A SEQUENTIAL, MULTIPLE-TREATMENT, TARGETED APPROACH TO REDUCE WOUND HEALING AND FAILURE OF GLAUCOMA FILTRATION SURGERY IN A RABBIT MODEL (AN AMERICAN OPHTHALMOLOGICAL SOCIETY THESIS)

    PubMed Central

    Sherwood, Mark Brian

    2006-01-01

    Purpose The purpose of this study was to evaluate the concept of targeting mediators of the scarring process at multiple points across the course of bleb failure, in order to prolong bleb survival. Methods There were three linked parts to the experiment. In the first part, a cannula glaucoma filtration surgery (GFS) was performed on 32 New Zealand White (NZW) rabbits, and bleb survival was assessed for six different regimens plus controls by grading bleb height and width. For the second part of the study, the same GFS surgery was performed on an additional 10 NZW rabbits. Two additional filtering blebs were treated with balanced saline solution (BSS), two received mitomycin-C (MMC) (0.4 mg/mL), and for the remaining six, a sequential regimen was given consisting of 200 mmol/L mannose-6-phosphate (M-6-P) solution at the time of surgery, followed by subconjunctival injections of antibody to connective tissue growth factor at days 2 and 4, and Ilomastat, a broad-spectrum matrix metalloproteinase inhibitor, at days 7, 12, and 20 postoperatively. Bleb survival was again assessed. In the final part of the experiment, blebs treated with either BSS, MMC, or the above sequential multitreatment regimen were examined histologically at 14 days postoperatively in three additional NZW rabbits. Results All six individual therapies selected resulted in some improvement of bleb survival compared to BSS control. Blebs treated with the new sequential, multitreatment protocol survived an average of 29 days (regression slope, P < .0001 compared to control), those receiving BSS an average of 17 days, and those treated with MMC (0.4 mg/mL) an average of 36 days. The sequential, multitreatment regimen was significantly superior to any of the six monotherapies for time to zero analysis (flattening) of the bleb (P < .002). Histologic examination of the bleb tissues showed a markedly less epithelial thinning, subepithelial collagen thinning, and goblet cell loss in the multitreatment group, when compared with the MMC blebs. Conclusions In a rabbit model of GFS, a sequential, targeted, multitreatment approach prolonged bleb survival compared to BSS controls and decreased bleb tissue morphological changes when compared to those treated with MMC. It is not known whether these findings can be reproduced in humans, and further work is needed to determine an optimum regimen and timing of therapeutic delivery. PMID:17471357

  1. The dynamics of behavior in modified dictator games

    PubMed Central

    2017-01-01

    We investigate the dynamics of individual pro-social behavior over time. The dynamics are tested by running the same experiment with the same subjects at several points in time. To exclude learning and reputation building, we employ non-strategic decision tasks and a sequential prisoner's dilemma as a control treatment. In the first wave, pro-social concerns explain a high share of individual decisions. Pro-social decisions decrease over time, however. In the final wave, most decisions can be accounted for by assuming pure selfishness. Stable behavior, in the sense that subjects stick to their decisions over time, is observed predominantly for purely selfish subjects. We offer two explanations for our results: diminishing experimenter demand effects and moral self-licensing. PMID:28448506

  2. Simultaneous dual contrast weighting using double echo rapid acquisition with relaxation enhancement (RARE) imaging.

    PubMed

    Fuchs, Katharina; Hezel, Fabian; Klix, Sabrina; Mekle, Ralf; Wuerfel, Jens; Niendorf, Thoralf

    2014-12-01

    This work proposes a dual contrast rapid acquisition with relaxation enhancement (RARE) variant (2in1-RARE), which provides simultaneous proton density (PD) and T2* contrast in a single acquisition. The underlying concept of 2in1-RARE is the strict separation of spin echoes and stimulated echoes. This approach offers independent weighting of spin echoes and stimulated echoes. 2in1-RARE was evaluated in phantoms including signal-to-noise ratio (SNR) and point spread function assessment. 2in1-RARE was benchmarked versus coherent RARE and a split-echo RARE variant. The applicability of 2in1-RARE for brain imaging was demonstrated in a small cohort of healthy subjects (n = 10) and, as an example, in a multiple sclerosis patient at 3 Tesla as a precursor to a broader clinical study. 2in1-RARE enables the simultaneous acquisition of dual contrast weighted images without any significant image degradation and without sacrificing SNR versus split-echo RARE. This translates into a factor of two speed gain over multi-contrast, sequential split-echo RARE. A 15% broadening of the point spread function was observed in 2in1-RARE. T1 relaxation effects during the mixing time can be neglected for brain tissue. 2in1-RARE offers simultaneous acquisition of images of anatomical (PD) and functional (T2*) contrast. It presents an alternative to address scan time constraints frequently encountered during sequential acquisition of T2*- or PD-weighted RARE. © 2013 Wiley Periodicals, Inc.

  3. Description and effects of sequential behavior practice in teacher education.

    PubMed

    Sharpe, T; Lounsbery, M; Bahls, V

    1997-09-01

    This study examined the effects of a sequential behavior feedback protocol on the practice-teaching experiences of undergraduate teacher trainees. The performance competencies of teacher trainees were analyzed using an alternative opportunities for appropriate action measure. Data support the added utility of sequential (Sharpe, 1997a, 1997b) behavior analysis information in systematic observation approaches to teacher education. One field-based undergraduate practicum using sequential behavior (i.e., field systems analysis) principles was monitored. Summarized are the key elements of the (a) classroom instruction provided as a precursor to the practice teaching experience, (b) practice teaching experience, and (c) field systems observation tool used for evaluation and feedback, including multiple-baseline data (N = 4) to support this approach to teacher education. Results point to (a) the strong relationship between sequential behavior feedback and the positive change in four preservice teachers' day-to-day teaching practices in challenging situational contexts, and (b) the relationship between changes in teacher practices and positive changes in the behavioral practices of gymnasium pupils. Sequential behavior feedback was also socially validated by the undergraduate participants and Professional Development School teacher supervisors in the study.

  4. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In the method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
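
    The velocity-vector blending described above can be illustrated with a small numerical sketch (not the original NASA implementation); the blend window, the first-order and cycloidal blending functions, and the example velocities below are illustrative assumptions only:

```python
import numpy as np

def blend_velocity(v1, v2, t, tau, mode="cycloidal"):
    """Blend two segment velocity vectors over a window of length tau.

    v1, v2 : velocities of the segments before/after a via frame
    t      : time since the start of the blend, 0 <= t <= tau
    mode   : 'linear' (first-order) or 'cycloidal' blending function
    """
    s = np.clip(t / tau, 0.0, 1.0)
    if mode == "cycloidal":
        # cycloidal blend: zero-slope start and end of the transition
        alpha = s - np.sin(2 * np.pi * s) / (2 * np.pi)
    else:
        alpha = s  # first-order (linear) blend
    return (1.0 - alpha) * np.asarray(v1) + alpha * np.asarray(v2)

# example: blend from a 0.2 m/s x-motion to a 0.2 m/s y-motion over 0.5 s
for t in np.linspace(0.0, 0.5, 6):
    print(round(t, 2), blend_velocity([0.2, 0.0, 0.0], [0.0, 0.2, 0.0], t, 0.5))
```

    The cycloidal option starts and ends the transition with zero slope, which is why such blends are often preferred when acceleration discontinuities must be limited.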

  5. Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users

    NASA Astrophysics Data System (ADS)

    Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi

    This paper proposes an emotion estimation algorithm based on e-Learning users' facial images. The algorithm characteristics are as follows: the criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from the time-sequential analysis of the user's facial expressions. By examining the emotions of the e-Learning users and the positional changes of the facial expressions from the experiment results, the following procedures are introduced to improve the estimation reliability: (1) some effective feature points are chosen for the emotion estimation; (2) subjects are divided into two groups by the change rates of the face feature points; (3) the eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); (4) emotion is calculated using the Mahalanobis distance.
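
    A toy sketch of the estimation pipeline in steps (1)-(4), assuming PCA as the eigen-decomposition of the variance-covariance matrix (retaining >= 95% cumulative contribution) and a nearest-class Mahalanobis rule; the random data, class labels, and dimensions are placeholders rather than the authors' facial feature set:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# toy data: rows are facial feature-point displacement vectors, three emotions
y = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, 10)) + y[:, None] * 0.8   # give each class its own offset

pca = PCA(n_components=0.95)   # keep eigenvectors up to 95% cumulative contribution
Z = pca.fit_transform(X)

means = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}
VI = np.linalg.pinv(np.cov(Z, rowvar=False))       # pooled inverse covariance

def estimate_emotion(x_new):
    """Assign the class whose mean is nearest in Mahalanobis distance."""
    z = pca.transform(x_new.reshape(1, -1))[0]
    return min(means, key=lambda c: mahalanobis(z, means[c], VI))

print(estimate_emotion(X[5]), estimate_emotion(X[45]))   # expect 0 and 2
```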

  6. Piecewise multivariate modelling of sequential metabolic profiling data.

    PubMed

    Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan

    2008-02-19

    Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective of modelling the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models is estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time-related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
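
    A schematic sketch of the piecewise idea, using scikit-learn's PLSRegression as a stand-in for OPLS (which scikit-learn does not provide); the simulated profiles, sample sizes, and the binary "which time point" response are invented for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# toy metabolic profiles: samples x variables, measured at 4 time points
rng = np.random.default_rng(1)
time_points = [0, 1, 2, 3]
profiles = {t: rng.normal(loc=t * 0.5, size=(12, 30)) for t in time_points}

# one model per pair of successive time points (the piecewise approach)
piecewise_models = []
for t0, t1 in zip(time_points[:-1], time_points[1:]):
    X = np.vstack([profiles[t0], profiles[t1]])
    y = np.concatenate([np.zeros(12), np.ones(12)])   # "which time point"
    model = PLSRegression(n_components=2).fit(X, y.reshape(-1, 1))
    piecewise_models.append(((t0, t1), model))

# inspect which variables drive the change between each pair of time points
for (t0, t1), model in piecewise_models:
    top = np.argsort(np.abs(model.coef_.ravel()))[-3:]
    print(f"{t0}->{t1}: most influential variables {top}")
```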

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Zhu, X; Zhang, M

    Purpose Half-beam block is a field matching technique frequently used in radiotherapy. With no setup error, a well calibrated linac, and no internal organ motion, two photon fields can be matched seamlessly dosimetry-wise with their central axes passing the match line. However, in actual clinical situations, internal organ motion is often inevitable. This study was conducted to investigate its influence on radiation dose to patient internal points directly under the matching line. Methods A clinical setting is modeled as two half-space (x<0 and x>0) radiation fields that are turned on sequentially with a time gap equal to an integer number of patient internal organ motion periods (T0). Our point of interest moves with the patient's internal organs periodically and evenly in and out of the radiation fields, resulting in an average location at x=0. When the fields are delivered without any motion management, the initial phase of the point's movement is unknown. Statistical methods are used to compute the expected value and variance (σ) of the point dose given this uncertainty. Results Analytical solutions are obtained for the expected value and variance of the dose received by a point directly under the match line. The expected value is proportional to the total beam-on time (T1), and σ demonstrates previously unknown periodic behavior.
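
    The setup can also be explored numerically: the sketch below Monte Carlo-averages the dose to a point oscillating about the match line over a uniformly distributed, unknown initial phase. All parameter values (period, amplitude, beam-on time, dose rate) are arbitrary assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
T0 = 4.0          # assumed organ-motion period (s)
A = 1.0           # assumed motion amplitude about the match line (cm)
T1 = 62.0         # assumed beam-on time per field (s)
gap = 3 * T0      # gap between fields: an integer number of motion periods
dose_rate = 1.0   # arbitrary dose units per second while inside a field

def point_dose(phase, n=5000):
    """Dose to a point whose average position is x=0, for one initial phase."""
    dt = T1 / n
    t = np.arange(n) * dt
    # field 1 covers x<0 and is on for 0 <= t < T1
    x1 = A * np.sin(2 * np.pi * t / T0 + phase)
    # field 2 covers x>0 and starts at T1 + gap
    x2 = A * np.sin(2 * np.pi * (t + T1 + gap) / T0 + phase)
    return dose_rate * dt * ((x1 < 0).sum() + (x2 > 0).sum())

phases = rng.uniform(0, 2 * np.pi, 1000)   # unknown initial phase, no gating
doses = np.array([point_dose(p) for p in phases])
print("expected dose:", doses.mean(), "std dev:", doses.std())
```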

  8. Comparison of Statistical Approaches Dealing with Time-dependent Confounding in Drug Effectiveness Studies

    PubMed Central

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W.; Tremlett, Helen

    2017-01-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models (MSCMs) are frequently used to deal with such confounding. To avoid some of the problems of fitting MSCM, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as MSCM in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995 – 2008). PMID:27659168

  9. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    PubMed

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
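
    As a heavily simplified illustration of the weighting idea behind marginal structural models (a single time-fixed treatment and confounder, not the time-dependent setting or the sequential Cox approach studied in the paper), one can fit an inverse-probability-of-treatment-weighted Cox model; the simulated data and the choice of packages (lifelines, scikit-learn) are assumptions of this sketch:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 5000
L = rng.normal(size=n)                                   # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-1.5 * L)))          # treatment depends on L
# exponential event times: hazard raised by L, lowered by treatment (-0.5 log-HR)
rate = 0.1 * np.exp(0.8 * L - 0.5 * A)
T = rng.exponential(1 / rate)
E = np.ones(n, dtype=int)                                # no censoring, for simplicity

# stabilised inverse-probability-of-treatment weights from a propensity model
ps = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(L.reshape(-1, 1))[:, 1]
pA = A.mean()
w = np.where(A == 1, pA / ps, (1 - pA) / (1 - ps))

df = pd.DataFrame({"T": T, "E": E, "A": A, "w": w})
naive = CoxPHFitter().fit(df[["T", "E", "A"]], duration_col="T", event_col="E")
weighted = CoxPHFitter().fit(df, duration_col="T", event_col="E",
                             weights_col="w", robust=True)
print("naive log-HR for A:   ", round(naive.params_["A"], 2))   # confounded by L
print("weighted log-HR for A:", round(weighted.params_["A"], 2))  # close to -0.5
```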

  10. Comparative efficacy of simultaneous versus sequential multiple health behavior change interventions among adults: A systematic review of randomised trials.

    PubMed

    James, Erica; Freund, Megan; Booth, Angela; Duncan, Mitch J; Johnson, Natalie; Short, Camille E; Wolfenden, Luke; Stacey, Fiona G; Kay-Lambkin, Frances; Vandelanotte, Corneel

    2016-08-01

    Growing evidence points to the benefits of addressing multiple health behaviors rather than single behaviors. This review evaluates the relative effectiveness of simultaneous and sequentially delivered multiple health behavior change (MHBC) interventions. Secondary aims were to identify: a) the most effective spacing of sequentially delivered components; b) differences in efficacy of MHBC interventions for adoption/cessation behaviors and lifestyle/addictive behaviors; and c) differences in trial retention between simultaneously and sequentially delivered interventions. MHBC intervention trials published up to October 2015 were identified through a systematic search. Eligible trials were randomised controlled trials that directly compared simultaneous and sequential delivery of a MHBC intervention. A narrative synthesis was undertaken. Six trials met the inclusion criteria and across these trials the behaviors targeted were smoking, diet, physical activity, and alcohol consumption. Three trials reported a difference in intervention effect between a sequential and simultaneous approach in at least one behavioral outcome. Of these, two trials favoured a sequential approach on smoking. One trial favoured a simultaneous approach on fat intake. There was no difference in retention between sequential and simultaneous approaches. There is limited evidence regarding the relative effectiveness of sequential and simultaneous approaches. Given that only three of the six trials observed a difference in intervention effectiveness for one health behavior outcome, and the relatively consistent finding that the sequential and simultaneous approaches were more effective than a usual/minimal care control condition, it appears that both approaches should be considered equally efficacious. PROSPERO registration number: CRD42015027876. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Reducing sedation time for thyroplasty with arytenoid adduction with sequential anesthetic technique.

    PubMed

    Saadeh, Charles K; Rosero, Eric B; Joshi, Girish P; Ozayar, Esra; Mau, Ted

    2017-12-01

    To determine the extent to which a sequential anesthetic technique 1) shortens time under sedation for thyroplasty with arytenoid adduction (TP-AA), 2) affects the total operative time, and 3) changes the voice outcome compared to TP-AA performed entirely under sedation/analgesia. Case-control study. A new sequential anesthetic technique of performing most of the TP-AA surgery under general anesthesia (GA), followed by transition to sedation/analgesia (SA) for voice assessment, was developed to achieve smooth emergence from GA. Twenty-five TP-AA cases performed with the sequential GA-SA technique were compared with 25 TP-AA controls performed completely under sedation/analgesia. The primary outcome measure was the time under sedation. Voice improvement, as assessed by Consensus Auditory-Perceptual Evaluation of Voice, and total operative time were secondary outcome measures. With the conventional all-SA anesthetic, the duration of SA was 209 ± 26.3 minutes. With the sequential GA-SA technique, the duration of SA was 79.0 ± 18.9 minutes, a 62.3% reduction (P < 0.0001). There was no significant difference in the total operative time (209.5 vs. 200.9 minutes; P = 0.42) or in voice outcome. This sequential anesthetic technique has been easily adopted by multiple anesthesiologists and nurse anesthetists at our institution. TP-AA is effectively performed under sequential GA-SA technique with a significant reduction in the duration of time under sedation. This allows the surgeon to perform the technically more challenging part of the surgery under GA, without having to contend with variability in patient tolerance for laryngeal manipulation under sedation. 3b. Laryngoscope, 127:2813-2817, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  12. Modulation of Limbic and Prefrontal Connectivity by Electroconvulsive Therapy in Treatment-resistant Depression: A Preliminary Study.

    PubMed

    Cano, Marta; Cardoner, Narcís; Urretavizcaya, Mikel; Martínez-Zalacaín, Ignacio; Goldberg, Ximena; Via, Esther; Contreras-Rodríguez, Oren; Camprodon, Joan; de Arriba-Arnau, Aida; Hernández-Ribas, Rosa; Pujol, Jesús; Soriano-Mas, Carles; Menchón, José M

    2016-01-01

    Although current models of depression suggest that a sequential modulation of limbic and prefrontal connectivity is needed for illness recovery, neuroimaging studies of electroconvulsive therapy (ECT) have focused on assessing functional connectivity (FC) before and after an ECT course, without characterizing functional changes occurring at early treatment phases. To assess sequential changes in limbic and prefrontal FC during the course of ECT and their impact on clinical response. Longitudinal intralimbic and limbic-prefrontal networks connectivity study. We assessed 15 patients with treatment-resistant depression at four different time-points throughout the entire course of an ECT protocol and 10 healthy participants at two functional neuroimaging examinations. Furthermore, a path analysis to test direct and indirect predictive effects of limbic and prefrontal FC changes on clinical response measured with the Hamilton Rating Scale for Depression was also performed. An early significant intralimbic FC decrease significantly predicted a later increase in limbic-prefrontal FC, which in turn significantly predicted clinical improvement at the end of an ECT course. Our data support that treatment response involves sequential changes in FC within regions of the intralimbic and limbic-prefrontal networks. This approach may help in identifying potential early biomarkers of treatment response. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2015-11-01

    Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least square support vector machine (LS_SVM) classifier to classify the EEG signals. The LS_SVM classifier classified the features extracted and selected by the SRS and SFS steps. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
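
    A minimal sketch of the three steps on synthetic data, assuming scikit-learn's SequentialFeatureSelector for the SFS stage and a standard RBF SVM in place of an LS_SVM; the epoch counts, feature sizes, and class offset are fabricated:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# toy "EEG": 100 epochs x 512 time samples, two classes (seizure / non-seizure)
X_raw = rng.normal(size=(100, 512))
y = rng.integers(0, 2, size=100)
X_raw[y == 1] += 0.5                        # give class 1 a detectable offset

# 1) simple random sampling of time-domain points as candidate features
srs_idx = rng.choice(X_raw.shape[1], size=40, replace=False)
X_srs = X_raw[:, srs_idx]

# 2) sequential forward selection of the key features
svm = SVC(kernel="rbf")                     # stand-in for an LS-SVM
sfs = SequentialFeatureSelector(svm, n_features_to_select=10, direction="forward")
X_sel = sfs.fit_transform(X_srs, y)

# 3) classify the selected features
print("CV accuracy:", cross_val_score(svm, X_sel, y, cv=5).mean())
```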

  15. Clinical and logopaedic results of simultaneous and sequential bilateral implants in children with severe and/or profound bilateral sensorineural hearing loss: A literature review.

    PubMed

    López-Torrijo, Manuel; Mengual-Andrés, Santiago; Estellés-Ferrer, Remedios

    2015-06-01

    This article carries out a literature review of the advantages and limitations of the simultaneous bilateral cochlear implantation (SCI) compared to those of the sequential bilateral cochlear implantation (SBCI) and the unilateral cochlear implantation (UCI). The variables analysed in said comparison are: safety and surgical technique, SCI incidence, effectiveness, impact of the inter-implant interval, costs and financing, impact on brain plasticity, impact on speech and language development, main benefits, main disadvantages and concerns, and predictive factors of prognosis. Although the results are not conclusive, all variables analysed seem to point towards observable benefits of SCI in comparison with SBCI or UCI. This tendency should be studied in more depth in multicentre studies with higher methodological rigour, more comprehensive samples and periods and other determining variables (age at the time of implantation, duration and degree of the hearing loss, rehabilitation methodologies used, family involvement, etc.). Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Dynamics of Sequential Decision Making

    NASA Astrophysics Data System (ADS)

    Rabinovich, Mikhail I.; Huerta, Ramón; Afraimovich, Valentin

    2006-11-01

    We suggest a new paradigm for intelligent decision-making suitable for dynamical sequential activity of animals or artificial autonomous devices that depends on the characteristics of the internal and external world. To do so, we introduce a new class of dynamical models that are described by ordinary differential equations with a finite number of possibilities at the decision points, and that also include rules for resolving this uncertainty. Our approach is based on the competition between possible cognitive states using their stable transient dynamics. The model controls the order of choosing successive steps of a sequential activity according to the environment and decision-making criteria. Two strategies (high-risk and risk-aversion conditions) that move the system out of an erratic environment are analyzed.
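
    The competition between cognitive states via stable transient (heteroclinic) dynamics can be sketched with a generalized Lotka-Volterra system; the three-state connection matrix and initial condition below are illustrative assumptions, chosen only so that the transiently dominant state switches in a fixed order:

```python
import numpy as np
from scipy.integrate import solve_ivp

# asymmetric inhibition produces a sequence of transient "winners"
# (winnerless competition) rather than a single stable fixed point
rho = np.array([[1.0, 1.3, 0.6],
                [0.6, 1.0, 1.3],
                [1.3, 0.6, 1.0]])
sigma = np.array([1.0, 1.0, 1.0])          # growth rates of the cognitive states

def glv(t, a):
    return a * (sigma - rho @ a)

sol = solve_ivp(glv, (0, 100), [0.3, 0.02, 0.01], dense_output=True)
t = np.linspace(0, 100, 400)
winners = np.argmax(sol.sol(t), axis=0)
seq = [int(winners[0])] + [int(w) for i, w in enumerate(winners[1:], 1)
                           if w != winners[i - 1]]
print("order of transiently dominant states:", seq)
```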

  17. The learning curve for access creation in solo ultrasonography-guided percutaneous nephrolithotomy and the associated skills.

    PubMed

    Yu, Weimin; Rao, Ting; Li, Xing; Ruan, Yuan; Yuan, Run; Li, Chenglong; Li, Haoyong; Cheng, Fan

    2017-03-01

    The aim of the current trial was to evaluate the learning curve of access creation through solo ultrasonography (US)-guided percutaneous nephrolithotomy (PCNL), and clarify the technical details of the procedure. We evaluated the first 240 solo US-guided PCNLs performed by one surgeon at our institution. The data, including the puncture procedure, access characteristics, access-related complications and stone-free rates, were assessed in four sequential groups. The puncture duration and number of puncture attempts decreased from a mean of 4.4 min and 2.1 attempts for the first 60 patients to 1.3 min and 1.2 attempts for the last 60 patients. There was a significant decrease from 3.7 min and 1.8 attempts for the 61st-120th patients to 1.5 min and 1.3 attempts for the 121st-180th patients. All of the access-related severe bleeding appeared in the first 120 patients, while perforations only occurred in the first 60 patients. The stone-free rates were 68.3, 83.3, 90.0, and 93.3% for the four sequential groups. The increase in experience led to an improvement in the puncture duration and number of attempts, accompanied by better stone-free rates and fewer complications. We propose that 60 operations are sufficient to gain competency, and a cutoff point of 120 operations will allow the surgeon to achieve excellence in the solo US-guided PCNL.

  18. Sequential Bayesian Geostatistical Inversion and Evaluation of Combined Data Worth for Aquifer Characterization at the Hanford 300 Area

    NASA Astrophysics Data System (ADS)

    Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.

    2010-12-01

    Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner, and quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profile and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that it updates the posterior distribution as a new dataset is included. The sequential assimilation can decrease computational burden significantly. We applied MAD to assimilate different combinations of the datasets, and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN is used for simulating the tracer test. For comparison, we used different metrics based on the breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison is intended to yield the combined data worth, i.e., which combination of the datasets is the most effective for a given metric, which will be useful for guiding further characterization efforts at the site and future characterization projects at other sites.
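
    The sequential-updating idea (each new dataset updates the posterior, which then serves as the prior for the next dataset) can be sketched on a one-parameter toy problem; this is not MAD itself, and the grid, noise levels, and dataset sizes are invented:

```python
import numpy as np
from scipy import stats

theta = np.linspace(-6, -2, 401)                   # candidate log10 K values
posterior = np.full_like(theta, 1.0 / theta.size)  # flat prior

rng = np.random.default_rng(4)
true_theta = -4.2
# three toy "datasets" of growing size and precision, standing in for the
# flowmeter, injection-test and tracer-test data described above
datasets = [(0.8, true_theta + rng.normal(0, 0.8, 5)),
            (0.5, true_theta + rng.normal(0, 0.5, 10)),
            (0.3, true_theta + rng.normal(0, 0.3, 20))]

for i, (sigma, data) in enumerate(datasets, 1):
    # likelihood of the whole dataset for every candidate value of theta
    like = np.prod(stats.norm.pdf(data[:, None], loc=theta[None, :], scale=sigma), axis=0)
    posterior *= like                 # Bayes update: old posterior becomes the prior
    posterior /= posterior.sum()
    mean = float(np.sum(theta * posterior))
    sd = float(np.sqrt(np.sum((theta - mean) ** 2 * posterior)))
    print(f"after dataset {i}: log10 K = {mean:.2f} +/- {sd:.2f}")
```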

  19. Adolescents' Religiousness and Substance Use Are Linked via Afterlife Beliefs and Future Orientation.

    PubMed

    Holmes, Christopher J; Kim-Spoon, Jungmeen

    2017-10-01

    Although religiousness has been identified as a protective factor against adolescent substance use, processes through which these effects may operate are unclear. The current longitudinal study examined sequential mediation of afterlife beliefs and future orientation in the relation between adolescent religiousness and cigarette, alcohol, and marijuana use. Participants included 131 adolescents (mean age at Time 1 = 12 years) at three time points with approximately two year time intervals. Structural equation modeling indicated that higher religiousness at Time 1 was associated with higher afterlife beliefs at Time 2. Higher afterlife beliefs at Time 2 were associated with higher future orientation at Time 2, which in turn was associated with lower use of cigarettes, alcohol, and marijuana at Time 3. Our findings highlight the roles of afterlife beliefs and future orientation in explaining the beneficial effects of religiousness against adolescent substance use.

  20. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
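
    A toy sketch of the parallel-sequential subtraction logic for a single image pixel, assuming an idealized scatterer that responds linearly unless it is a "closed crack" with a saturating (tanh) response; the element count and drive amplitudes are arbitrary:

```python
import numpy as np

n_elements = 16

def scatterer_response(drive_amplitude, nonlinear):
    """Toy scatterer: linear unless `nonlinear`, in which case it saturates."""
    if nonlinear:
        return np.tanh(drive_amplitude)       # closed crack: amplitude-dependent
    return drive_amplitude                    # linear reflector

def image_pixel(nonlinear):
    element_amps = np.full(n_elements, 0.2)
    # parallel focusing: all elements fire together -> high amplitude at the focus
    parallel = scatterer_response(element_amps.sum(), nonlinear)
    # sequential focusing: elements fire one at a time, responses summed afterwards
    sequential = sum(scatterer_response(a, nonlinear) for a in element_amps)
    return abs(parallel - sequential)         # nonlinear metric for this pixel

print("linear reflector :", image_pixel(nonlinear=False))   # ~0
print("closed crack     :", image_pixel(nonlinear=True))    # clearly > 0
```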

  1. American Society of Clinical Oncology Clinical Practice Guideline: Update on Adjuvant Endocrine Therapy for Women With Hormone Receptor–Positive Breast Cancer

    PubMed Central

    Burstein, Harold J.; Prestrud, Ann Alexis; Seidenfeld, Jerome; Anderson, Holly; Buchholz, Thomas A.; Davidson, Nancy E.; Gelmon, Karen E.; Giordano, Sharon H.; Hudis, Clifford A.; Malin, Jennifer; Mamounas, Eleftherios P.; Rowden, Diana; Solky, Alexander J.; Sowers, MaryFran R.; Stearns, Vered; Winer, Eric P.; Somerfield, Mark R.; Griggs, Jennifer J.

    2010-01-01

    Purpose To develop evidence-based guidelines, based on a systematic review, for endocrine therapy for postmenopausal women with hormone receptor–positive breast cancer. Methods A literature search identified relevant randomized trials. Databases searched included MEDLINE, PREMEDLINE, the Cochrane Collaboration Library, and those for the Annual Meetings of the American Society of Clinical Oncology (ASCO) and the San Antonio Breast Cancer Symposium (SABCS). The primary outcomes of interest were disease-free survival, overall survival, and time to contralateral breast cancer. Secondary outcomes included adverse events and quality of life. An expert panel reviewed the literature, especially 12 major trials, and developed updated recommendations. Results An adjuvant treatment strategy incorporating an aromatase inhibitor (AI) as primary (initial endocrine therapy), sequential (using both tamoxifen and an AI in either order), or extended (AI after 5 years of tamoxifen) therapy reduces the risk of breast cancer recurrence compared with 5 years of tamoxifen alone. Data suggest that including an AI as primary monotherapy or as sequential treatment after 2 to 3 years of tamoxifen yields similar outcomes. Tamoxifen and AIs differ in their adverse effect profiles, and these differences may inform treatment preferences. Conclusion The Update Committee recommends that postmenopausal women with hormone receptor–positive breast cancer consider incorporating AI therapy at some point during adjuvant treatment, either as up-front therapy or as sequential treatment after tamoxifen. The optimal timing and duration of endocrine treatment remain unresolved. The Update Committee supports careful consideration of adverse effect profiles and patient preferences in deciding whether and when to incorporate AI therapy. PMID:20625130

  2. Monitoring dynamic loads on wind tunnel force balances

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T.; White, William C.

    1989-01-01

    Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. The display of each instrument is a multiplex of six analog signals such that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format thus permits the operator to quickly and easily monitor the combined static and dynamic level of up to six channels at the same time.

  3. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    PubMed

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost effective compared to sequentially performed surgery (ablation followed by reconstruction). Case-control study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.

  4. The role of action control and action planning on fruit and vegetable consumption.

    PubMed

    Zhou, Guangyu; Gan, Yiqun; Miao, Miao; Hamilton, Kyra; Knoll, Nina; Schwarzer, Ralf

    2015-08-01

    Globally, fruit and vegetable intake is lower than recommended despite being an important component of a healthy diet. Adopting or maintaining a sufficient amount of fruit and vegetables in one's diet may require not only motivation but also self-regulatory processes. Action control and action planning are two key volitional determinants that have been identified in the literature; however, it is not fully understood how these two factors operate between intention and behavior. Thus, the aim of the current study was to explore the roles of action control and action planning as mediators between intentions and dietary behavior. A longitudinal study with three points in time was conducted. Participants (N = 286) were undergraduate students who were invited to participate in a health behavior survey. At baseline (Time 1), measures of intention and fruit and vegetable intake were assessed. Two weeks later (Time 2), action control and action planning were assessed as putative sequential mediators. At Time 3 (two weeks after Time 2), fruit and vegetable consumption was measured as the outcome. The results revealed action control and action planning to sequentially mediate between intention and subsequent fruit and vegetable intake, controlling for baseline behavior. Both self-regulatory constructs, action control and action planning, make a difference when moving from motivation to action. Our preliminary evidence, therefore, suggests that planning may be more proximal to fruit and vegetable intake than action control. Further research, however, needs to be undertaken to substantiate this conclusion. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. In Situ Observation of Modulated Light Emission of Fiber Fuse Synchronized with Void Train over Hetero-Core Splice Point

    PubMed Central

    Todoroki, Shin-ichi

    2008-01-01

    Background Fiber fuse is a process of optical fiber destruction under the action of laser radiation, found 20 years ago. Once initiated, the optical discharge runs along the fiber core region toward the light source and leaves periodic voids whose shape looks like a bullet pointing in the direction of the laser beam. The relation between damage pattern and propagation mode of optical discharge is still unclear even after the first in situ observation three years ago. Methodology/Principal Findings Fiber fuse propagation over a hetero-core splice point (Corning SMF-28e and HI 1060) was observed in situ. Sequential photographs obtained at intervals of 2.78 µs recorded a periodic emission at the tail of an optical discharge pumped by 1070 nm, 9 W light. The signal stopped when the discharge ran over the splice point. The corresponding damage pattern left in the fiber core region included a segment free of periodicity. Conclusions The spatial modulation pattern of the light emission agreed with the void train formed over the hetero-core splice point. Some segments included a bullet-shaped void pointing in the opposite direction to the laser beam propagation although the sequential photographs did not reveal any directional change in the optical discharge propagation. PMID:18815621

  6. Application of modified Martinez-Silva algorithm in determination of net cover

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Łukasz; Grobelna, Iwona

    2016-12-01

    In the article we present the idea of modifications of the Martinez-Silva algorithm, which allows for the determination of place invariants (p-invariants) of a Petri net. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for the separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and are focused on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.
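
    A compact sketch of the Farkas-style elimination at the core of the Martinez-Silva method (without the cover-oriented modifications proposed in the article); the toy two-place net at the end is an invented example:

```python
import numpy as np

def p_invariants(C):
    """Farkas-style elimination (the core of the Martinez-Silva method):
    non-negative integer vectors x with x^T C = 0.

    C : incidence matrix of the Petri net, shape (places, transitions).
    Returns generators of the p-invariants as rows of an integer array.
    """
    n_places, n_trans = C.shape
    A = np.asarray(C, dtype=int).copy()      # working copy of the incidence part
    B = np.eye(n_places, dtype=int)          # identity part accumulates the invariants
    for j in range(n_trans):                 # eliminate one transition column at a time
        keep = [i for i in range(A.shape[0]) if A[i, j] == 0]
        pos = [i for i in range(A.shape[0]) if A[i, j] > 0]
        neg = [i for i in range(A.shape[0]) if A[i, j] < 0]
        new_A = [A[i] for i in keep]
        new_B = [B[i] for i in keep]
        for p in pos:
            for q in neg:
                # positive combination of the two rows cancels column j
                a = (-A[q, j]) * A[p] + A[p, j] * A[q]
                b = (-A[q, j]) * B[p] + A[p, j] * B[q]
                g = np.gcd.reduce(np.concatenate([a, b]))
                if g > 1:
                    a, b = a // g, b // g
                new_A.append(a)
                new_B.append(b)
        A = np.array(new_A, dtype=int).reshape(-1, n_trans)
        B = np.array(new_B, dtype=int).reshape(-1, n_places)
    return B

# toy cyclic net: t1 moves a token p1 -> p2, t2 moves it back p2 -> p1
C = np.array([[-1,  1],
              [ 1, -1]])
print(p_invariants(C))   # [[1 1]]: p1 + p2 is constant, so the net is covered
```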

  7. Numbers of center points appropriate to blocked response surface experiments

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1979-01-01

    Tables are given for the numbers of center points to be used with blocked sequential designs of composite response surface experiments as used in empirical optimum seeking. The star point radii for exact orthogonal blocking are presented. The center point options varied from a lower limit of one to an upper limit equal to the numbers proposed by Box and Hunter for approximate rotatability and uniform variance, and exact orthogonal blocking. Some operating characteristics of the proposed options are described.

  8. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in the estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
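
    A minimal sketch of plain (unweighted, non-sequential) Wiener estimation, the starting point of the method above: the estimation matrix W = C_xy C_yy^{-1} is built from training statistics and applied to a new colour measurement. The linear surrogate forward model and noise level are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# toy training set: tissue parameters x (e.g. CtHb, StO2) and the colour
# measurements y they produce through a surrogate forward model plus noise
n_train, n_params, n_channels = 500, 2, 6
X = rng.uniform(0, 1, size=(n_train, n_params))
M = rng.normal(size=(n_params, n_channels))           # surrogate forward model
Y = X @ M + 0.01 * rng.normal(size=(n_train, n_channels))

# Wiener estimation matrix W = C_xy C_yy^{-1}, from training statistics
Cxy = (X - X.mean(0)).T @ (Y - Y.mean(0)) / n_train
Cyy = (Y - Y.mean(0)).T @ (Y - Y.mean(0)) / n_train
W = Cxy @ np.linalg.inv(Cyy)

# estimate parameters for a new colour measurement
x_true = np.array([0.4, 0.7])
y_new = x_true @ M + 0.01 * rng.normal(size=n_channels)
x_hat = X.mean(0) + (y_new - Y.mean(0)) @ W.T
print("true:", x_true, "estimated:", np.round(x_hat, 3))
```

    The sequential weighted variant described in the abstract additionally re-weights the colour channels according to how reliably each one constrains a given parameter and estimates the parameters one after another; that bookkeeping is omitted here.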

  9. Sequential activation of CD8+ T cells in the draining lymph nodes in response to pulmonary virus infection.

    PubMed

    Yoon, Heesik; Legge, Kevin L; Sung, Sun-sang J; Braciale, Thomas J

    2007-07-01

    We have used a TCR-transgenic CD8+ T cell adoptive transfer model to examine the tempo of T cell activation and proliferation in the draining lymph nodes (DLN) in response to respiratory virus infection. The T cell response in the DLN differed for mice infected with different type A influenza strains with the onset of T cell activation/proliferation to the A/JAPAN virus infection preceding the A/PR8 response by 12-24 h. This difference in T cell activation/proliferation correlated with the tempo of accelerated respiratory DC (RDC) migration from the infected lungs to the DLN in response to influenza virus infection, with the migrant RDC responding to the A/JAPAN infection exhibiting a more rapid accumulation in the lymph nodes (i.e., peak migration for A/JAPAN at 18 h, A/PR8 at 24-36 h). Furthermore, in vivo administration of blocking anti-CD62L Ab at various time points before/after infection revealed that the virus-specific CD8+ T cells entered the DLN and activated in a sequential "conveyor belt"-like fashion. These results indicate that the tempo of CD8+ T cell activation/proliferation after viral infection is dependent on the tempo of RDC migration to the DLN and that T cell activation occurs in an ordered sequential fashion.

  10. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
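
    A small sketch of a diva-style reduction in which the division point is chosen nondeterministically (here, at random); because addition is associative, every choice of split yields the same result, which is the property the paper connects to associativity:

```python
import random

def diva_sum(v, lo=0, hi=None):
    """Divide-and-conquer reduction with a nondeterministic division point.

    The split point is chosen at random before any computation; vector
    components are accessed only on unit-length slices.
    """
    if hi is None:
        hi = len(v)
    if hi - lo == 1:                 # unit-length slice: access the component
        return v[lo]
    mid = random.randint(lo + 1, hi - 1)   # nondeterministic division point
    return diva_sum(v, lo, mid) + diva_sum(v, mid, hi)

data = [3, 1, 4, 1, 5, 9, 2, 6]
print({diva_sum(data) for _ in range(10)})   # always {31}: the split is immaterial
```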

  11. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not the total number of bearings tested. Variation in L10 life as a function of the number of bearings failed was similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
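
    A rough Monte Carlo sketch of sudden-death testing on virtual Weibull-distributed bearing lives, using a crude median-based back-calculation of L10 purely to illustrate the sampling scheme; the Weibull slope, characteristic life, and group sizes are arbitrary assumptions, not the values from the study:

```python
import numpy as np

rng = np.random.default_rng(7)
beta, eta = 1.5, 1.0e3           # assumed Weibull slope and characteristic life (h)

def weibull_lives(n):
    return eta * rng.weibull(beta, size=n)

def sudden_death_L10(n_groups=6, group_size=6, n_trials=2000):
    """Estimate L10 from sudden-death testing: each group stops at its first failure."""
    est = []
    for _ in range(n_trials):
        first_failures = np.array([weibull_lives(group_size).min()
                                   for _ in range(n_groups)])
        # the minimum of m Weibull lives is Weibull with the same slope and
        # characteristic life eta * m**(-1/beta); back out eta, then L10
        eta_hat = (np.median(first_failures) / (np.log(2) ** (1 / beta))
                   * group_size ** (1 / beta))
        est.append(eta_hat * (-np.log(0.9)) ** (1 / beta))
    return np.mean(est), np.std(est)

true_L10 = eta * (-np.log(0.9)) ** (1 / beta)
print("true L10:", round(true_L10, 1), "sudden-death estimate (mean, sd):",
      sudden_death_L10())
```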

  13. Comparing the behavioural impact of a nudge-based handwashing intervention to high-intensity hygiene education: a cluster-randomised trial in rural Bangladesh.

    PubMed

    Grover, Elise; Hossain, Mohammed Kamal; Uddin, Saker; Venkatesh, Mohini; Ram, Pavani K; Dreibelbis, Robert

    2018-01-01

    To determine the impact of environmental nudges on handwashing behaviours among primary school children as compared to a high-intensity hygiene education intervention. In a cluster-randomised trial (CRT), we compared the rates of handwashing with soap (HWWS) after a toileting event among primary school students in rural Bangladesh. Eligible schools (government run, on-site sanitation and water, no hygiene interventions in last year, fewer than 450 students) were identified, and 20 schools were randomly selected and allocated without blinding to one of four interventions, five schools per group: simultaneous handwashing infrastructure and nudge construction, sequential infrastructure then nudge construction, simultaneous infrastructure and high-intensity hygiene education (HE) and sequential handwashing infrastructure and HE. The primary outcome, incidence of HWWS after a toileting event, was compared between the intervention groups at different data collection points with robust-Poisson regression analysis with generalised estimating equations, adjusting for school-level clustering of outcomes. The nudge intervention and the HE intervention were found to be equally effective at sustained impact over 5 months post-intervention (adjusted IRR 0.81, 95% CI 0.61-1.09). When comparing intervention delivery timing, the simultaneous delivery of the HE intervention significantly outperformed the sequential HE delivery (adjusted IRR 1.58 CI 1.20-2.08), whereas no significant difference was observed between sequential and simultaneous nudge intervention delivery (adjusted IRR 0.75, 95% CI 0.48-1.17). Our trial demonstrates sustained improved handwashing behaviour 5 months after the nudge intervention. The nudge intervention's comparable performance to a high-intensity hygiene education intervention is encouraging. © 2017 John Wiley & Sons Ltd.

  14. Improved Murine Blastocyst Quality and Development in a Single Culture Medium Compared to Sequential Culture Media

    PubMed Central

    Hennings, Justin M.; Zimmer, Randall L.; Nabli, Henda; Davis, J. Wade; Sutovsky, Peter; Sutovsky, Miriam; Sharpe-Timms, Kathy L.

    2015-01-01

    Objective: Validate single versus sequential culture media for murine embryo development. Design: Prospective laboratory experiment. Setting: Assisted Reproduction Laboratory. Animals: Murine embryos. Interventions: Thawed murine zygotes cultured for 3 or 5 days (d3 or d5) in single or sequential embryo culture media developed for human in vitro fertilization. Main Outcome Measures: On d3, zygotes developing to the 8 cell (8C) stage or greater were quantified using 4',6-diamidino-2-phenylindole (DAPI), and quality was assessed by morphological analysis. On d5, the number of embryos reaching the blastocyst stage was counted. DAPI was used to quantify total nuclei and inner cell mass nuclei. Localization of ubiquitin C-terminal hydrolase L1 (UCHL1) and ubiquitin C-terminal hydrolase L3 (UCHL3) served as reference points for evaluating cell quality. Results: Comparing outcomes in single versus sequential media, the odds of embryos developing to the 8C stage on d3 were 2.34 times greater (P = .06). On d5, more embryos reached the blastocyst stage (P < .0001), hatched, and had significantly more trophoblast cells (P = .005) contributing to the increased total cell number. Also at d5, localization of distinct cytoplasmic UCHL1 and nuclear UCHL3 was found in high-quality hatching blastocysts. Localization of UCHL1 and UCHL3 was diffuse and inappropriately dispersed throughout the cytoplasm in low-quality nonhatching blastocysts. Conclusions: Single medium yields greater cell numbers, an increased growth rate, and more hatching of murine embryos. Cytoplasmic UCHL1 and nuclear UCHL3 localization patterns were indicative of embryo quality. Our conclusions are limited to murine embryos but one might speculate that single medium may also be more beneficial for human embryo culture. Human embryo studies are needed. PMID:26668049

  15. Improved Murine Blastocyst Quality and Development in a Single Culture Medium Compared to Sequential Culture Media.

    PubMed

    Hennings, Justin M; Zimmer, Randall L; Nabli, Henda; Davis, J Wade; Sutovsky, Peter; Sutovsky, Miriam; Sharpe-Timms, Kathy L

    2016-03-01

    Validate single versus sequential culture media for murine embryo development. Prospective laboratory experiment. Assisted Reproduction Laboratory. Murine embryos. Thawed murine zygotes cultured for 3 or 5 days (d3 or d5) in single or sequential embryo culture media developed for human in vitro fertilization. On d3, zygotes developing to the 8 cell (8C) stage or greater were quantified using 4',6-diamidino-2-phenylindole (DAPI), and quality was assessed by morphological analysis. On d5, the number of embryos reaching the blastocyst stage was counted. DAPI was used to quantify total nuclei and inner cell mass nuclei. Localization of ubiquitin C-terminal hydrolase L1 (UCHL1) and ubiquitin C-terminal hydrolase L3 (UCHL3) served as reference points for evaluating cell quality. Comparing outcomes in single versus sequential media, the odds of embryos developing to the 8C stage on d3 were 2.34 times greater (P = .06). On d5, more embryos reached the blastocyst stage (P < .0001), hatched, and had significantly more trophoblast cells (P = .005) contributing to the increased total cell number. Also at d5, localization of distinct cytoplasmic UCHL1 and nuclear UCHL3 was found in high-quality hatching blastocysts. Localization of UCHL1 and UCHL3 was diffuse and inappropriately dispersed throughout the cytoplasm in low-quality nonhatching blastocysts. Single medium yields greater cell numbers, an increased growth rate, and more hatching of murine embryos. Cytoplasmic UCHL1 and nuclear UCHL3 localization patterns were indicative of embryo quality. Our conclusions are limited to murine embryos but one might speculate that single medium may also be more beneficial for human embryo culture. Human embryo studies are needed. © The Author(s) 2015.

  16. Coalbed-methane pilots - timing, design, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roadifer, R.D.; Moore, T.R.

    2009-10-15

    Four distinct sequential phases form a recommended process for coalbed-methane (CBM) prospect assessment: initial screening, reconnaissance, pilot testing, and final appraisal. Stepping through these four phases provides a program of progressively ramping work and cost, while creating a series of discrete decision points at which results and risks can be assessed. While discussing each of these phases in some degree, this paper focuses on the third, the critically important pilot-testing phase. This paper contains roughly 30 specific recommendations and the fundamental rationale behind each recommendation to help ensure that a CBM pilot will fulfill its primary objectives of (1) demonstrating whether the subject coal reservoir will desorb and produce consequential gas and (2) gathering the data critical to evaluate and risk the prospect at the next, often most critical, decision point.

  17. Soot Volume Fraction Imaging

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Ku, Jerry C.

    1994-01-01

    A new technique is described for the full-field determination of soot volume fractions via laser extinction measurements. This technique differs from previously reported point-wise methods in that a two-dimensional array (i.e., image) of data is acquired simultaneously. In this fashion, the net data rate is increased, allowing the study of time-dependent phenomena and the investigation of spatial and temporal correlations. A telecentric imaging configuration is employed to provide depth-invariant magnification and to permit the specification of the collection angle for scattered light. To improve the threshold measurement sensitivity, a method is employed to suppress undesirable coherent imaging effects. A discussion of the tomographic inversion process is provided, including the results obtained from numerical simulation. Results obtained with this method from an ethylene diffusion flame are shown to be in close agreement with those previously obtained by sequential point-wise interrogation.

  18. Self-regulated learning of important information under sequential and simultaneous encoding conditions.

    PubMed

    Middlebrooks, Catherine D; Castel, Alan D

    2018-05-01

    Learners make a number of decisions when attempting to study efficiently: they must choose which information to study, for how long to study it, and whether to restudy it later. The current experiments examine whether documented impairments to self-regulated learning when studying information sequentially, as opposed to simultaneously, extend to the learning of and memory for valuable information. In Experiment 1, participants studied lists of words ranging in value from 1-10 points sequentially or simultaneously at a preset presentation rate; in Experiment 2, study was self-paced and participants could choose to restudy. Although participants prioritized high-value over low-value information, irrespective of presentation, those who studied the items simultaneously demonstrated superior value-based prioritization with respect to recall, study selections, and self-pacing. The results of the present experiments support the theory that devising, maintaining, and executing efficient study agendas is inherently different under sequential formatting than simultaneous. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Early Astronomical Sequential Photography, 1873-1923

    NASA Astrophysics Data System (ADS)

    Bonifácio, Vitor

    2011-11-01

    In 1873 Jules Janssen conceived the first automatic sequential photographic apparatus to observe the eagerly anticipated 1874 transit of Venus. This device, the 'photographic revolver', is commonly considered today as the earliest cinema precursor. In the following years, in order to study the variability or the motion of celestial objects, several instruments, either manually or automatically actuated, were devised to obtain as many photographs as possible of astronomical events in a short time interval. In this paper we strive to identify from the available documents the attempts made between 1873 and 1923, and discuss the motivations behind them and the results obtained. During the time period studied, astronomical sequential photography was employed to determine the instants of contact in transits and occultations, and to study total solar eclipses. The technique was seldom used, but the invention of the modern film camera apparently played no role in this. Astronomical sequential photographs were obtained both before and after 1895. We conclude that the development of astronomical sequential photography was constrained by the reduced number of subjects to which the technique could be applied.

  20. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We statistically characterize such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for the implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time, under different assumptions about knowledge of the parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
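    The abstract's SMC methods target latent ARMA states with innovations correlated in time and unknown parameters; as a simpler illustration of the sequential Monte Carlo machinery they build on, the sketch below implements a plain bootstrap particle filter for the standard stochastic-volatility model mentioned in the simulations. The model parameters, particle count, and function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bootstrap_particle_filter(y, mu=-1.0, phi=0.95, sigma=0.3, n_particles=2000, seed=0):
    """Minimal bootstrap SMC filter for a stochastic-volatility model:
        x_t = mu + phi*(x_{t-1} - mu) + sigma*v_t,  v_t ~ N(0,1)
        y_t = exp(x_t / 2) * e_t,                   e_t ~ N(0,1)
    Returns the filtered mean of the latent log-volatility at each time step."""
    rng = np.random.default_rng(seed)
    T = len(y)
    # initialise particles from the stationary distribution of the AR(1) state
    x = mu + sigma / np.sqrt(1.0 - phi**2) * rng.standard_normal(n_particles)
    filt_mean = np.empty(T)
    for t in range(T):
        # propagate through the state-transition density p(x_t | x_{t-1})
        x = mu + phi * (x - mu) + sigma * rng.standard_normal(n_particles)
        # weight by the observation likelihood p(y_t | x_t) = N(0, exp(x_t))
        logw = -0.5 * (x + y[t] ** 2 * np.exp(-x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filt_mean[t] = np.sum(w * x)
        # multinomial resampling to avoid weight degeneracy
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return filt_mean
```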

  1. A high-resolution full-field range imaging system

    NASA Astrophysics Data System (ADS)

    Carnegie, D. A.; Cree, M. J.; Dorrington, A. A.

    2005-08-01

    There exist a number of applications where the range to all objects in a field of view needs to be obtained. Specific examples include obstacle avoidance for autonomous mobile robots, process automation in assembly factories, surface profiling for shape analysis, and surveying. Ranging systems can typically be characterized as either laser-scanning systems, where a laser point is sequentially scanned over a scene, or full-field systems, where the range to every point in the image is obtained simultaneously. The former offer advantages in terms of range resolution, while the latter tend to be faster and involve no moving parts. We present a system for determining the range to any object within a camera's field of view, at the speed of a full-field system and with the range resolution of some point laser scanners. Initial results show centimeter range resolution for a 10-second acquisition time. Modifications to the existing system are discussed that should provide faster results with submillimeter resolution.

  2. Phylogenetic analysis accounting for age-dependent death and sampling with applications to epidemics.

    PubMed

    Lambert, Amaury; Alexander, Helen K; Stadler, Tanja

    2014-07-07

    The reconstruction of phylogenetic trees based on viral genetic sequence data sequentially sampled from an epidemic provides estimates of the past transmission dynamics, by fitting epidemiological models to these trees. To our knowledge, none of the epidemiological models currently used in phylogenetics can account for recovery rates and sampling rates dependent on the time elapsed since transmission, i.e. age of infection. Here we introduce an epidemiological model where infectives leave the epidemic, by either recovery or sampling, after some random time which may follow an arbitrary distribution. We derive an expression for the likelihood of the phylogenetic tree of sampled infectives under our general epidemiological model. The analytic concept developed in this paper will facilitate inference of past epidemiological dynamics and provide an analytical framework for performing very efficient simulations of phylogenetic trees under our model. The main idea of our analytic study is that the non-Markovian epidemiological model giving rise to phylogenetic trees growing vertically as time goes by can be represented by a Markovian "coalescent point process" growing horizontally by the sequential addition of pairs of coalescence and sampling times. As examples, we discuss two special cases of our general model, described in terms of influenza and HIV epidemics. Though phrased in epidemiological terms, our framework can also be used for instance to fit macroevolutionary models to phylogenies of extant and extinct species, accounting for general species lifetime distributions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Hybrid Model Predictive Control for Sequential Decision Policies in Adaptive Behavioral Interventions.

    PubMed

    Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S

    2014-06-01

    Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.

  4. Constant speed control of four-stroke micro internal combustion swing engine

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun

    2015-09-01

    The increasing demands on safety, emission and fuel consumption require more accurate control models of micro internal combustion swing engine (MICSE). The objective of this paper is to investigate the constant speed control models of four-stroke MICSE. The operation principle of the four-stroke MICSE is presented based on the description of MICSE prototype. A two-level Petri net based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including breathing dynamics of intake manifold, thermodynamics of the chamber and dynamics of the torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through the comparison of experimental data and simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle average data via the regression method, and it is linearized around a given nominal equilibrium point for the controller design. The feedback controller of the spark timing and valve duration timing is designed with a sequential loop closing design approach. The simulation of the sequential loop closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller shows good MICSE engine performance with a constant speed. This paper presents the constant speed control models of four-stroke MICSE and carries out the simulation tests; the models and the simulation results can be used for further study on the precision control of four-stroke MICSE.

  5. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  6. Derivation of sequential, real-time, process-control programs

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Schneider, Fred B.; Budhiraja, Navin

    1991-01-01

    The use of weakest-precondition predicate transformers in the derivation of sequential, process-control software is discussed. Only one extension to Dijkstra's calculus for deriving ordinary sequential programs was found to be necessary: function-valued auxiliary variables. These auxiliary variables are needed for reasoning about states of a physical process that exists during program transitions.
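    For readers unfamiliar with the calculus this report builds on, the following textbook identities (not formulas taken from the cited report) show how weakest preconditions are computed for assignment and sequencing, with a one-line worked example.

```latex
% Standard wp rules for assignment and sequencing, plus a small worked example.
% These are textbook identities, not equations from the cited report.
\begin{align*}
  wp(x := E,\; R) &= R[x := E] \\
  wp(S_1 ; S_2,\; R) &= wp(S_1,\; wp(S_2,\; R)) \\[4pt]
  wp(x := x + 1;\; y := 2x,\; y > 4)
    &= wp(x := x + 1,\; 2x > 4) \\
    &= 2(x + 1) > 4 \;\equiv\; x > 1
\end{align*}
```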

  7. Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score

    PubMed Central

    Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly

    2013-01-01

    Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639

  8. Non-parametric characterization of long-term rainfall time series

    NASA Astrophysics Data System (ADS)

    Tiwari, Harinarayan; Pandey, Brij Kishor

    2018-03-01

    The statistical study of rainfall time series is one of the approaches for efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving hydrological systems forecasting. In the present study, eventual statistics was applied to the long-term (1851-2006) rainfall time series under seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The observed trend using the above-mentioned approach has been ascertained using the innovative trend analysis method. Innovative trend analysis has been found to be a strong tool to detect the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends of the series. The partial sum of cumulative deviation test is also found to be suitable to detect the nonlinear trend. Innovative trend analysis, the sequential Mann-Kendall test and the partial cumulative deviation test have the potential to detect the general as well as nonlinear trend for the rainfall time series. Annual rainfall analysis suggests that the maximum rise in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous Indian region. The innovative trend analysis method is also capable of finding the number of change points available in the time series. Additionally, we have performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones has a higher departure from homogeneity, and singular spectrum analysis shows the results to be in coherence with the same.
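    As a concrete reference for the trend test named above, here is a minimal sketch of the classical (non-sequential) Mann-Kendall test, assuming no tied values; the sequential variant applied in the study repeats this computation progressively over forward and backward sub-series. Function and variable names are mine, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Classical Mann-Kendall trend test (no tie correction).
    Returns the S statistic and an approximate two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum of signs of all pairwise forward differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance of S when there are no ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))             # two-sided normal approximation
    return s, p
```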

  9. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    PubMed

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high-dynamic range or floating point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms perform poorly already with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute efficiently the final max-tree and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times are better than those of the fastest sequential algorithm, and the speed-up continues to increase up to 64 threads.

  10. Different coding strategies for the perception of stable and changeable facial attributes.

    PubMed

    Taubert, Jessica; Alais, David; Burr, David

    2016-09-01

    Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependencies, leading to positive sequential effects; and adaptation or habituation, which leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate: while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing use of past information, either by integration or differentiation, depending on the permanence of that attribute.

  11. A Scanning Quantum Cryogenic Atom Microscope

    NASA Astrophysics Data System (ADS)

    Lev, Benjamin

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 μm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one-hundred points with an effective field sensitivity of 600 pT/√Hz at each point during the same time as a point-by-point scanner would measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10⁻⁶ Φ₀/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are for the first time carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns and done so using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.

  12. Heterogeneous Suppression of Sequential Effects in Random Sequence Generation, but Not in Operant Learning.

    PubMed

    Shteingart, Hanan; Loewenstein, Yonatan

    2016-01-01

    There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonously with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonous decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogenous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
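    A minimal sketch of the kind of lagged logistic-regression analysis described above: the current binary choice is regressed on the previous few choices, and the fitted weights quantify the sequential effect at each delay. The simulated alternation bias, lag count, and variable names are illustrative assumptions, not the study's data or exact model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lagged_design(choices, n_lags=5):
    """Design matrix whose k-th column is the choice made k+1 trials earlier."""
    choices = np.asarray(choices)
    X = np.column_stack([choices[n_lags - k - 1 : len(choices) - k - 1]
                         for k in range(n_lags)])
    y = choices[n_lags:]
    return X, y

# Simulated "random" binary sequence with a mild alternation bias (illustrative only).
rng = np.random.default_rng(1)
seq = [int(rng.integers(0, 2))]
for _ in range(2000):
    repeat = rng.random() < 0.4          # repeat previous choice with probability 0.4
    seq.append(seq[-1] if repeat else 1 - seq[-1])

X, y = lagged_design(seq, n_lags=5)
model = LogisticRegression().fit(X, y)
print(model.coef_)                       # lag-wise weights = sequential effects by delay
```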

  13. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: for either the expression values of some genes at some time points or the entire expression values of a single time point or some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as a part of the DREAM project, and the real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we proposed a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and hence a more accurate prediction of the underlying GRN compared to using the conventional Gaussian approximation (GA) filters that ignore the missing data points.
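    The point-based Gaussian approximation filters above handle nonlinear state-space models; as a much simpler illustration of the underlying idea of filtering through gaps, the sketch below runs a linear Kalman filter that performs only the prediction step whenever a measurement is missing. All matrices and the NaN convention for missing points are my assumptions, not the paper's model.

```python
import numpy as np

def kalman_filter_with_gaps(y, F, H, Q, R, x0, P0):
    """Linear-Gaussian Kalman filter tolerating missing scalar observations.
    y: (T,) array with np.nan marking missing time points.
    At a missing point only the time update (prediction) is performed."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for yt in y:
        # time update (always performed)
        x = F @ x
        P = F @ P @ F.T + Q
        if not np.isnan(yt):
            # measurement update (skipped entirely when yt is missing)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.atleast_1d(yt) - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```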

  14. [Clinical study on the treatment of acute paraquat poisoning with sequential whole gastric and bowel irrigation].

    PubMed

    Zhao, Bo; Dai, Jingbin; Li, Jun; Xiao, Lei; Sun, Baoquan; Liu, Naizheng; Zhang, Yanmin; Jian, Xiangdong

    2015-03-01

    To explore the clinical efficacy of early application of sequential gastrointestinal lavage in patients with acute paraquat poisoning by analyzing the clinical data of 97 patients. A total of 97 eligible patients with acute paraquat poisoning were divided into conventional treatment group (n = 48) and sequential treatment group (n = 49). The conventional treatment group received routine gastric lavage with water. Then 30 g of montmorillonite powder, 30 g of activated charcoal, and mannitol were given to remove intestinal toxins once a day for five days. The sequential treatment group received 60 g of montmorillonite powder for oral administration, followed by small-volume low-pressure manual gastric lavage with 2.5% bicarbonate liquid. Then 30 g of activated charcoal, 30 g of montmorillonite powder, and polyethylene glycol electrolyte lavage solution were given one after another for gastrointestinal lavage once a day for five days. Both groups received large doses of corticosteroids, blood perfusion, and anti-oxidation treatment. The levels of serum potassium, serum amylase (AMY), alanine aminotransferase (ALT), total bilirubin (TBIL), blood urea nitrogen (BUN), creatinine (Cr), lactate (Lac), and PaO₂ of patients were determined at 1, 3, 5, 7, and 10 days. Laxative time, mortality, and survival time of dead cases were evaluated in the two groups. The incidence rates of hypokalemia (<3.5 mmol/L) and AMY (>110 U/L) were significantly lower in the sequential treatment group than in the conventional treatment group (P < 0.05). There were no significant differences in the incidence of ALT (>80 U/L), TBIL (>34.2 µmol/L), BUN (>7.2 mmol/L), and Cr (>177 µmol/L) between the two groups (P > 0.05). However, the highest levels of ALT, TBIL, BUN, Cr, and Lac were significantly lower in the sequential treatment group than in the conventional treatment group (P < 0.05). Moreover, the sequential treatment group had significantly lower incidence of PaO₂ (<60 mmHg), shorter average laxative time, lower mortality, and longer survival time of dead cases than the conventional treatment group (P < 0.05). The early application of sequential gastrointestinal lavage can shorten laxative time, alleviate organ damage in the liver, kidney, lung, and pancreas, reduce mortality, and prolong the survival time of dead cases in patients with acute paraquat poisoning.

  15. Aging and sequential modulations of poorer strategy effects: An EEG study in arithmetic problem solving.

    PubMed

    Hinault, Thomas; Lemaire, Patrick; Phillips, Natalie

    2016-01-01

    This study investigated age-related differences in electrophysiological signatures of sequential modulations of poorer strategy effects. Sequential modulations of poorer strategy effects refer to decreased poorer strategy effects (i.e., poorer performance when the cued strategy is not the best) on current problem following poorer strategy problems compared to after better strategy problems. Analyses on electrophysiological (EEG) data revealed important age-related changes in time, frequency, and coherence of brain activities underlying sequential modulations of poorer strategy effects. More specifically, sequential modulations of poorer strategy effects were associated with earlier and later time windows (i.e., between 200- and 550 ms and between 850- and 1250 ms). Event-related potentials (ERPs) also revealed an earlier onset in older adults, together with more anterior and less lateralized activations. Furthermore, sequential modulations of poorer strategy effects were associated with theta and alpha frequencies in young adults while these modulations were found in delta frequency and theta inter-hemispheric coherence in older adults, consistent with qualitatively distinct patterns of brain activity. These findings have important implications to further our understanding of age-related differences and similarities in sequential modulations of cognitive control processes during arithmetic strategy execution. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Sequential Effects on Speeded Information Processing: A Developmental Study

    ERIC Educational Resources Information Center

    Smulders, S.F.A.; Notebaert, W.; Meijer, M.; Crone, E.A.; van der Molen, M.W.; Soetens, E.

    2005-01-01

    Two experiments were performed to assess age-related changes in sequential effects on choice reaction time (RT). Sequential effects portray the influence of previous trials on the RT to the current stimulus. In Experiment 1, three age groups (7-9, 10-12, and 18-25 years) performed a spatially compatible choice task, with response-to-stimulus…

  17. Logistics planning for phased programs.

    NASA Technical Reports Server (NTRS)

    Cook, W. H.

    1973-01-01

    It is pointed out that the proper and early integration of logistics planning into the phased program planning process will drastically reduce logistics costs. Phased project planning is a phased approach to the planning, approval, and conduct of major research and development activity. A progressive build-up of knowledge of all aspects of the program is provided. Elements of logistics are discussed together with aspects of integrated logistics support, logistics program planning, and logistics activities for phased programs. Continuing logistics support can only be assured if there is a comprehensive sequential listing of all logistics activities tied to the program schedule and a real-time inventory of assets.

  18. Structure-from-motion for MAV image sequence analysis with photogrammetric applications

    NASA Astrophysics Data System (ADS)

    Schönberger, J. L.; Fraundorfer, F.; Frahm, J.-M.

    2014-08-01

    MAV systems have found increased attention in the photogrammetric community as an (autonomous) image acquisition platform for accurate 3D reconstruction. For an accurate reconstruction in feasible time, the acquired imagery requires specialized SfM software. Current systems typically use high-resolution sensors in pre-planned flight missions from far distance. We describe and evaluate a new SfM pipeline specifically designed for sequential, close-distance, and low-resolution imagery from mobile cameras with relatively high frame-rate and high overlap. Experiments demonstrate reduced computational complexity by leveraging the temporal consistency, comparable accuracy and point density with respect to state-of-the-art systems.

  19. Sequential processing of GNSS-R delay-Doppler maps (DDM's) for ocean wind retrieval

    NASA Astrophysics Data System (ADS)

    Garrison, J. L.; Rodriguez-Alvarez, N.; Hoffman, R.; Annane, B.; Leidner, M.; Kaitie, S.

    2016-12-01

    The delay-Doppler map (DDM) is the fundamental data product from GNSS-Reflectometry (GNSS-R), generated by cross-correlating the scattered signal with a local signal model over a range of delays and Doppler frequencies. Delay and Doppler form a set of coordinates on the ocean surface and the shape of the DDM is related to the distribution of ocean slopes. Wind speed can thus be estimated by fitting a scattering model to the shape of the observed DDM or defining an observable (e.g. average power or leading edge slope) which characterizes the change in DDM shape. For spaceborne measurements, the DDM is composed of signals scattered from a glistening zone, which can extend for up to 100 km or more. Setting a reasonable resolution requirement (25 km or less) will limit the usable portion of the DDM at each observation to only a small region near the specular point. Cyclone-GNSS (CYGNSS) is a NASA mission to study developing tropical cyclones using GNSS-R. CYGNSS science requirements call for wind retrieval with an accuracy of 10 percent above 20 m/s within a 25 km resolution. This requirement can be met using an observable defined for DDM samples between +/- 0.25 chips in delay and +/- 1 kHz in Doppler, with some filtering of the observations using a minimum threshold for range corrected gain (RCG). An improved approach, to be reviewed in this presentation, sequentially processes multiple DDM's, to combine observations generated from different "looks" at the same points on the surface. Applying this sequential process to synthetic data indicates a significant improvement in wind retrieval accuracy over a 10 km grid covering a region around the specular point. The attached figure illustrates this improvement, using simulated CYGNSS DDM's generated using the wind fields from hurricanes Earl and Danielle (left). The middle plots show wind retrievals using only an observable defined within the 25 km resolution cell. The plots on the right side show the retrievals from sequential processing of multiple DDM's. Recently, the assimilation of GNSS-R retrievals into weather forecast models has been studied. The authors have begun to investigate the direct assimilation of other data products, such as the DDM itself, or the results of sequential processing.

  20. Cortical responses following simultaneous and sequential retinal neurostimulation with different return configurations.

    PubMed

    Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J

    2016-08-01

    Researchers continue to develop visual prostheses towards safer and more efficacious systems. However limitations still exist in the number of stimulating channels that can be integrated. Therefore there is a need for spatial and time multiplexing techniques to provide improved performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing and thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.

  1. Competitive interactions affect working memory performance for both simultaneous and sequential stimulus presentation.

    PubMed

    Ahmad, Jumana; Swan, Garrett; Bowman, Howard; Wyble, Brad; Nobre, Anna C; Shapiro, Kimron L; McNab, Fiona

    2017-07-06

    Competition between simultaneously presented visual stimuli lengthens reaction time and reduces both the BOLD response and neural firing. In contrast, conditions of sequential presentation have been assumed to be free from competition. Here we manipulated the spatial proximity of stimuli (Near versus Far conditions) to examine the effects of simultaneous and sequential competition on different measures of working memory (WM) for colour. With simultaneous presentation, the measure of WM precision was significantly lower for Near items, and participants reported the colour of the wrong item more often. These effects were preserved when the second stimulus immediately followed the first, disappeared when they were separated by 500 ms, and were partly recovered (evident for our measure of mis-binding but not WM precision) when the task was altered to encourage participants to maintain the sequentially presented items together in WM. Our results show, for the first time, that competition affects the measure of WM precision, and challenge the assumption that sequential presentation removes competition.

  2. Application of data cubes for improving detection of water cycle extreme events

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Albayrak, A.

    2015-12-01

    As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are archived data rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme (WCE) events, a specific case of anomaly detection, requiring time series data. We investigate the use of the sequential probability ratio test (SPRT) for anomaly detection and support vector machines (SVM) for anomaly classification. We show an example of detection of WCE events, using the Global Land Data Assimilation System (GLDAS) data set.
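    A toy numpy sketch of the data-cube reorganization described above: per-time-step 2-D grids are stacked into a (time, lat, lon) array so that a point time series (a "data rod") becomes a single slice. Grid sizes, the random stand-in data, and variable names are made up for illustration.

```python
import numpy as np

# Pretend each archived time step is a 2-D (lat x lon) grid read from its own file;
# here 365 daily 100x200 grids of random values stand in for the archive.
n_t, n_lat, n_lon = 365, 100, 200
daily_grids = [np.random.rand(n_lat, n_lon) for _ in range(n_t)]

# Build the data cube once: axis 0 = time, axes 1-2 = space.
cube = np.stack(daily_grids, axis=0)      # shape (365, 100, 200)

# A "data rod" (time series at one grid point) is now a cheap slice,
# instead of a read across 365 separate time-step files.
i_lat, i_lon = 42, 137
rod = cube[:, i_lat, i_lon]               # shape (365,)
```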

  3. Adapted cuing technique: facilitating sequential phoneme production.

    PubMed

    Klick, S L

    1994-09-01

    ACT is a visual cuing technique designed to facilitate dyspraxic speech by highlighting the sequential production of phonemes. In using ACT, cues are presented in such a way as to suggest sequential, coarticulatory movement in an overall pattern of motion. While using ACT, the facilitator's hand moves forward and back along the side of her (or his) own face. Finger movements signal specific speech sounds in formations loosely based on the manual alphabet for the hearing impaired. The best movements suggest the flowing, interactive nature of coarticulated phonemes. The synergistic nature of speech is suggested by coordinated hand motions which tighten and relax, move quickly or slowly, reflecting the motions of the vocal tract at various points during production of phonemic sequences. General principles involved in using ACT include a primary focus on speech-in-motion, the monitoring and fading of cues, and the presentation of stimuli based on motor-task analysis of phonemic sequences. Phonemic sequences are cued along three dimensions: place, manner, and vowel-related mandibular motion. Cuing vowels is a central feature of ACT. Two parameters of vowel production, focal point of resonance and mandibular closure, are cued. The facilitator's hand motions reflect the changing shape of the vocal tract and the trajectory of the tongue that result from the coarticulation of vowels and consonants. Rigid presentation of the phonemes is secondary to the facilitator's primary focus on presenting the overall sequential movement. The facilitator's goal is to self-tailor ACT in response to the changing needs and abilities of the client.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. Stability analysis of chalk sea cliffs using UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Barlow, John; Gilham, Jamie

    2017-04-01

    Cliff erosion and instability pose a significant hazard to communities and infrastructure located in coastal areas. We use point cloud and spectral data derived from close range digital photogrammetry to assess the stability of chalk sea cliffs located at Telscombe, UK. Data captured from an unmanned aerial vehicle (UAV) were used to generate dense point clouds for a 712 m section of cliff face which ranges from 20 to 49 m in height. Generated models fitted our ground control network within a standard error of 0.03 m. Structural features such as joints, bedding planes, and faults were manually mapped and are consistent with results from other studies that have been conducted using direct measurement in the field. Kinematic analysis of these data was used to identify the primary modes of failure at the site. Our results indicate that wedge failure is by far the most likely mode of slope instability. An analysis of sequential surveys taken from the summer of 2016 to the winter of 2017 indicates several large failures have occurred at the site. We establish the volume of failure through change detection between sequential data sets and use back analysis to determine the strength of shear surfaces for each failure. Our results show that data capture through UAV photogrammetry can provide useful information for slope stability analysis over long sections of cliff. The use of this technology offers significant benefits in equipment costs and field time over existing methods.

  5. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithms are presented.

  6. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. First, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is roughly matched by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, this paper uses the fundamental matrix generated by the 8-point algorithm as the model; the sample is selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces the computation and improves the matching speed.
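    A hedged sketch of how an SPRT-style early-rejection test can be attached to a fundamental-matrix RANSAC loop, in the spirit of the modification described above. The inlier probabilities eps_good and delta_bad, the rejection threshold A, the Sampson-distance threshold, and the use of OpenCV's eight-point solver are all my assumptions rather than the paper's exact algorithm.

```python
import numpy as np
import cv2

def sprt_verify(F, pts1, pts2, thresh=1.0, eps_good=0.5, delta_bad=0.05, A=100.0):
    """Score a candidate fundamental matrix with SPRT-style early bail-out.
    'lam' is the likelihood ratio of 'model is bad' vs 'model is good';
    verification stops as soon as lam exceeds A (model rejected early).
    eps_good / delta_bad are assumed inlier probabilities under a good / bad model."""
    lam, inliers = 1.0, 0
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])                  # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    for p1, p2 in zip(x1, x2):
        # Sampson distance of the correspondence to the epipolar geometry
        Fx1, Ftx2 = F @ p1, F.T @ p2
        d = (p2 @ F @ p1) ** 2 / (Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2)
        is_inlier = d < thresh
        inliers += int(is_inlier)
        lam *= (delta_bad / eps_good) if is_inlier else ((1 - delta_bad) / (1 - eps_good))
        if lam > A:
            return None                           # rejected without checking all points
    return inliers

def ransac_fundamental(pts1, pts2, n_iters=500, seed=0):
    """pts1, pts2: float arrays of shape (N, 2) with matched keypoint coordinates."""
    rng = np.random.default_rng(seed)
    best_F, best_score = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), size=8, replace=False)
        F, _ = cv2.findFundamentalMat(pts1[idx], pts2[idx], cv2.FM_8POINT)
        if F is None or F.shape != (3, 3):
            continue
        score = sprt_verify(F, pts1, pts2)
        if score is not None and score > best_score:
            best_F, best_score = F, score
    return best_F, best_score
```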

  7. Transient Cognitive Dynamics, Metastability, and Decision Making

    PubMed Central

    Rabinovich, Mikhail I.; Huerta, Ramón; Varona, Pablo; Afraimovich, Valentin S.

    2008-01-01

    The idea that cognitive activity can be understood using nonlinear dynamics has been intensively discussed at length for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of a sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between reproducibility and flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices that connect their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe here an effective solution for the problem of sequential decision making, represented as a fixed time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain. PMID:18452000

  8. Sequential multipoint motion of the tympanic membrane measured by laser Doppler vibrometry: preliminary results for normal tympanic membrane.

    PubMed

    Kunimoto, Yasuomi; Hasegawa, Kensaku; Arii, Shiro; Kataoka, Hideyuki; Yazama, Hiroaki; Kuya, Junko; Kitano, Hiroya

    2014-04-01

    Numerous studies have reported sound-induced motion of the tympanic membrane (TM). To demonstrate sequential motion characteristics of the entire TM by noncontact laser Doppler vibrometry (LDV), we have investigated multipoint TM measurement. A laser Doppler vibrometer was mounted on a surgical microscope. The velocity was measured at 33 points on the TM using noncontact LDV without any reflectors. Measurements were performed with tonal stimuli of 1, 3, and 6 kHz. Amplitudes were calculated from these measurements, and time-dependent changes in TM motion were described using a graphics application. TM motions were detected more clearly and stably at 1 and 3 kHz than at other frequencies. This is because the external auditory canal acted as a resonant tube near 3 kHz. TM motion displayed 1 peak at 1 kHz and 2 peaks at 3 kHz. Large amplitudes were detected in the posterosuperior quadrant (PSQ) at 1 kHz and in the PSQ and anteroinferior quadrant (AIQ) at 3 kHz. The entire TM showed synchronized movement centered on the PSQ at 1 kHz, with phase-shifting between PSQ and AIQ movement at 3 kHz. Amplitude was smaller at the umbo than at other parts. In contrast, amplitudes at high frequencies were too small and complicated to detect any obvious peaks. Sequential multipoint motion of the tympanic membrane showed that vibration characteristics of the TM differ according to the part and frequency.

  9. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.

  10. Sequential mediating effects of provided and received social support on trait emotional intelligence and subjective happiness: A longitudinal examination in Hong Kong Chinese university students.

    PubMed

    Ye, Jiawen; Yeung, Dannii Y; Liu, Elaine S C; Rochelle, Tina L

    2018-04-03

    Past research has often focused on the effects of emotional intelligence and received social support on subjective well-being yet paid limited attention to the effects of provided social support. This study adopted a longitudinal design to examine the sequential mediating effects of provided and received social support on the relationship between trait emotional intelligence and subjective happiness. A total of 214 Hong Kong Chinese undergraduates were asked to complete two assessments with a 6-month interval in between. The results of the sequential mediation analysis indicated that the trait emotional intelligence measured in Time 1 indirectly influenced the level of subjective happiness in Time 2 through a sequential pathway of social support provided for others in Time 1 and social support received from others in Time 2. These findings highlight the importance of trait emotional intelligence and the reciprocal exchanges of social support in the subjective well-being of university students. © 2018 International Union of Psychological Science.

  11. Isomer-dependent fragmentation dynamics of inner-shell photoionized difluoroiodobenzene

    DOE PAGES

    Ablikim, Utuq; Bomme, Cédric; Savelyev, Evgeny; ...

    2017-05-11

    The fragmentation dynamics of 2,6- and 3,5-difluoroiodobenzene after iodine 4d inner-shell photoionization with soft X-rays are studied using coincident electron and ion momentum imaging. By analyzing the momentum correlation between iodine and fluorine cations in three-fold ion coincidence events, we can distinguish the two isomers experimentally. Classical Coulomb explosion simulations are in overall agreement with the experimentally determined fragment ion kinetic energies and momentum correlations and point toward different fragmentation mechanisms and time scales. Finally, while most three-body fragmentation channels show clear evidence for sequential fragmentation on a time scale larger than the rotational period of the fragments, the breakup into iodine and fluorine cations and a third charged co-fragment appears to occur within several hundred femtoseconds.

  12. Kinematic characteristics of tenodesis grasp in C6 quadriplegia.

    PubMed

    Mateo, S; Revol, P; Fourtassi, M; Rossetti, Y; Collet, C; Rode, G

    2013-02-01

    Descriptive control case study. To analyze the kinematics of tenodesis grasp in participants with C6 quadriplegia and healthy control participants in a pointing task and two daily life tasks involving a whole hand grip (apple) or a lateral grip (floppy disk). France. Four complete participants with C6 quadriplegia were age matched with four healthy control participants. All participants were right-handed. The measured kinematic parameters were the movement time (MT), the peak velocity (PV), the time of PV (TPV) and the wrist angle in the sagittal plane at movement onset, at the TPV and at the movement end point. The participants with C6 quadriplegia had significantly longer MTs in both prehension tasks. No significant differences in TPV were found between the two groups. Unlike control participants, for both prehension tasks the wrist of participants with C6 quadriplegia was in a neutral position at movement onset, in flexion at the TPV, and in extension at the movement end point. Two main kinematic parameters characterize tenodesis grasp movements in C6 quadriplegics: wrist flexion during reaching and wrist extension during the grasping phase, and increased MT reflecting the time required to adjust the wrist's position to achieve the tenodesis grasp. These characteristics were observed for two different grips (whole hand and lateral grip). These results suggest sequential planning of reaching and tenodesis grasp, and should be taken into account for prehension rehabilitation in patients with quadriplegia.

  13. Cooperative processing in primary somatosensory cortex and posterior parietal cortex during tactile working memory.

    PubMed

    Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di

    2015-08-01

    In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    PubMed

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  15. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

    Developing a subject-specific classifier that recognizes mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters, selected by a grid-search method, were first used to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. Because this balances the decision time of each class, we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy of 83.4% and an average decision time of 2.77 s for the proposed method, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
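
    A minimal sketch of the two-threshold sequential test underlying this kind of classifier is shown below. The Gaussian class-conditional scores, the exact log-likelihood ratio, and the Wald-style thresholds are illustrative assumptions; the paper's contribution, setting the thresholds from the evidence-accumulation curve and a desired expected stopping time (BTSPRT), is not reproduced here.

```python
import numpy as np

def sprt(evidence, log_lr, upper, lower):
    """Generic two-threshold sequential probability ratio test.

    evidence : iterable of observations arriving over time
    log_lr   : function mapping one observation to log p(x|class 1)/p(x|class 0)
    upper, lower : decision thresholds (upper > 0 > lower)

    Returns (decision, samples_used); decision is None if the evidence
    runs out before either threshold is crossed.
    """
    llr = 0.0
    for t, x in enumerate(evidence, start=1):
        llr += log_lr(x)
        if llr >= upper:
            return 1, t          # accept class 1
        if llr <= lower:
            return 0, t          # accept class 0
    return None, len(evidence)   # undecided within the horizon

# Toy example: Gaussian class-conditional scores (illustrative only).
rng = np.random.default_rng(0)
scores = rng.normal(loc=0.3, scale=1.0, size=200)   # stream favouring class 1
log_lr = lambda x: 0.6 * x                          # exact LLR for N(+0.3,1) vs N(-0.3,1)
print(sprt(scores, log_lr, upper=np.log(19), lower=-np.log(19)))
```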

  16. Timed sequential chemotherapy of cytoxan-refractory multiple myeloma with cytoxan and adriamycin based on induced tumor proliferation.

    PubMed

    Karp, J E; Humphrey, R L; Burke, P J

    1981-03-01

    Malignant plasma cell proliferation and induced humoral stimulatory activity (HSA) occur in vivo at a predictable time following drug administration. Sequential sera from 11 patients with poor-risk multiple myeloma (MM) undergoing treatment with Cytoxan (CY) 2400 mg/sq m were assayed for their in vitro effects on malignant bone marrow plasma cell tritiated thymidine (3HTdR) incorporation. Peak HSA was detected on day 9 following CY. Sequential changes in marrow malignant plasma cell 3HTdR-labeling indices (LI) paralleled changes in serum activity, with peak LI occurring at the time of peak HSA. An in vitro model of chemotherapy demonstrated that malignant plasma cell proliferation was enhanced by HSA, as determined by 3HTdR incorporation assay, 3HTdR LI, and tumor cell counts, and that stimulated plasma cells were more sensitive to the cytotoxic effects of adriamycin (ADR) than were cells cultured in autologous pretreatment serum. Based on these studies, we designed a clinical trial to treat 12 CY-refractory poor-risk patients with MM in which ADR (60 mg/sq m) was administered at the time of peak HSA and residual tumor cell LI (day 9) following initial CY 2400 mg/sq m (CY1ADR9). Eight of 12 (67%) responded to timed sequential chemotherapy with a greater than 50% decrement in monoclonal protein marker and a median survival projected to be greater than 8 mo duration (range 4-21+ mo). These clinical results using timed sequential CY1ADR9 compare favorably with results obtained using ADR in nonsequential chemotherapeutic regimens.

  17. Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.

    PubMed

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D; Jeong, Jaeseung

    2014-08-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.

  18. The Temporal Sequence of Social Anxiety and Depressive Symptoms following Interpersonal Stressors during Adolescence

    PubMed Central

    Hamilton, Jessica L.; Potter, Carrie M.; Olino, Thomas M.; Abramson, Lyn Y.; Heimberg, Richard G.; Alloy, Lauren B.

    2015-01-01

    Social anxiety and depressive symptoms dramatically increase and frequently co-occur during adolescence. Although research indicates that general interpersonal stressors, peer victimization, and familial emotional maltreatment predict symptoms of social anxiety and depression, it remains unclear how these stressors contribute to the sequential development of these internalizing symptoms. Thus, the present study examined the sequential development of social anxiety and depressive symptoms following the occurrence of interpersonal stressors, peer victimization, and familial emotional maltreatment. Participants included 410 early adolescents (53% female; 51% African American; Mean age =12.84 years) who completed measures of social anxiety and depressive symptoms at three time points (Times 1–3), as well as measures of general interpersonal stressors, peer victimization, and emotional maltreatment at Time 2. Path analyses revealed that interpersonal stressors, peer victimization, and emotional maltreatment predicted both depressive and social anxiety symptoms concurrently. However, depressive symptoms significantly mediated the pathway from interpersonal stressors, peer victimization, and familial emotional maltreatment to subsequent levels of social anxiety symptoms. In contrast, social anxiety did not mediate the relationship between these stressors and subsequent depressive symptoms. There was no evidence of sex or racial differences in these mediational pathways. Findings suggest that interpersonal stressors, including the particularly detrimental stressors of peer victimization and familial emotional maltreatment, may predict both depressive and social anxiety symptoms; however, adolescents who have more immediate depressogenic reactions may be at greater risk for later development of symptoms of social anxiety. PMID:26142495

  19. The Temporal Sequence of Social Anxiety and Depressive Symptoms Following Interpersonal Stressors During Adolescence.

    PubMed

    Hamilton, Jessica L; Potter, Carrie M; Olino, Thomas M; Abramson, Lyn Y; Heimberg, Richard G; Alloy, Lauren B

    2016-04-01

    Social anxiety and depressive symptoms dramatically increase and frequently co-occur during adolescence. Although research indicates that general interpersonal stressors, peer victimization, and familial emotional maltreatment predict symptoms of social anxiety and depression, it remains unclear how these stressors contribute to the sequential development of these internalizing symptoms. Thus, the present study examined the sequential development of social anxiety and depressive symptoms following the occurrence of interpersonal stressors, peer victimization, and familial emotional maltreatment. Participants included 410 early adolescents (53% female; 51% African American; Mean age =12.84 years) who completed measures of social anxiety and depressive symptoms at three time points (Times 1-3), as well as measures of general interpersonal stressors, peer victimization, and emotional maltreatment at Time 2. Path analyses revealed that interpersonal stressors, peer victimization, and emotional maltreatment predicted both depressive and social anxiety symptoms concurrently. However, depressive symptoms significantly mediated the pathway from interpersonal stressors, peer victimization, and familial emotional maltreatment to subsequent levels of social anxiety symptoms. In contrast, social anxiety did not mediate the relationship between these stressors and subsequent depressive symptoms. There was no evidence of sex or racial differences in these mediational pathways. Findings suggest that interpersonal stressors, including the particularly detrimental stressors of peer victimization and familial emotional maltreatment, may predict both depressive and social anxiety symptoms; however, adolescents who have more immediate depressogenic reactions may be at greater risk for later development of symptoms of social anxiety.

  20. 64 slice MDCT generally underestimates coronary calcium scores as compared to EBT: A phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greuter, M. J. W.; Dijkstra, H.; Groen, J. M.

    The objective of our study was to determine the influence of the sequential and spiral acquisition modes on the concordance and deviation of the calcium score on 64-slice multi-detector computed tomography (MDCT) scanners, in comparison to electron beam tomography (EBT) as the gold standard. An anthropomorphic cardio CT phantom with different calcium inserts was scanned in sequential and spiral acquisition modes on three identical 64-slice MDCT scanners from manufacturer A, three identical 64-slice MDCT scanners from manufacturer B, and an EBT system. Every scan was repeated 30 times with and 15 times without a small random variation in the phantom position for both sequential and spiral modes. Significant differences were observed between EBT and 64-slice MDCT data for all inserts, both acquisition modes, and both manufacturers of MDCT systems. High regression coefficients (0.90-0.98) were found between the EBT and 64-slice MDCT data for both scoring methods and both systems, with high correlation coefficients (R² > 0.94). System A showed more significant differences between spiral and sequential modes than system B. Almost no differences were observed between scanners of the same manufacturer for the Agatston score, and none for the Volume score. The deviations of the Agatston and Volume scores showed regression dependencies approximately equal to the square root of the absolute score. The Agatston and Volume scores obtained with 64-slice MDCT imaging are highly correlated with EBT-obtained scores but are significantly underestimated (-10% to -2%) for both sequential and spiral acquisition modes. System B's calcium scores are less dependent on acquisition mode than system A's. The Volume score shows no intramanufacturer dependency, and its use is advocated over the Agatston score. Using the same cut points for MDCT-based calcium scores as for EBT-based calcium scores can result in classifying individuals into too low a risk category. System information and scan protocol are therefore needed for every calcium score procedure to ensure a correct clinical interpretation of the obtained calcium score results.
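
    For orientation, the two scores being compared can be computed from thresholded CT data roughly as follows. The toy array, pixel size, and slice thickness are invented, and the sketch scores a single lesion in a single slice; the 130 HU threshold and the standard Agatston density weights are the conventional definitions, and nothing here reproduces the scanners or phantom of this study.

```python
import numpy as np

def density_weight(peak_hu):
    """Standard Agatston density factor for a lesion's peak attenuation."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_and_volume(slice_hu, lesion_mask, pixel_area_mm2, slice_thickness_mm):
    """Simplified per-slice calcium scores for one connected lesion.

    slice_hu           : 2-D array of Hounsfield units for one slice
    lesion_mask        : boolean array marking the lesion's pixels (>= 130 HU)
    pixel_area_mm2     : in-plane area of one pixel
    slice_thickness_mm : reconstructed slice thickness
    """
    area_mm2 = lesion_mask.sum() * pixel_area_mm2
    agatston = area_mm2 * density_weight(slice_hu[lesion_mask].max())
    volume_mm3 = area_mm2 * slice_thickness_mm
    return agatston, volume_mm3

# Toy 5x5 lesion with a 420 HU peak (illustrative numbers only).
hu = np.full((5, 5), 50.0)
hu[1:4, 1:4] = [[150, 200, 180], [220, 420, 260], [160, 240, 190]]
mask = hu >= 130
print(agatston_and_volume(hu, mask, pixel_area_mm2=0.25, slice_thickness_mm=3.0))
```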

  1. The effect of a sequential structure of practice for the training of perceptual-cognitive skills in tennis

    PubMed Central

    2017-01-01

    Objective Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential, random order (non-sequential group). Results In the video-based retention test, the sequential group was significantly more accurate in their anticipatory judgments when the retention condition replicated the sequential structure compared to the non-sequential group. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group compared to the non-sequential group. Conclusion Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263

  2. Single-Molecule Reaction Chemistry in Patterned Nanowells

    PubMed Central

    2016-01-01

    A new approach to synthetic chemistry is performed in ultraminiaturized, nanofabricated reaction chambers. Using lithographically defined nanowells, we achieve single-point covalent chemistry on hundreds of individual carbon nanotube transistors, providing robust statistics and unprecedented spatial resolution in adduct position. Each device acts as a sensor to detect, in real-time and through quantized changes in conductance, single-point functionalization of the nanotube as well as consecutive chemical reactions, molecular interactions, and molecular conformational changes occurring on the resulting single-molecule probe. In particular, we use a set of sequential bioconjugation reactions to tether a single-strand of DNA to the device and record its repeated, reversible folding into a G-quadruplex structure. The stable covalent tether allows us to measure the same molecule in different solutions, revealing the characteristic increased stability of the G-quadruplex structure in the presence of potassium ions (K+) versus sodium ions (Na+). Nanowell-confined reaction chemistry on carbon nanotube devices offers a versatile method to isolate and monitor individual molecules during successive chemical reactions over an extended period of time. PMID:27270004

  3. First-Episode Psychosis and the Criminal Justice System: Using a Sequential Intercept Framework to Highlight Risks and Opportunities.

    PubMed

    Wasser, Tobias; Pollard, Jessica; Fisk, Deborah; Srihari, Vinod

    2017-10-01

    In first-episode psychosis there is a heightened risk of aggression and subsequent criminal justice involvement. This column reviews the evidence pointing to these heightened risks and highlights opportunities, using a sequential intercept model, for collaboration between mental health services and existing diversionary programs, particularly for patients whose behavior has already brought them to the attention of the criminal justice system. Coordinating efforts in these areas across criminal justice and clinical spheres can decrease the caseload burden on the criminal justice system and optimize clinical and legal outcomes for this population.

  4. Organometallic exposure dependence on organic–inorganic hybrid material formation in polyethylene terephthalate and polyamide 6 polymer fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyildiz, Halil I.; Jur, Jesse S., E-mail: jsjur@ncsu.edu

    2015-03-15

    The effect of exposure conditions and surface area on hybrid material formation during sequential vapor infiltrations of trimethylaluminum (TMA) into polyamide 6 (PA6) and polyethylene terephthalate (PET) fibers is investigated. Mass gain of the fabric samples after infiltration was examined to elucidate the reaction extent with increasing number of sequential TMA single exposures, defined as the times for a TMA dose and a hold period. An interdependent relationship between dosing time and holding time on hybrid material formation is observed for TMA exposure of PET, exhibited as a linear trend between the mass gain and the total exposure (dose time × hold time × number of sequential exposures). Deviation from this linear relationship is only observed under very long dose or hold times. In comparison, the amount of hybrid material formed during sequential exposures to PA6 fibers is found to be highly dependent on the amount of TMA dosed. Increasing the surface area of the fiber by altering its cross-sectional dimension is shown to have little effect on the reaction behavior but does allow for improved diffusion of the TMA into the fiber. This work allows for the projection of exposure parameters necessary for future high-throughput hybrid modifications to polymer materials.

  5. Simultaneous Versus Sequential Side-by-Side Bilateral Metal Stent Placement for Malignant Hilar Biliary Obstructions.

    PubMed

    Inoue, Tadahisa; Ishii, Norimitsu; Kobayashi, Yuji; Kitano, Rena; Sakamoto, Kazumasa; Ohashi, Tomohiko; Nakade, Yukiomi; Sumida, Yoshio; Ito, Kiyoaki; Nakao, Haruhisa; Yoneda, Masashi

    2017-09-01

    Endoscopic bilateral self-expandable metallic stent (SEMS) placement for malignant hilar biliary obstructions (MHBOs) is technically demanding, and a second SEMS insertion is particularly challenging. A simultaneous side-by-side (SBS) placement technique using a thinner delivery system may mitigate these issues. We aimed to examine the feasibility and efficacy of simultaneous SBS SEMS placement for treating MHBOs using a novel SEMS that has a 5.7-Fr ultra-thin delivery system. Thirty-four patients with MHBOs underwent SBS SEMS placement between 2010 and 2016. We divided the patient cohort into those who underwent sequential (conventional) SBS placement between 2010 and 2014 (sequential group) and those who underwent simultaneous SBS placement between 2015 and 2016 (simultaneous group), and compared the groups with respect to the clinical outcomes. The technical success rates were 71% (12/17) and 100% (17/17) in the sequential and simultaneous groups, respectively, a difference that was significant (P = .045). The median procedure time was significantly shorter in the simultaneous group (22 min) than in the sequential group (52 min) (P = .017). There were no significant group differences in the time to recurrent biliary obstruction (sequential group: 113 days; simultaneous group: 140 days) or other adverse event rates (sequential group: 12%; simultaneous group: 12%). Simultaneous SBS placement using the novel 5.7-Fr SEMS delivery system may be more straightforward and have a higher success rate compared to that with sequential SBS placement. This new method may be useful for bilateral stenting to treat MHBOs.

  6. Buffer management for sequential decoding. [block erasure probability reduction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.

  7. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors.

    PubMed

    Scott, Anthony; Jeon, Sung-Hee; Joyce, Catherine M; Humphreys, John S; Kalb, Guyonne; Witt, Julia; Leahy, Anne

    2011-09-05

    Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. The online mode had a response rate 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than simultaneous mixed mode and 22% lower than sequential mixed mode. The cost of the sequential mixed mode was 14% lower than simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.

  8. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors

    PubMed Central

    2011-01-01

    Background Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. Methods A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. Results The online mode had a response rate 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than simultaneous mixed mode and 22% lower than sequential mixed mode. The cost of the sequential mixed mode was 14% lower than simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Conclusions Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population. PMID:21888678

  9. Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation

    PubMed Central

    Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk

    2015-01-01

    A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for protecting against current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current-breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings so as to shorten the breaking time. To validate the optimization results, computational results were compared to physical tests recorded with a high-speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, which was confirmed by experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this approach. PMID:25918740

  10. The Unicellular State as a Point Source in a Quantum Biological System

    PubMed Central

    Torday, John S.; Miller, William B.

    2016-01-01

    A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins. PMID:27240413

  11. Enhancing sequential time perception and storytelling ability of deaf and hard of hearing children.

    PubMed

    Ingber, Sara; Eden, Sigal

    2011-01-01

    A 3-month intervention was conducted to enhance the sequential time perception and storytelling ability of young children with hearing loss. The children were trained to arrange pictorial episodes of temporal scripts and tell the stories they created. Participants (N = 34, aged 4-7 years) were divided into 2 groups based on whether their spoken-language gap was more or less than 1 year compared to age norms. They completed A. Kaufman and N. Kaufman's (1983) picture series subtest and Guralnik's (1982) storytelling test at pretest and posttest. Measures demonstrated significant improvement in sequential time and storytelling achievement postintervention. Three of the examined demographic variables revealed correlations: Participants with genetic etiology showed greater improvement in time sequencing and storytelling than participants with unknown etiology; early onset of treatment correlated with better achievement in time sequencing; cochlear implant users showed greater storytelling improvement than hearing aid users.

  12. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don’t fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
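
    A minimal version of the first-stage idea — independent conjugate Bayesian count models per edge, scored sequentially as new data arrive — might look like the sketch below. The Gamma prior, the window length, and the tail-probability anomaly score are illustrative choices and do not reproduce the thesis's changepoint or Markov models.

```python
from collections import defaultdict
from scipy.stats import nbinom

class EdgeMonitor:
    """Independent Gamma-Poisson model per edge, updated one window at a time."""

    def __init__(self, a0=1.0, b0=1.0, alpha=1e-3):
        self.alpha = alpha
        self.post = defaultdict(lambda: [a0, b0])   # edge -> [shape, rate]

    def score_and_update(self, edge, count):
        a, b = self.post[edge]
        # Posterior-predictive for the next window's count is negative binomial.
        p_tail = nbinom.sf(count - 1, a, b / (b + 1.0))   # P(Y >= count)
        anomalous = p_tail < self.alpha
        self.post[edge] = [a + count, b + 1.0]            # conjugate update
        return anomalous, p_tail

mon = EdgeMonitor()
for t, c in enumerate([2, 3, 1, 2, 40]):                  # burst in the last window
    flag, p = mon.score_and_update(("src", "dst"), c)
    print(t, c, flag, round(p, 6))
```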

  13. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point Geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring the hard data points, minimization of the proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. It therefore preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous Multiple-point Geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.

  14. TRUNCATED RANDOM MEASURES

    DTIC Science & Technology

    2018-01-12

    sequential representations, a method is required for determining which to use for the application at hand and, once a representation is selected, for... DISTRIBUTION UNLIMITED. Methods, Assumptions, and Procedures. 3.1 Background. 3.1.1 CRMs and truncation. Consider a Poisson point process on R+ := [0... the heart of the study of truncated CRMs. They provide an iterative method that can be terminated at any point to yield a finite approximation to the

  15. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
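
    The uncertainty-quantification step described here — a normal prior on a large set of basis coefficients, posterior samples of predictions, and credible intervals used to drive sequential design — can be caricatured as follows. The Gaussian basis, the conjugate (rather than MCMC) posterior sampling with a fixed noise variance, and the maximum-width infill criterion are simplifying assumptions, not the OBSM implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.3 * x            # unknown response (toy)

def basis(x, centers, width=0.3):
    """Overcomplete Gaussian basis evaluated at points x."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

centers = np.linspace(0, 3, 30)
X_design = np.array([0.2, 1.1, 2.0, 2.9])        # current design points
y = f(X_design) + rng.normal(0, 0.05, X_design.size)

# Conjugate posterior for the coefficients under a N(0, tau^2 I) prior and
# known noise variance sigma^2 (a stand-in for the paper's MCMC step).
tau2, sigma2 = 1.0, 0.05 ** 2
Phi = basis(X_design, centers)
A = Phi.T @ Phi / sigma2 + np.eye(centers.size) / tau2
cov = np.linalg.inv(A)
mean = cov @ Phi.T @ y / sigma2

# Posterior samples -> pointwise 95% credible intervals on a prediction grid.
grid = np.linspace(0, 3, 200)
Phi_g = basis(grid, centers)
samples = rng.multivariate_normal(mean, cov, size=500) @ Phi_g.T
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)

# Sequential-design infill: query next where the credible band is widest.
next_x = grid[np.argmax(hi - lo)]
print("next design point:", round(float(next_x), 3))
```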

  16. Reduction of display artifacts by random sampling

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.

    1983-01-01

    The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.

  17. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  18. Spacecraft Station-Keeping Trajectory and Mission Design Tools

    NASA Technical Reports Server (NTRS)

    Chung, Min-Kun J.

    2009-01-01

    Two tools were developed for designing station-keeping trajectories and estimating delta-v requirements for designing missions to a small body such as a comet or asteroid. This innovation uses NPOPT, a non-sparse, general-purpose sequential quadratic programming (SQP) optimizer, and the Two-Level Differential Corrector (T-LDC) in LTool (Libration point mission design Tool) to design three kinds of station-keeping scripts: vertical hovering, horizontal hovering, and orbiting. The T-LDC is used to differentially correct several trajectory legs that join hovering points. In vertical hovering, the maximum and minimum range points must be connected smoothly while maintaining the spacecraft's range from the small body, all within the laws of gravity and solar radiation pressure. The same is true for a horizontal hover. A PatchPoint is an LTool class that denotes a space-time event with some extra information for differential correction, including a set of constraints to be satisfied by the T-LDC. Given a set of PatchPoints, each with its own constraint, the T-LDC differentially corrects the entire trajectory by connecting each trajectory leg joined by PatchPoints while satisfying all specified constraints at the same time. Both vertical and horizontal hovering are needed to minimize the delta-v spent for station keeping. A Python I/F to NPOPT has been written to be used from an LTool script. In vertical hovering, the spacecraft stays along the line joining the Sun and the small body. An instantaneous delta-v toward the anti-Sun direction is applied at the closest approach to the small body for station keeping. For example, the spacecraft hovers between the minimum range (2 km) point and the maximum range (2.5 km) point from the asteroid 1989ML. Horizontal hovering buys more time for a spacecraft to recover if, for any reason, a planned thrust fails, by returning almost to the initial position some time later via a near-elliptical orbit around the small body. The mapping or staging orbit may be similarly generated using the T-LDC with a set of constraints. Some delta-v tables are generated for several different asteroid masses.

  19. Sequential Super-Resolution Imaging of Bacterial Regulatory Proteins: The Nucleoid and the Cell Membrane in Single, Fixed E. coli Cells.

    PubMed

    Spahn, Christoph; Glaesmann, Mathilda; Gao, Yunfeng; Foo, Yong Hwee; Lampe, Marko; Kenney, Linda J; Heilemann, Mike

    2017-01-01

    Despite their small size and the lack of compartmentalization, bacteria exhibit a striking degree of cellular organization, both in time and space. During the last decade, a group of new microscopy techniques emerged, termed super-resolution microscopy or nanoscopy, which facilitate visualizing the organization of proteins in bacteria at the nanoscale. Single-molecule localization microscopy (SMLM) is especially well suited to reveal a wide range of new information regarding protein organization, interaction, and dynamics in single bacterial cells. Recent developments in click chemistry facilitate the visualization of bacterial chromatin with a resolution of ~20 nm, providing valuable information about the ultrastructure of bacterial nucleoids, especially at short generation times. In this chapter, we describe a simple-to-realize protocol that allows determining precise structural information of bacterial nucleoids in fixed cells, using direct stochastic optical reconstruction microscopy (dSTORM). In combination with quantitative photoactivated localization microscopy (PALM), the spatial relationship of proteins with the bacterial chromosome can be studied. The position of a protein of interest with respect to the nucleoids and the cell cylinder can be visualized by super-resolving the membrane using point accumulation for imaging in nanoscale topography (PAINT). The combination of the different SMLM techniques in a sequential workflow maximizes the information that can be extracted from single cells, while maintaining optimal imaging conditions for each technique.

  20. Context-Dependent Upper Limb Prosthesis Control for Natural and Robust Use.

    PubMed

    Amsuess, Sebastian; Vujaklija, Ivan; Goebel, Peter; Roche, Aidan D; Graimann, Bernhard; Aszmann, Oskar C; Farina, Dario

    2016-07-01

    Pattern recognition and regression methods applied to the surface EMG have been used for estimating the user intended motor tasks across multiple degrees of freedom (DOF), for prosthetic control. While these methods are effective in several conditions, they are still characterized by some shortcomings. In this study we propose a methodology that combines these two approaches for mutually alleviating their limitations. This resulted in a control method capable of context-dependent movement estimation that switched automatically between sequential (one DOF at a time) or simultaneous (multiple DOF) prosthesis control, based on an online estimation of signal dimensionality. The proposed method was evaluated in scenarios close to real-life situations, with the control of a physical prosthesis in applied tasks of varying difficulties. Test prostheses were individually manufactured for both able-bodied and transradial amputee subjects. With these prostheses, two amputees performed the Southampton Hand Assessment Procedure test with scores of 58 and 71 points. The five able-bodied individuals performed standardized tests, such as the box&block and clothes pin test, reducing the completion times by up to 30%, with respect to using a state-of-the-art pure sequential control algorithm. Apart from facilitating fast simultaneous movements, the proposed control scheme was also more intuitive to use, since human movements are predominated by simultaneous activations across joints. The proposed method thus represents a significant step towards intelligent, intuitive and natural control of upper limb prostheses.

  1. Sequential sampling of ribes populations in the control of white pine blister rust (Cronartium ribicola Fischer) in California

    Treesearch

    Harold R. Offord

    1966-01-01

    Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...

  2. Iterative User Interface Design for Automated Sequential Organ Failure Assessment Score Calculator in Sepsis Detection

    PubMed Central

    Herasevich, Vitaly

    2017-01-01

    Background The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation and the clerical burden of information retrieval makes this score ideal for automated calculation. Objective The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with subsequent usability evaluation study. Methods First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. Results The overall mean (standard deviation, SD) time-to-complete manual SOFA score calculation time was 61.6 s (33). Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Conclusions Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians’ needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as “apps.” A user-centered design process and usability evaluation should be considered during creation of these tools. PMID:28526675

  3. Iterative User Interface Design for Automated Sequential Organ Failure Assessment Score Calculator in Sepsis Detection.

    PubMed

    Aakre, Christopher Ansel; Kitson, Jaben E; Li, Man; Herasevich, Vitaly

    2017-05-18

    The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation and the clerical burden of information retrieval makes this score ideal for automated calculation. The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with subsequent usability evaluation study. First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. The overall mean (standard deviation, SD) time-to-complete manual SOFA score calculation time was 61.6 s (33). Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians' needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as "apps." A user-centered design process and usability evaluation should be considered during creation of these tools. ©Christopher Ansel Aakre, Jaben E Kitson, Man Li, Vitaly Herasevich. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 18.05.2017.
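
    To give a sense of what such a calculator has to encode, four of the six SOFA organ-system sub-scores can be mapped from raw laboratory values as in the hypothetical sketch below; the thresholds should be verified against the current SOFA definition, the respiration and cardiovascular components are omitted for brevity, and nothing here reflects the authors' EMR integration.

```python
def sofa_subscores(platelets_k_per_uL, bilirubin_mg_dL, creatinine_mg_dL, gcs):
    """Illustrative mapping of four SOFA organ-system sub-scores (0-4 each)."""

    def grade(value, cutoffs, reverse=False):
        # cutoffs ordered from score 4 down to score 1
        for score, cut in zip((4, 3, 2, 1), cutoffs):
            if (value < cut) if reverse else (value >= cut):
                return score
        return 0

    return {
        "coagulation": grade(platelets_k_per_uL, (20, 50, 100, 150), reverse=True),
        "liver":       grade(bilirubin_mg_dL, (12.0, 6.0, 2.0, 1.2)),
        "renal":       grade(creatinine_mg_dL, (5.0, 3.5, 2.0, 1.2)),
        "cns":         grade(gcs, (6, 10, 13, 15), reverse=True),
    }

# Example: moderately deranged labs (illustrative values only).
print(sofa_subscores(platelets_k_per_uL=90, bilirubin_mg_dL=2.5,
                     creatinine_mg_dL=1.4, gcs=14))
```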

  4. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.
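
    The classification view of regime estimation can be caricatured for a single decision point as follows. The simulated data, the regression-based contrast used to build labels and weights, and the use of scikit-learn's SVC are stand-ins; the paper's doubly robust, augmented inverse probability weighted estimator for censored survival outcomes and its backward two-stage algorithm are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 2))                       # patient covariates
A = rng.integers(0, 2, size=n)                    # randomized treatment
# True benefit of A=1 depends on the first covariate (toy generative model).
Y = 1.0 + X[:, 0] + A * (1.5 * X[:, 0] - 0.2) + rng.normal(0, 0.5, n)

# Outcome regressions per arm give an estimated treatment contrast.
q1 = LinearRegression().fit(X[A == 1], Y[A == 1])
q0 = LinearRegression().fit(X[A == 0], Y[A == 0])
contrast = q1.predict(X) - q0.predict(X)          # Q(x,1) - Q(x,0)

# Classification perspective: label = preferred treatment,
# weight = magnitude of the estimated benefit.
labels = (contrast > 0).astype(int)
weights = np.abs(contrast)
rule = SVC(kernel="linear").fit(X, labels, sample_weight=weights)

# The fitted rule maps a new patient's covariates to a treatment decision.
new_patients = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(rule.predict(new_patients))                 # expected: [1, 0]
```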

  5. A Nonrandomized, Phase II Study of Sequential Irinotecan and Flavopiridol in Patients With Advanced Hepatocellular Carcinoma

    PubMed Central

    Ang, Celina; O'Reilly, Eileen M.; Carvajal, Richard D.; Capanu, Marinela; Gonen, Mithat; Doyle, Laurence; Ghossein, Ronald; Schwartz, Lawrence; Jacobs, Gria; Ma, Jennifer; Schwartz, Gary K.

    2012-01-01

    ABSTRACT BACKGROUND: Flavopiridol, a Cdk inhibitor, potentiates irinotecan-induced apoptosis. In a phase I trial of sequential irinotecan and flavopiridol, 2 patients with advanced hepatocellular carcinoma (HCC) had stable disease (SD) for ≥14 months. We thus studied the sequential combination of irinotecan and flavopiridol in patients with HCC. METHODS: Patients with advanced HCC naïve to systemic therapy, Child-Pugh ≤B8, and Karnofsky performance score (KPS) ≥70% received 100 mg/m2 irinotecan followed 7 hours later by flavopiridol 60 mg/m2 weekly for 4 of 6 weeks. The primary end point was an improvement in progression-free survival at 4 months (PFS-4) from 33% to 54%, using a Simon's two-stage design. Tumors were stained for p53. RESULTS: Only 16 patients in the first stage were enrolled: median age, 64 years; median KPS, 80%; Child-Pugh A, 87.5%; and stage III/IV, 25%/75%. The primary end point was not met; PFS-4 was 20%, leading to early termination of the study. Ten patients were evaluable for response: 1 had SD >1 year and 9 had disease progression. Grade 3 fatigue, dehydration, diarrhea, neutropenia with or without fever, lymphopenia, anemia, hyperbilirubinemia, and transaminitis occurred in ≥10% of the patients. Of the 9 patients who progressed, 5 had mutant p53 and 4 had wild-type p53. The patient with stable disease had wild-type p53. CONCLUSION: Sequential irinotecan and flavopiridol are ineffective and poorly tolerated in patients with advanced HCC. Despite our limited assessments, it is possible that the presence of wild-type p53 is necessary but not sufficient to predict response in HCC. PMID:23293699

  6. Similar Neural Correlates for Language and Sequential Learning: Evidence from Event-Related Brain Potentials

    PubMed Central

    Christiansen, Morten H.; Conway, Christopher M.; Onnis, Luca

    2011-01-01

    We used event-related potentials (ERPs) to investigate the time course and distribution of brain activity while adults performed (a) a sequential learning task involving complex structured sequences, and (b) a language processing task. The same positive ERP deflection, the P600 effect, typically linked to difficult or ungrammatical syntactic processing, was found for structural incongruencies in both sequential learning as well as natural language, and with similar topographical distributions. Additionally, a left anterior negativity (LAN) was observed for language but not for sequential learning. These results are interpreted as an indication that the P600 provides an index of violations and the cost of integration of expectations for upcoming material when processing complex sequential structure. We conclude that the same neural mechanisms may be recruited for both syntactic processing of linguistic stimuli and sequential learning of structured sequence patterns more generally. PMID:23678205

  7. Sequential dengue virus infections detected in active and passive surveillance programs in Thailand, 1994-2010.

    PubMed

    Bhoomiboonchoo, Piraya; Nisalak, Ananda; Chansatiporn, Natkamol; Yoon, In-Kyu; Kalayanarooj, Siripen; Thipayamongkolgul, Mathuros; Endy, Timothy; Rothman, Alan L; Green, Sharone; Srikiatkhachorn, Anon; Buddhari, Darunee; Mammen, Mammen P; Gibbons, Robert V

    2015-03-14

    The effect of prior dengue virus (DENV) exposure on subsequent heterologous infection can be beneficial or detrimental depending on many factors including timing of infection. We sought to evaluate this effect by examining a large database of DENV infections captured by both active and passive surveillance encompassing a wide clinical spectrum of disease. We evaluated datasets from 17 years of hospital-based passive surveillance and nine years of cohort studies, including clinical and subclinical DENV infections, to assess the outcomes of sequential heterologous infections. Chi square or Fisher's exact test was used to compare proportions of infection outcomes such as disease severity; ANOVA was used for continuous variables. Multivariate logistic regression was used to assess risk factors for infection outcomes. Of 38,740 DENV infections, two or more infections were detected in 502 individuals; 14 had three infections. The mean ages at the time of the first and second detected infections were 7.6 ± 3.0 and 11.2 ± 3.0 years. The shortest time between sequential infections was 66 days. A longer time interval between sequential infections was associated with dengue hemorrhagic fever (DHF) in the second detected infection (OR 1.3, 95% CI 1.2-1.4). All possible sequential serotype pairs were observed among 201 subjects with DHF at the second detected infection, except DENV-4 followed by DENV-3. Among DENV infections detected in cohort subjects by active study surveillance and subsequent non-study hospital-based passive surveillance, hospitalization at the first detected infection increased the likelihood of hospitalization at the second detected infection. Increasing time between sequential DENV infections was associated with greater severity of the second detected infection, supporting the role of heterotypic immunity in both protection and enhancement. Hospitalization was positively associated between the first and second detected infections, suggesting a possible predisposition in some individuals to more severe dengue disease.

  8. Near real-time adverse drug reaction surveillance within population-based health networks: methodology considerations for data accrual.

    PubMed

    Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S

    2013-05-01

    This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with 1.5 month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with 1.5 month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
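
    For reference, the Poisson form of the maximized sequential probability ratio test used in this kind of surveillance reduces to a short function. The monthly counts, expected counts, and critical value below are illustrative (the critical value would normally come from exact maxSPRT tables or software), and the study's comparison against a historical branded-product control is not modeled.

```python
import numpy as np

def poisson_maxsprt_llr(observed, expected):
    """Log-likelihood ratio of the Poisson maxSPRT at one look.

    observed : cumulative observed events in the exposed group
    expected : cumulative expected events under the null (historical rate)
    """
    if observed <= expected:
        return 0.0
    return (expected - observed) + observed * np.log(observed / expected)

# Monthly cumulative counts as data accrue (illustrative numbers only).
expected = np.cumsum([1.8, 2.0, 2.1, 1.9, 2.2])
observed = np.cumsum([2, 3, 5, 4, 6])
critical_value = 2.85   # assumed; exact value depends on alpha and surveillance length

for month, (c, u) in enumerate(zip(observed, expected), start=1):
    llr = poisson_maxsprt_llr(c, u)
    print(month, c, round(u, 1), round(llr, 3),
          "SIGNAL" if llr >= critical_value else "")
```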

  9. Distortion product otoacoustic emissions: comparison of sequential vs. simultaneous presentation of primary tones.

    PubMed

    Kumar, U Ajith; Maruthy, Sandeep; Chandrakant, Vishwakarma

    2009-03-01

    Distortion product otoacoustic emissions (DPOAEs) are one form of evoked otoacoustic emissions. DPOAEs provide frequency-specific information about hearing status in the mid- and high-frequency regions. In most screening protocols, however, TEOAEs are preferred because they require less time than DPOAEs; in DPOAE testing, each stimulus is presented one after the other and the responses are analyzed in turn. The Grason-Stadler 60 (GSI-60) offers simultaneous presentation of four sets of primary tones at a time and measures the resulting DPOAEs. In this mode, all pairs are presented at once and the responses are then extracted separately, whereas in the sequential mode the primaries are presented in order, one pair after the other. In this article, simultaneous and sequential protocols were compared in terms of DPOAE amplitude, noise floor and administration time in individuals with normal hearing and mild sensorineural (SN) hearing loss. In the simultaneous protocol, four sets of primary tones (i.e., 8 tones) were presented together, whereas in the sequential mode one set of primary tones was presented at a time. The simultaneous protocol was completed in less than half the time required for the sequential protocol. The two techniques yielded similar results at frequencies above 1000 Hz only in the normal-hearing group. In the SN hearing loss group, simultaneous presentation yielded significantly higher noise floors and distortion product amplitudes. This result challenges the use of the simultaneous presentation technique in neonatal hearing screening programmes and in other pathologies. The discrepancy between the two protocols may be due to changes in biomechanical processes in the cochlea and/or to higher distortion/noise produced by the system during simultaneous presentation.

  10. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
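
    The normatively optimal policy over a five-trial block with rewards that depend only on the final outcome can, in principle, be computed by backward induction over the probabilistic future states. The sketch below is a generic finite-horizon example under assumed states, actions and outcome probabilities, not the task or model used in this study.

    ```python
    # Generic backward-induction sketch for a finite-horizon sequential decision task
    # where only the terminal state is rewarded (e.g., avoiding "starvation" after
    # five trials). States, actions, and outcome probabilities are hypothetical.
    HORIZON = 5
    MAX_ENERGY = 10

    # action -> list of (probability, energy_change)
    ACTIONS = {
        "safe":  [(1.0, -1)],                 # certain small loss
        "risky": [(0.5, +3), (0.5, -3)],      # gamble with larger swings
    }

    def terminal_reward(energy: int) -> float:
        return 1.0 if energy > 0 else 0.0     # survive the block or not

    def clamp(e: int) -> int:
        return max(0, min(MAX_ENERGY, e))

    # value[t][e] = expected terminal reward from energy state e with t trials remaining
    value = [[0.0] * (MAX_ENERGY + 1) for _ in range(HORIZON + 1)]
    policy = [[None] * (MAX_ENERGY + 1) for _ in range(HORIZON + 1)]
    for e in range(MAX_ENERGY + 1):
        value[0][e] = terminal_reward(e)

    for t in range(1, HORIZON + 1):
        for e in range(MAX_ENERGY + 1):
            best_action, best_value = None, float("-inf")
            for action, outcomes in ACTIONS.items():
                expected = sum(p * value[t - 1][clamp(e + delta)] for p, delta in outcomes)
                if expected > best_value:
                    best_action, best_value = action, expected
            value[t][e], policy[t][e] = best_value, best_action

    print("optimal first choice from energy 2:", policy[HORIZON][2])
    print("survival probability under optimal policy:", round(value[HORIZON][2], 3))
    ```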

  11. Enhancing Sequential Time Perception and Storytelling Ability of Deaf and Hard of Hearing Children

    ERIC Educational Resources Information Center

    Ingber, Sara; Eden, Sigal

    2011-01-01

    A 3-month intervention was conducted to enhance the sequential time perception and storytelling ability of young children with hearing loss. The children were trained to arrange pictorial episodes of temporal scripts and tell the stories they created. Participants (N = 34, aged 4-7 years) were divided into 2 groups based on whether their…

  12. CT fluoroscopy-assisted puncture of thoracic and abdominal masses: a randomized trial.

    PubMed

    Kirchner, Johannes; Kickuth, Ralph; Laufer, Ulf; Schilling, Esther Maria; Adams, Stephan; Liermann, Dieter

    2002-03-01

    We investigated the benefit of real-time guidance of interventional punctures by means of computed tomography fluoroscopy (CTF) compared with conventional sequential acquisition guidance. In a prospective randomized trial, 75 patients underwent either CTF-guided (group A, n = 50) or sequential CT-guided (group B, n = 25) punctures of thoracic (n = 29) or abdominal (n = 46) masses. CTF was performed on a CT machine (Somatom Plus 4 Power, Siemens Corp., Forchheim, Germany) equipped with the C.A.R.E. Vision application (tube voltage 120 kV, tube current 50 mA, rotational time 0.75 s, slice thickness 10 mm, 8 frames/s). The average procedure time showed a statistically significant difference between the two study groups (group A: 564 s, group B: 795 s, P = 0.0032). The mean total exposure was 7089 mAs for the CTF-guided and 4856 mAs for the sequential image-guided intervention. The sensitivity, specificity, positive predictive value, and negative predictive value were 71, 100, 100, and 60% for the CTF-guided puncture, and 68, 100, 100, and 50% for sequential CT, respectively. CTF guidance saves procedure time but increases radiation exposure.

  13. Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions

    PubMed Central

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung

    2014-01-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498
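
    The preferential-attachment component of the sequential dynamics, in which an option becomes more likely to be chosen the more often it has been chosen before, can be illustrated with a toy simulation. The options, pseudo-counts and number of trials below are assumptions for illustration, not the authors' dual-control model.

    ```python
    # Toy simulation of preferential attachment in sequential choices among four
    # options: the next choice is drawn with probability proportional to past
    # choice counts (plus a small baseline). Parameters are illustrative.
    import random

    random.seed(1)
    options = ["food_A", "food_B", "food_C", "food_D"]
    counts = {o: 1 for o in options}   # start with a uniform pseudo-count baseline

    choices = []
    for _ in range(500):
        total = sum(counts.values())
        weights = [counts[o] / total for o in options]
        pick = random.choices(options, weights=weights, k=1)[0]
        counts[pick] += 1
        choices.append(pick)

    # A highly biased ("rich-get-richer") choice distribution typically emerges.
    for o in options:
        print(o, choices.count(o))
    ```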

  14. Multiple scene attitude estimator performance for LANDSAT-1

    NASA Technical Reports Server (NTRS)

    Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.

    1979-01-01

    Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined over four scenes. With the revised estimator, a GCP-poor scene - one with no usable geodetic control points (GCPs) - can be rectified to higher accuracy than otherwise, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined using GCPs located in the GCP-poor test scene that were not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be achieved for GCP-poor scenes. Operational factors are discussed for various scenarios.
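
    For orientation, a linear sequential (Kalman) estimator reduces, in its simplest scalar form, to a predict-update loop in which each GCP-derived measurement corrects the current attitude estimate in proportion to the Kalman gain. The sketch below is a deliberately simplified one-dimensional version with hypothetical noise levels and measurements, not the multi-scene LANDSAT estimator.

    ```python
    # Minimal 1-D Kalman filter sketch: a slowly drifting attitude state observed
    # through noisy GCP-derived measurements. All numbers are illustrative.
    def kalman_1d(measurements, process_var=1e-4, meas_var=0.05, x0=0.0, p0=1.0):
        x, p = x0, p0                      # state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + process_var            # predict: random-walk attitude drift
            k = p / (p + meas_var)         # Kalman gain
            x = x + k * (z - x)            # update with the GCP measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return estimates

    gcp_attitude_obs = [0.12, 0.10, 0.15, 0.11, 0.13]   # hypothetical attitude offsets (deg)
    print(kalman_1d(gcp_attitude_obs))
    ```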

  15. Reverse osmosis water purification system

    NASA Technical Reports Server (NTRS)

    Ahlstrom, H. G.; Hames, P. S.; Menninger, F. J.

    1986-01-01

    A reverse osmosis water purification system, which uses a programmable controller (PC) as the control system, was designed and built to maintain the cleanliness and level of water for various systems of a 64-m antenna. The installation operates with other equipment of the antenna at the Goldstone Deep Space Communication Complex. The reverse osmosis system was designed to be fully automatic; with the PC, many complex sequential and timed logic networks were easily implemented and can be modified. The PC monitors water levels, pressures, flows, control panel requests, and set points on analog meters; with this information, various processes are initiated, monitored, modified, halted, or eliminated as required by the equipment being supplied with pure water.

  16. Continuous inline blending of antimisting kerosene

    NASA Technical Reports Server (NTRS)

    Parikh, P.; Yavrouian, A.; Sarohia, V.

    1985-01-01

    A continuous inline blender was developed to blend polymer slurries with a stream of jet A fuel. The viscosity of the slurries ranged widely. The key element of the blender was a static mixer placed immediately downstream of the slurry injection point. A positive displacement gear pump for jet A was employed, and a progressive cavity rotary screw pump was used for slurry pumping. Turbine flow meters were employed for jet A metering while the slurry flow rate was calibrated against the pressure drop in the injection tube. While using one of the FM-9 variant slurries, a provision was made for a time delay between the addition of slurry and the addition of amine sequentially into the jet A stream.

  17. Neurocognitive mechanisms of statistical-sequential learning: what do event-related potentials tell us?

    PubMed Central

    Daltrozzo, Jerome; Conway, Christopher M.

    2014-01-01

    Statistical-sequential learning (SL) is the ability to process patterns of environmental stimuli, such as spoken language, music, or one’s motor actions, that unfold in time. The underlying neurocognitive mechanisms of SL and the associated cognitive representations are still not well understood as reflected by the heterogeneity of the reviewed cognitive models. The purpose of this review is: (1) to provide a general overview of the primary models and theories of SL, (2) to describe the empirical research – with a focus on the event-related potential (ERP) literature – in support of these models while also highlighting the current limitations of this research, and (3) to present a set of new lines of ERP research to overcome these limitations. The review is articulated around three descriptive dimensions in relation to SL: the level of abstractness of the representations learned through SL, the effect of the level of attention and consciousness on SL, and the developmental trajectory of SL across the life-span. We conclude with a new tentative model that takes into account these three dimensions and also point to several promising new lines of SL research. PMID:24994975

  18. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    NASA Astrophysics Data System (ADS)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
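
    The RSA process itself is easy to sketch; what is hard, and what this paper addresses, is reaching the saturation limit exactly rather than by extrapolation. The toy example below places non-overlapping discs on a unit square by simple rejection and stops after a fixed number of failed attempts, an extrapolation-style stand-in rather than the authors' exact-saturation algorithm; all parameters are illustrative.

    ```python
    # Naive RSA sketch: place non-overlapping discs of radius r on a unit square by
    # rejection sampling, stopping after a fixed number of consecutive failures.
    import random

    def rsa_discs(radius=0.05, max_failures=20000, seed=0):
        random.seed(seed)
        centers, failures = [], 0
        while failures < max_failures:
            x, y = random.random(), random.random()
            if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2 for cx, cy in centers):
                centers.append((x, y))
                failures = 0
            else:
                failures += 1
        return centers

    packing = rsa_discs()
    density = len(packing) * 3.141592653589793 * 0.05 ** 2   # ignores edge effects
    print(f"{len(packing)} discs placed, covering fraction ≈ {density:.3f}")
    ```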

  19. Precise algorithm to generate random sequential adsorption of hard polygons at saturation.

    PubMed

    Zhang, G

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.

  20. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover customer purchasing patterns over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data used in sequential pattern mining have the following characteristics: massive volume and distributed storage. Most existing sequential pattern mining algorithms do not consider these characteristics together. Based on these traits and on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequent-pattern concept and search-space partitioning theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and different information structures, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves excellent speedup and efficiency.
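
    As background to the pattern-growth idea (not the SPP algorithm itself, which adds search-space partitioning and parallel depth-first growth), the sketch below counts the support of candidate sequential patterns in a small sequence database; the data and minimum support threshold are hypothetical.

    ```python
    # Simplified sequential-pattern support counting (illustrative only).
    from itertools import product

    database = [                      # hypothetical customer purchase sequences
        ["a", "b", "c", "d"],
        ["a", "c", "d"],
        ["b", "a", "d"],
        ["a", "b", "d"],
    ]
    MIN_SUPPORT = 3

    def is_subsequence(pattern, sequence):
        it = iter(sequence)
        return all(item in it for item in pattern)   # consumes the iterator in order

    def support(pattern):
        return sum(is_subsequence(pattern, seq) for seq in database)

    items = sorted({item for seq in database for item in seq})
    frequent_1 = [(i,) for i in items if support((i,)) >= MIN_SUPPORT]
    frequent_2 = [(i, j) for (i,), (j,) in product(frequent_1, frequent_1)
                  if support((i, j)) >= MIN_SUPPORT]
    print("frequent length-1:", frequent_1)
    print("frequent length-2:", frequent_2)
    ```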

  1. CACTI: Free, Open-Source Software for the Sequential Coding of Behavioral Interactions

    PubMed Central

    Glynn, Lisa H.; Hallgren, Kevin A.; Houck, Jon M.; Moyers, Theresa B.

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713

  2. Assessment of sequential same arm agreement of blood pressure measurements by a CVProfilor DO-2020 versus a Baumanometer mercury sphygmomanometer.

    PubMed

    Prisant, L M; Resnick, L M; Hollenberg, S M

    2001-06-01

    The aim of this study was to compare sequential same-arm blood pressure measurements obtained with a mercury sphygmomanometer against oscillometric blood pressure measurements from a device that also determines arterial elasticity. A prospective, multicentre, clinical study evaluated sequential same-arm blood pressure measurements using a mercury sphygmomanometer (Baumanometer, W. A. Baum Co., Inc., Copiague, New York, USA) and an oscillometric non-invasive device that calculates arterial elasticity (CVProfilor DO-2020 Cardiovascular Profiling System, Hypertension Diagnostics, Inc., Eagan, Minnesota, USA). Blood pressure was measured supine in triplicate, 3 min apart, in a randomized sequence after a period of rest. The study population of 230 normotensive and hypertensive subjects included 57% females, 51% Caucasians, and 33% African Americans. The mean difference between test methods for systolic blood pressure, diastolic blood pressure, and heart rate was -3.2 +/- 6.9 mmHg, +0.8 +/- 5.9 mmHg, and +1.0 +/- 5.7 beats/minute, respectively. For systolic and diastolic blood pressure, 60.9 and 70.4% of sequential measurements by each method were within +/- 5 mmHg. Few or no points fell beyond the mean +/- 2 standard deviation lines for each cuff bladder size. In sequential same-arm measurements, the CVProfilor DO-2020 Cardiovascular Profiling System measures blood pressure by an oscillometric method (dynamic linear deflation) in reasonable agreement with a mercury sphygmomanometer.
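
    The agreement statistics reported here (mean paired difference ± SD and the percentage of measurement pairs within ±5 mmHg) follow directly from the paired readings, as in the sketch below; the readings themselves are hypothetical.

    ```python
    # Sketch of sequential same-arm agreement statistics: mean difference, SD of
    # differences, and percentage of pairs within ±5 mmHg. Data are hypothetical.
    import statistics

    oscillometric = [128, 134, 119, 142, 125, 131]   # device systolic readings (mmHg)
    mercury       = [131, 138, 120, 147, 127, 133]   # sphygmomanometer readings (mmHg)

    diffs = [o - m for o, m in zip(oscillometric, mercury)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)
    within_5 = 100.0 * sum(abs(d) <= 5 for d in diffs) / len(diffs)

    print(f"mean difference: {mean_diff:+.1f} ± {sd_diff:.1f} mmHg")
    print(f"pairs within ±5 mmHg: {within_5:.1f}%")
    ```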

  3. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose

    PubMed Central

    Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    Objectives To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. Methods 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. A CTDIvol identical to the initial 4-slice MDCT examination was applied. Data were reconstructed using filtered backward projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and the nucleus caudatus, with signal-to-noise ratios (SNR) subsequently calculated. Results Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions (p<0.05). Mean SNR was significantly higher in all spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p<0.0024, Bonferroni-corrected alpha = 0.05). Subjective image quality improved with increasing IR levels. Conclusion Combination of 3rd-generation DSCT spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels. PMID:26288186

  4. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose.

    PubMed

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. A CTDIvol identical to the initial 4-slice MDCT examination was applied. Data were reconstructed using filtered backward projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and the nucleus caudatus, with signal-to-noise ratios (SNR) subsequently calculated. Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions (p<0.05). Mean SNR was significantly higher in all spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p<0.0024, Bonferroni-corrected alpha = 0.05). Subjective image quality improved with increasing IR levels. Combination of 3rd-generation DSCT spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels.

  5. Conversation analysis can help to distinguish between epilepsy and non-epileptic seizure disorders: a case comparison.

    PubMed

    Plug, Leendert; Sharrack, Basil; Reuber, Markus

    2009-01-01

    Factual items in patients' histories are of limited discriminating value in the differential diagnosis of epilepsy and non-epileptic seizures (NES). A number of studies using a transcript-based sociolinguistic research method inspired by Conversation Analysis (CA) suggest that it is helpful to focus on how patients talk. Previous reports communicated these findings by using particularly clear examples of diagnostically relevant interactional, linguistic and topical features from different patients. They did not discuss the sequential display of different features, although this is crucially important from a conversation analytic point of view. This case comparison aims to show clinicians how the discriminating features are displayed by individual patients over the course of a clinical encounter. A brief CA-inspired sequential analysis of two initial 30-minute doctor-patient encounters was carried out by a linguist blinded to all medical information. A gold standard diagnosis was made by the recording of a typical seizure with video-EEG. The patient with epilepsy volunteered detailed first-person accounts of seizures. The NES patient exhibited resistance to focusing on individual seizure episodes and only provided a detailed seizure description after repeated prompting towards the end of the interview. Although both patients also displayed some linguistic features favouring the alternative diagnosis, the linguist's final diagnostic hypothesis matched the diagnosis made by video-EEG in both cases. This study illustrates the importance of the time point at which patients share information with the doctor. It supports the notion that close attention to how patients communicate can help in the differential diagnosis of seizures.

  6. Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    PubMed Central

    Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor K.

    2011-01-01

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim of incorporating both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanisms act on faster and slower time scales, respectively, than that inherent to the modes. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time). PMID:21980278

  7. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    PubMed

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared with primates, experimental work in rodents is rather sparse, largely because tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on the performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletion in the neostriatum, around 60% in the lateral and around 40% in the medial neostriatum. The lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group produced less automated responses. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance compared with pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. However, these lesions are not sufficient to completely abolish sequential performance, at least when the sequence was acquired before the lesion, as tested here.

  8. Effect of short-term sequential administration of nonsteroidal anti-inflammatory drugs on the stomach and proximal portion of the duodenum in healthy dogs.

    PubMed

    Dowers, Kristy L; Uhrig, Samantha R; Mama, Khursheed R; Gaynor, James S; Hellyer, Peter W

    2006-10-01

    To evaluate effects of injection with a nonsteroidal anti-inflammatory drug (NSAID) followed by oral administration of an NSAID on the gastrointestinal tract (GIT) of healthy dogs. 6 healthy Walker Hounds. In a randomized, crossover design, dogs were administered 4 treatments consisting of an SC injection of an NSAID or control solution (day 0), followed by oral administration of an NSAID or inert substance for 4 days (days 1 through 4). Treatment regimens included carprofen (4 mg/kg) followed by inert substance; saline (0.9% NaCl) solution followed by deracoxib (4 mg/kg); carprofen (4 mg/kg) followed by carprofen (4 mg/kg); and carprofen (4 mg/kg) followed by deracoxib (4 mg/kg). Hematologic, serum biochemical, and fecal evaluations were conducted weekly, and clinical scores were obtained daily. Endoscopy of the GIT was performed before and on days 1, 2, and 5 for each treatment. Lesions were scored by use of a 6-point scale. No significant differences existed for clinical data, clinicopathologic data, or lesion scores in the esophagus, cardia, or duodenum. For the gastric fundus, antrum, and lesser curvature, an effect of time was observed for all treatments, with lesions worsening from before to day 2 of treatments but improving by day 5. Sequential administration of NSAIDs in this experiment did not result in clinically important gastroduodenal ulcers. A larger study to investigate the effect of sequential administration of NSAIDs for longer durations and in dogs with signs of acute and chronic pain is essential to substantiate these findings.

  9. Monitoring variations of dimethyl sulfide and dimethylsulfoniopropionate in seawater and the atmosphere based on sequential vapor generation and ion molecule reaction mass spectrometry.

    PubMed

    Iyadomi, Satoshi; Ezoe, Kentaro; Ohira, Shin-Ichi; Toda, Kei

    2016-04-01

    To monitor the fluctuations of dimethyl sulfur compounds at the seawater/atmosphere interface, an automated system was developed based on sequential injection analysis coupled with vapor generation-ion molecule reaction mass spectrometry (SIA-VG-IMRMS). Using this analytical system, dissolved dimethyl sulfide (DMS(aq)) and dimethylsulfoniopropionate (DMSP), a precursor to DMS in seawater, were monitored together sequentially with atmospheric dimethyl sulfide (DMS(g)). A shift from the equilibrium point between DMS(aq) and DMS(g) results in the emission of DMS to the atmosphere. Atmospheric DMS emitted from seawater plays an important role as a source of cloud condensation nuclei, which influences the oceanic climate. Water samples were taken periodically and dissolved DMS(aq) was vaporized for analysis by IMRMS. After that, DMSP was hydrolyzed to DMS and acrylic acid, and analyzed in the same manner as DMS(aq). The vaporization behavior and hydrolysis of DMSP to DMS were investigated to optimize these conditions. Frequent (every 30 min) determination of the three components, DMS(aq)/DMSP (nanomolar) and DMS(g) (ppbv), was carried out by SIA-VG-IMRMS. Field analysis of the dimethyl sulfur compounds was undertaken at a coastal station, which succeeded in showing detailed variations of the compounds in a natural setting. Observed concentrations of the dimethyl sulfur compounds both in the atmosphere and seawater largely changed with time and similar variations were repeatedly observed over several days, suggesting diurnal variations in the DMS flux at the seawater/atmosphere interface.

  10. Use of exocentric and egocentric representations in the concurrent planning of sequential saccades.

    PubMed

    Sharika, K M; Ramakrishnan, Arjun; Murthy, Aditya

    2014-11-26

    The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades. Copyright © 2014 the authors 0270-6474/14/3416009-13$15.00/0.

  11. United Kingdom national paediatric bilateral project: Results of professional rating scales and parent questionnaires.

    PubMed

    Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K

    2017-01-01

    This fourteen-centre project used professional rating scales and parent questionnaires to assess longitudinal outcomes in a large non-selected population of children receiving simultaneous and sequential bilateral cochlear implants. This was an observational non-randomized service evaluation. Data were collected at four time points: before bilateral cochlear implants or before the sequential implant, and one year, two years, and three years after. The measures reported are Categories of Auditory Performance II (CAPII), Speech Intelligibility Rating (SIR), Bilateral Listening Skills Profile (BLSP) and Parent Outcome Profile (POP). One thousand and one children aged from 8 months to almost 18 years were involved, although there were many missing data. In children receiving simultaneous implants, after one, two, and three years respectively, median CAP scores were 4, 5, and 6, and median SIR scores were 1, 2, and 3. Three years after receiving simultaneous bilateral cochlear implants, 61% of children were reported to understand conversation without lip-reading and 66% had intelligible speech if the listener concentrated hard. Auditory performance and speech intelligibility were significantly better in female children than in males. Parents of children using sequential implants were generally positive about their child's well-being and behaviour since receiving the second device; those who were less positive about well-being changes also generally reported their children less willing to wear the second device. Data from 78% of paediatric cochlear implant centres in the United Kingdom provide a real-world picture of outcomes of children with bilateral implants in the UK. This large reference data set can be used to identify children in the lower quartile for targeted intervention.

  12. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been base line corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
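
    The core computation, an element-wise ratio of baseline-corrected chromatograms followed by a variance-weighted average over the pure-elution window, can be sketched as follows. The signals, the choice of weights as a signal-to-noise proxy, and the window indices are assumptions for illustration, not the authors' exact weighting scheme.

    ```python
    # Sketch of the ratio-chromatogram idea: element-wise ratio of two baseline-
    # corrected chromatograms, then a weighted mean ratio over a window where a
    # single analyte elutes. Signals and weights are hypothetical.
    chrom_1 = [0.1, 0.4, 2.0, 6.5, 9.8, 6.4, 2.1, 0.5, 0.1]   # first injection
    chrom_2 = [0.1, 0.5, 2.6, 8.4, 12.7, 8.3, 2.7, 0.6, 0.1]  # second (spiked) injection

    ratio = [c2 / c1 if c1 != 0 else float("nan") for c1, c2 in zip(chrom_1, chrom_2)]

    # Weight each point by the smaller of the two signals as a crude proxy for its
    # signal-to-noise ratio, and average over the pure-elution window (indices 2-6).
    window = range(2, 7)
    weights = [min(chrom_1[i], chrom_2[i]) for i in window]
    weighted_ratio = sum(w * ratio[i] for w, i in zip(weights, window)) / sum(weights)
    print(f"concentration ratio between injections ≈ {weighted_ratio:.3f}")
    ```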

  13. SSME propellant path leak detection real-time

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.; Smith, L. M.

    1994-01-01

    Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.

  14. Microcomputer Applications in Interaction Analysis.

    ERIC Educational Resources Information Center

    Wadham, Rex A.

    The Timed Interval Categorical Observation Recorder (TICOR), a portable, battery powered microcomputer designed to automate the collection of sequential and simultaneous behavioral observations and their associated durations, was developed to overcome problems in gathering subtle interaction analysis data characterized by sequential flow of…

  15. A Strategy for Understanding Noise-Induced Annoyance

    DTIC Science & Technology

    1988-08-01

    Estimation by Sequential Testing (PEST) (Taylor and Creelman, 1967) can be used to efficiently establish the indifference point for each such pair of...population on applicability of noise rating procedures". Noise Control Engineering, 4, 65-70. Taylor, M. M. & Creelman, C. D. "PEST: Efficient

  16. A Method for Optimizing Lightweight-Gypsum Design Based on Sequential Measurements of Physical Parameters

    NASA Astrophysics Data System (ADS)

    Vimmrová, Alena; Kočí, Václav; Krejsová, Jitka; Černý, Robert

    2016-06-01

    A method for lightweight-gypsum material design using waste stone dust as the foaming agent is described. The main objective is to achieve several physical properties that are, to some extent, inversely related. Therefore, a linear optimization method is applied to handle this task systematically. The optimization process is based on sequential measurement of physical properties. The results are subsequently scored according to a composite point criterion, and a new composition is proposed. After 17 trials the final mixture is obtained, having a bulk density of (586 ± 19) kg/m3 and a compressive strength of (1.10 ± 0.07) MPa. According to a detailed comparative analysis with reference gypsum, the newly developed material can be used as an excellent thermally insulating interior plaster with a thermal conductivity of (0.082 ± 0.005) W/(m·K). In addition, its practical application can bring substantial economic and environmental benefits, as the material contains 25 % of waste stone dust.

  17. Sequential motion of the ossicular chain measured by laser Doppler vibrometry.

    PubMed

    Kunimoto, Yasuomi; Hasegawa, Kensaku; Arii, Shiro; Kataoka, Hideyuki; Yazama, Hiroaki; Kuya, Junko; Fujiwara, Kazunori; Takeuchi, Hiromi

    2017-12-01

    In order to help a surgeon make the best decision, a more objective method of measuring ossicular motion is required. A laser Doppler vibrometer was mounted on a surgical microscope. To measure ossicular chain vibrations, eight patients with cochlear implants were investigated. To assess the motions of the ossicular chain, velocities at five points were measured with tonal stimuli of 1 and 3 kHz, which yielded reproducible results. The sequential amplitude change at each point was calculated with phase shifting from the tonal stimulus. Motion of the ossicular chain was visualized from the averaged results using the graphics application. The head of the malleus and the body of the incus showed synchronized movement as one unit. In contrast, the stapes (incudostapedial joint and posterior crus) moved synchronously in opposite phase to the malleus and incus. The amplitudes at 1 kHz were almost twice those at 3 kHz. Our results show that the malleus and incus unit and the stapes move with a phase difference.

  18. Reliability of sternal instability scale (SIS) for transverse sternotomy in lung transplantation (LTX).

    PubMed

    Fuller, Louise M; El-Ansary, Doa; Button, Brenda; Bondarenko, Janet; Marasco, Silvana; Snell, Greg; Holland, Anne E

    2018-01-25

    A surgical incision for bilateral sequential lung transplantation (BSLTX) is the "clam shell" (CSI) approach via bilateral anterior thoracotomies and a transverse sternotomy to allow for sequential replacement of the lungs. This can be associated with significant post-operative pain, bony overriding or sternal instability. The sternal instability scale (SIS) is a non-invasive manual assessment tool that can be used to detect early bony non-union or instability following CSI; however, its reliability is unknown. This prospective blinded reliability study aimed to assess intra-rater and inter-rater reliability of the SIS following lung transplantation. Participants older than 18 years who had undergone BSLTX received sternal assessment utilizing the SIS. Two assessors examined the sternum using a standardized protocol at two separate time points with a test-retest interval of 48 hours. The outcome measure was the SIS, which uses four categories from 0 (clinically stable) to 3 (separated sternum with overriding). In total, 20 participants (75% female) with a mean age of 48 years (SD 17) and a mean pain score of 3 out of 10 were included, 60% having well healed wounds and 25% reporting symptoms of sternal clicking. The most painful self-reported activity was coughing. The SIS demonstrated excellent reliability, with kappa = 0.91 for different assessors on the same day and kappa = 0.83 for assessments by the same assessor on different days. The SIS is a reliable manual assessment tool for evaluation of sternal instability after CSI following BSLTX and may facilitate the timely detection and management of sternal instability.
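
    The reliability figures quoted are unweighted Cohen's kappa values, which can be computed from paired categorical ratings as in the sketch below; the ratings on the 0-3 scale are hypothetical.

    ```python
    # Sketch of an unweighted Cohen's kappa for paired ratings on the 0-3
    # sternal instability scale. Ratings are hypothetical.
    from collections import Counter

    rater_a = [0, 0, 1, 2, 0, 3, 1, 0, 2, 0]
    rater_b = [0, 0, 1, 2, 0, 3, 0, 0, 2, 0]

    n = len(rater_a)
    observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected_agreement = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
    print(f"observed {observed_agreement:.2f}, expected {expected_agreement:.2f}, kappa {kappa:.2f}")
    ```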

  19. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted to an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months over a simulation length of 70 years. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
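
    At its core, a discrete event simulation of sequential treatment advances a clock through a priority queue of timed events (assessments, treatment switches, death) rather than through fixed cycles. The sketch below is a bare-bones, single-patient skeleton with hypothetical events and switching rules, not the published AS model.

    ```python
    # Minimal discrete-event simulation skeleton (illustrative only): a priority
    # queue of timed events drives patient-level updates. Events and the
    # treatment-switching rule are hypothetical.
    import heapq

    def simulate_patient(max_years=70.0):
        queue = []
        heapq.heappush(queue, (0.25, "assessment"))      # first response assessment at 3 months
        heapq.heappush(queue, (60.0, "death"))           # hypothetical time of death
        treatment_line, history = 1, []
        while queue:
            clock, event = heapq.heappop(queue)
            if clock > max_years or event == "death":
                history.append((clock, "end"))
                break
            if event == "assessment":
                history.append((clock, f"assess line {treatment_line}"))
                treatment_line += 1                      # hypothetical rule: switch drug, re-assess
                heapq.heappush(queue, (clock + 0.25, "assessment"))
            if treatment_line > 5:                       # exhausted the drug sequence
                history.append((clock, "best supportive care"))
                break
        return history

    print(simulate_patient()[:6])
    ```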

  20. Time-lapse evaluation of human embryo development in single versus sequential culture media--a sibling oocyte study.

    PubMed

    Ciray, Haydar Nadir; Aksoy, Turan; Goktas, Cihan; Ozturk, Bilgen; Bahceci, Mustafa

    2012-09-01

    To compare the dynamics of early development between embryos cultured in single and sequential media. Randomized, comparative study. Private IVF centre. A total of 446 metaphase II oocytes from 51 couples who underwent an oocyte retrieval procedure for intracytoplasmic sperm injection were included; forty-nine resulted in embryo transfer. Oocytes were split between single and sequential media produced by the same manufacturer and cultured in a time-lapse incubator. Morphokinetic parameters until the embryos reached the 5-cell stage (t5), utilization, clinical pregnancy and implantation rates were assessed. Embryos cultured in single medium were advanced from the first mitotic cycle and reached the 2- to 5-cell stages earlier. There was no difference between the durations of cell cycle two (cc2 = t3-t2) and s2 (t4-t3). The utilization, clinical pregnancy and implantation rates did not differ between groups. The proportion of cryopreserved day 6 embryos to two-pronuclei oocytes was significantly higher in sequential than in single media. Morphokinetics of embryo development vary between single and sequential culture media at least until the 5-cell stage. The overall clinical and embryological parameters remain similar regardless of the culture system.

  1. A field test of three LQAS designs to assess the prevalence of acute malnutrition.

    PubMed

    Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary

    2007-08-01

    The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests showed GAM as <10% for the 33 x 6 design and GAM as > or =10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%, 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
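
    An LQAS classification ultimately reduces to a binomial decision rule: with a total sample of n children and a decision rule d, an area is judged to have reached a GAM threshold when at least d sampled children are malnourished. The sketch below shows this generic rule and its misclassification risks under assumed parameters; the sample size echoes the 33 x 6 design, but the decision rule and benchmark prevalences are illustrative, not the field-tested designs.

    ```python
    # Generic LQAS decision-rule sketch: classify an area as at/above a prevalence
    # threshold when at least `decision_rule` cases appear in `sample_size` children,
    # and report the misclassification risks at two benchmark prevalences.
    from math import comb

    def prob_at_least(k, n, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    sample_size, decision_rule = 198, 25          # 198 = 33 clusters x 6 children; rule is illustrative
    upper_threshold, lower_benchmark = 0.15, 0.10 # classify >=15% GAM vs <=10% GAM

    alpha = 1 - prob_at_least(decision_rule, sample_size, upper_threshold)  # miss a true >=15% area
    beta = prob_at_least(decision_rule, sample_size, lower_benchmark)       # flag a true <=10% area
    print(f"risk of missing high prevalence: {alpha:.3f}")
    print(f"risk of false alarm at low prevalence: {beta:.3f}")
    ```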

  2. Re-constructing nutritional history of Serengeti wildebeest from stable isotopes in tail hair: seasonal starvation patterns in an obligate grazer.

    PubMed

    Rysava, K; McGill, R A R; Matthiopoulos, J; Hopcraft, J G C

    2016-07-15

    Nutritional bottlenecks often limit the abundance of animal populations and alter individual behaviours; however, establishing animal condition over extended periods of time using non-invasive techniques has been a major limitation in population ecology. We test if the sequential measurement of δ(15) N values in a continually growing tissue, such as hair, can be used as a natural bio-logger akin to tree rings or ice cores to provide insights into nutritional stress. Nitrogen stable isotope ratios were measured by continuous-flow isotope-ratio mass spectrometry (IRMS) from 20 sequential segments along the tail hairs of 15 migratory wildebeest. Generalized Linear Models were used to test for variation between concurrent segments of hair from the same individual, and to compare the δ(15) N values of starved and non-starved animals. Correlations between δ(15) N values in the hair and periods of above-average energy demand during the annual cycle were tested using Generalized Additive Mixed Models. The time series of nitrogen isotope ratios in the tail hair are comparable between strands from the same individual. The most likely explanation for the pattern of (15) N enrichment between individuals is determined by life phase, and especially the energetic demands associated with reproduction. The mean δ(15) N value of starved animals was greater than that of non-starved animals, suggesting that higher δ(15) N values correlate with periods of nutritional stress. High δ(15) N values in the tail hair of wildebeest are correlated with periods of negative energy balance, suggesting they may be used as a reliable indicator of the animal's nutritional history. This technique might be applicable to other obligate grazers. Most importantly, the sequential isotopic analysis of hair offers a continuous record of the chronic condition of wildebeest (effectively converting point data into time series) and allows researchers to establish the animal's nutritional diary. © 2016 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd.

  3. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, G.

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here in this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.

  4. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    DOE PAGES

    Zhang, G.

    2018-04-30

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here in this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.

  5. Evaluation of energy expenditure in adult spring Chinook salmon migrating upstream in the Columbia River Basin: an assessment based on sequential proximate analysis

    USGS Publications Warehouse

    Mesa, M.G.; Magie, C.D.

    2006-01-01

    The upstream migration of adult anadromous salmonids in the Columbia River Basin (CRB) has been dramatically altered and fish may be experiencing energetically costly delays at dams. To explore this notion, we estimated the energetic costs of migration and reproduction of Yakima River-bound spring Chinook salmon Oncorhynchus tshawytscha using a sequential analysis of their proximate composition (i.e., percent water, fat, protein, and ash). Tissues (muscle, viscera, and gonad) were sampled from fish near the start of their migration (Bonneville Dam), at a mid point (Roza Dam, 510 km upstream from Bonneville Dam) and from fresh carcasses on the spawning grounds (about 100 km above Roza Dam). At Bonneville Dam, the energy reserves of these fish were remarkably high, primarily due to the high percentage of fat in the muscle (18-20%; energy content over 11 kJ g-1). The median travel time for fish from Bonneville to Roza Dam was 27 d and ranged from 18 to 42 d. Fish lost from 6 to 17% of their energy density in muscle, depending on travel time. On average, fish taking a relatively long time for migration between dams used from 5 to 8% more energy from the muscle than faster fish. From the time they passed Bonneville Dam to death, these fish, depending on gender, used 95-99% of their muscle and 73-86% of their viscera lipid stores. Also, both sexes used about 32% of their muscular and very little of their visceral protein stores. However, we were unable to relate energy use and reproductive success to migration history. Our results suggest a possible influence of the CRB hydroelectric system on adult salmonid energetics.

  6. Space-time measurements of oceanic sea states

    NASA Astrophysics Data System (ADS)

    Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice

    2013-10-01

    Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set, providing a global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface in order to guarantee continuity of the sea surface both in space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding associated directional spectra and wave statistics at a point in time that agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second order nonlinearities, and the observed shape of large waves is fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with the predictions based on stochastic theories for global maxima of Gaussian fields.

  7. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method and offers more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  8. The Magnitude, Generality, and Determinants of Flynn Effects on Forms of Declarative Memory and Visuospatial Ability: Time-Sequential Analyses of Data from a Swedish Cohort Study

    ERIC Educational Resources Information Center

    Ronnlund, Michael; Nilsson, Lars-Goran

    2008-01-01

    To estimate Flynn effects (FEs) on forms of declarative memory (episodic, semantic) and visuospatial ability (Block Design), time-sequential analyses of data for Swedish adult samples (35-80 years) assessed on one of four occasions (1989, 1994, 1999, 2004; n = 2995) were conducted. The results demonstrated cognitive gains across occasions,…

  9. Forced guidance and distribution of practice in sequential information processing.

    NASA Technical Reports Server (NTRS)

    Decker, L. R.; Rogers, C. A., Jr.

    1973-01-01

    Distribution of practice and forced guidance were used in a sequential information-processing task in an attempt to increase the capacity of human information-processing mechanisms. A reaction time index of the psychological refractory period was used as the response measure. Massing of practice lengthened response times while forced guidance shortened them. Interpretation was in terms of load reduction upon the response-selection stage of the information-processing system.

  10. Differential-Game Examination of Optimal Time-Sequential Fire-Support Strategies

    DTIC Science & Technology

    1976-09-01

    Naval Postgraduate School, Monterey, California. Technical Report NPS-55Tw76091: Differential-Game Examination of Optimal Time-Sequential Fire-Support Strategies. Keywords: differential games, Lanchester theory of combat, military tactics.

  11. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period was common to most results. The relative dominance of the chill or growth component tended to predict whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions in this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations in using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate perturbed conditions with greater model development needed.

  12. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    PubMed Central

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
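
    As a concrete illustration of the decision rule described above, the sketch below accumulates per-sample log-likelihood ratios and stops once Wald's thresholds are crossed. It is a generic SPRT in Python that assumes class-conditional likelihoods of the EEG features are available; it is not the authors' implementation, and the error-rate settings are illustrative.

```python
import numpy as np

def sprt(log_likelihood_ratios, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test over a stream of per-sample
    log-likelihood ratios log[p(x | H1) / p(x | H0)].

    Returns the decision ('H1', 'H0', or 'undecided') and the number of
    samples consumed before stopping."""
    upper = np.log((1 - beta) / alpha)   # crossing it accepts H1
    lower = np.log(beta / (1 - alpha))   # crossing it accepts H0
    total, n = 0.0, 0
    for llr in log_likelihood_ratios:
        n += 1
        total += llr
        if total >= upper:
            return "H1", n
        if total <= lower:
            return "H0", n
    return "undecided", n
```

    The explicit dependence of the thresholds on the target error rates α and β is what provides the stopping-time/accuracy trade-off mentioned in the abstract.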

  13. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines and 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
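
    The two-point prediction step can be pictured as an ordinary linear calibration: the retention times of the two reference substances on the column in use define a line that maps standard retention times to predicted ones. The sketch below is a minimal illustration with made-up retention times; the published LCTRS procedure additionally validates the line by multiple-point regression and sequential matching.

```python
def lctrs_two_point(t_std_refs, t_meas_refs, t_std_unknowns):
    """Map standard retention times onto the current column using the two
    reference substances (two-point linear calibration)."""
    (s1, s2), (m1, m2) = t_std_refs, t_meas_refs
    slope = (m2 - m1) / (s2 - s1)
    intercept = m1 - slope * s1
    return [slope * t + intercept for t in t_std_unknowns]

# Illustrative numbers only: the reference substances elute at 5.0 and 20.0 min
# in the standard system and at 5.4 and 21.1 min on the column in use.
predicted = lctrs_two_point((5.0, 20.0), (5.4, 21.1), [8.0, 12.5, 17.3])
```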

  14. Sleep to the beat: A nap favours consolidation of timing.

    PubMed

    Verweij, Ilse M; Onuki, Yoshiyuki; Van Someren, Eus J W; Van der Werf, Ysbrand D

    2016-06-01

    Growing evidence suggests that sleep is important for procedural learning, but few studies have investigated the effect of sleep on the temporal aspects of motor skill learning. We assessed the effect of a 90-min day-time nap on learning a motor timing task, using 2 adaptations of a serial interception sequence learning (SISL) task. Forty-two right-handed participants performed the task before and after a 90-min period of sleep or wake. Electroencephalography (EEG) was recorded throughout. The motor task consisted of a sequential spatial pattern and was performed according to 2 different timing conditions, that is, either following a sequential or a random temporal pattern. The increase in accuracy was compared between groups using a mixed linear regression model. Within the sleep group, performance improvement was modeled based on sleep characteristics, including spindle- and slow-wave density. The sleep group, but not the wake group, showed improvement in the random temporal, but especially and significantly more strongly in the sequential temporal condition. None of the sleep characteristics predicted improvement on either general of the timing conditions. In conclusion, a daytime nap improves performance on a timing task. We show that performance on the task with a sequential timing sequence benefits more from sleep than motor timing. More important, the temporal sequence did not benefit initial learning, because differences arose only after an offline period and specifically when this period contained sleep. Sleep appears to aid in the extraction of regularities for optimal subsequent performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. A self-timed multipurpose delay sensor for Field Programmable Gate Arrays (FPGAs).

    PubMed

    Osuna, Carlos Gómez; Ituero, Pablo; López-Vallejo, Marisa

    2013-12-20

    This paper presents a novel self-timed multi-purpose sensor especially conceived for Field Programmable Gate Arrays (FPGAs). The aim of the sensor is to measure performance variations during the life-cycle of the device, such as process variability, critical path timing and temperature variations. The proposed topology, through the use of both combinational and sequential FPGA elements, amplifies the time of a signal traversing a delay chain to produce a pulse whose width is the sensor's measurement. The sensor is fully self-timed, avoiding the need for clock distribution networks and eliminating the limitations imposed by the system clock. One single off- or on-chip time-to-digital converter is able to perform digitization of several sensors in a single operation. These features allow for a simplified approach for designers wanting to intertwine a multi-purpose sensor network with their application logic. Employed as a temperature sensor, it has been measured to have an error of ±0.67 °C, over the range of 20-100 °C, employing 20 logic elements with a 2-point calibration.
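
    For readers unfamiliar with 2-point calibration, the sketch below shows the arithmetic under the simplifying assumption that the sensor's pulse width varies approximately linearly with temperature between the two calibration points; the numbers and the linearity assumption are illustrative and are not taken from the paper.

```python
def two_point_calibration(width_at_t1, t1, width_at_t2, t2):
    """Return a function mapping a measured pulse width to temperature,
    assuming an approximately linear response between the two calibration
    points (an assumption; the paper's mapping may differ)."""
    gain = (t2 - t1) / (width_at_t2 - width_at_t1)
    return lambda width: t1 + gain * (width - width_at_t1)

# Calibrate at 20 °C and 100 °C with illustrative pulse widths in ns.
to_temp = two_point_calibration(120.0, 20.0, 310.0, 100.0)
print(to_temp(215.0))  # -> 60.0 °C
```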

  16. A Self-Timed Multipurpose Delay Sensor for Field Programmable Gate Arrays (FPGAs)

    PubMed Central

    Osuna, Carlos Gómez; Ituero, Pablo; López-Vallejo, Marisa

    2014-01-01

    This paper presents a novel self-timed multi-purpose sensor especially conceived for Field Programmable Gate Arrays (FPGAs). The aim of the sensor is to measure performance variations during the life-cycle of the device, such as process variability, critical path timing and temperature variations. The proposed topology, through the use of both combinational and sequential FPGA elements, amplifies the time of a signal traversing a delay chain to produce a pulse whose width is the sensor's measurement. The sensor is fully self-timed, avoiding the need for clock distribution networks and eliminating the limitations imposed by the system clock. One single off- or on-chip time-to-digital converter is able to perform digitization of several sensors in a single operation. These features allow for a simplified approach for designers wanting to intertwine a multi-purpose sensor network with their application logic. Employed as a temperature sensor, it has been measured to have an error of ±0.67 °C, over the range of 20–100 °C, employing 20 logic elements with a 2-point calibration. PMID:24361927

  17. The Inner Processes Underlying Vocational Development.

    ERIC Educational Resources Information Center

    Bujold, Charles E.; And Others

    Educational and occupational choices may be considered, from a developmental point of view, as long-term problems whose solutions imply a number of tasks. These tasks sequentially might be called exploration, crystallization, specification, and implementation. What are the processes involved in these tasks? What are the abilities which make…

  18. Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.

    PubMed

    O'Connor, B P

    1999-11-01

    This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
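
    A minimal Python sketch of the core lag-sequential statistics (transitional frequencies, transitional probabilities, and Allison-Liker adjusted residuals) is given below. It is a generic illustration of the computations named in the abstract, not a port of the published SAS/SPSS programs, and it omits the stationarity, homogeneity, and kappa statistics.

```python
import numpy as np

def lag_sequential_stats(codes, lag=1):
    """Transitional frequencies, transitional probabilities, and adjusted
    residuals for a stream of categorical codes at the given lag."""
    cats = sorted(set(codes))
    idx = {c: i for i, c in enumerate(cats)}
    k = len(cats)
    freq = np.zeros((k, k))
    for given, target in zip(codes[:-lag], codes[lag:]):
        freq[idx[given], idx[target]] += 1
    n = freq.sum()
    row = freq.sum(axis=1, keepdims=True)      # totals for the "given" codes
    col = freq.sum(axis=0, keepdims=True)      # totals for the "target" codes
    expected = row @ col / n
    prob = freq / np.where(row == 0, 1, row)   # transitional probabilities
    # Allison-Liker adjusted residuals
    adjusted = (freq - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
    return cats, freq, prob, adjusted

cats, freq, prob, adj = lag_sequential_stats(list("ABABBCABCA"))
```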

  19. High data rate coding for the space station telemetry links.

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.

  20. Meta-heuristic algorithm to solve two-sided assembly line balancing problems

    NASA Astrophysics Data System (ADS)

    Wirawan, A. D.; Maruf, A.

    2016-02-01

    A two-sided assembly line is a set of sequential workstations where task operations can be performed on two sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for a given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The result of the proposed algorithm shows a reduction in workstations and indicates a negative correlation between the emergence point of the objective function value and the size of the population used.

  1. Methods of Real Time Image Enhancement of Flash LIDAR Data and Navigating a Vehicle Using Flash LIDAR Data

    NASA Technical Reports Server (NTRS)

    Vanek, Michael D. (Inventor)

    2014-01-01

    A method for creating a digital elevation map ("DEM") from frames of flash LIDAR data includes generating a first distance R_i from a first detector i to a first point on a surface S_i. After defining a map with a mesh Θ having cells k, a first array S(k), a second array M(k), and a third array D(k) are initialized. The first array corresponds to the surface, the second array corresponds to the elevation map, and the third array D(k) receives an output for the DEM. The surface is projected onto the mesh Θ, so that a second distance R_k from a second point on the mesh Θ to the detector can be found. From this, a height may be calculated, which permits the generation of a digital elevation map. Also, using sequential frames of flash LIDAR data, vehicle control is possible using an offset between successive frames.
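
    The sketch below shows one way the per-frame projection might look in code: detector ranges become 3-D surface points, which are binned into grid cells, keeping the highest return per cell as the elevation. The array names, the max-per-cell rule, and the geometry conventions are assumptions for illustration only and are not taken from the patent.

```python
import numpy as np

def frame_to_dem(ranges, directions, sensor_pos, cell_size, grid_shape):
    """Project one flash-LIDAR frame into a digital elevation map.

    ranges: (N,) distances R_i per detector; directions: (N, 3) unit rays in
    the world frame; sensor_pos: (3,) vehicle position."""
    ranges = np.asarray(ranges, dtype=float)
    directions = np.asarray(directions, dtype=float)
    sensor_pos = np.asarray(sensor_pos, dtype=float)
    points = sensor_pos + ranges[:, None] * directions   # surface points S_i
    dem = np.full(grid_shape, -np.inf)
    cols = (points[:, 0] // cell_size).astype(int)
    rows = (points[:, 1] // cell_size).astype(int)
    inside = (rows >= 0) & (rows < grid_shape[0]) & (cols >= 0) & (cols < grid_shape[1])
    for r, c, z in zip(rows[inside], cols[inside], points[inside, 2]):
        dem[r, c] = max(dem[r, c], z)                     # keep highest return per cell
    return dem
```

    Differencing the DEMs (or the projected point sets) of successive frames would then give the frame-to-frame offset the abstract mentions for vehicle control.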

  2. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitation to give readers a better understanding of utilizing those parallel approaches for gridding purposes.

  3. A new paper-based platform technology for point-of-care diagnostics.

    PubMed

    Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad

    2014-10-21

    Currently, lateral flow immunoassays (LFIAs) are not able to perform complex multi-step immunodetection tests because of their inability to introduce multiple reagents to the detection area autonomously and in a controlled manner. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure to form the fluidic network. Unlike the existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and the timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device which autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase-based enzyme-linked immunosorbent assay (ELISA) protocol with rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fm was obtained.

  4. Eyewitness accuracy rates in sequential and simultaneous lineup presentations: a meta-analytic comparison.

    PubMed

    Steblay, N; Dysart, J; Fulero, S; Lindsay, R C

    2001-10-01

    Most police lineups use a simultaneous presentation technique in which eyewitnesses view all lineup members at the same time. Lindsay and Wells (R. C. L. Lindsay & G. L. Wells, 1985) devised an alternative procedure, the sequential lineup, in which witnesses view one lineup member at a time and decide whether or not that person is the perpetrator prior to viewing the next lineup member. The present work uses the technique of meta-analysis to compare the accuracy rates of these presentation styles. Twenty-three papers were located (9 published and 14 unpublished), providing 30 tests of the hypothesis and including 4,145 participants. Results showed that identification of perpetrators from target-present lineups occurs at a higher rate from simultaneous than from sequential lineups. However, this difference largely disappears when moderator variables approximating real world conditions are considered. Also, correct rejection rates were significantly higher for sequential than simultaneous lineups and this difference is maintained or increased by greater approximation to real world conditions. Implications of these findings are discussed.

  5. Decision making and sequential sampling from memory

    PubMed Central

    Shadlen, Michael N.; Shohamy, Daphna

    2016-01-01

    Decisions take time, and as a rule more difficult decisions take more time. But this only raises the question of what consumes the time. For decisions informed by a sequence of samples of evidence, the answer is straightforward: more samples are available with more time. Indeed the speed and accuracy of such decisions are explained by the accumulation of evidence to a threshold or bound. However, the same framework seems to apply to decisions that are not obviously informed by sequences of evidence samples. Here we proffer the hypothesis that the sequential character of such tasks involves retrieval of evidence from memory. We explore this hypothesis by focusing on value-based decisions and argue that mnemonic processes can account for regularities in choice and decision time. We speculate on the neural mechanisms that link sampling of evidence from memory to circuits that represent the accumulated evidence bearing on a choice. We propose that memory processes may contribute to a wider class of decisions that conform to the regularities of choice-reaction time predicted by the sequential sampling framework. PMID:27253447

  6. Early math matters: kindergarten number competence and later mathematics outcomes.

    PubMed

    Jordan, Nancy C; Kaplan, David; Ramineni, Chaitanya; Locuniak, Maria N

    2009-05-01

    Children's number competencies over 6 time points, from the beginning of kindergarten to the middle of 1st grade, were examined in relation to their mathematics achievement over 5 later time points, from the end of 1st grade to the end of 3rd grade. The relation between early number competence and mathematics achievement was strong and significant throughout the study period. A sequential process growth curve model showed that kindergarten number competence predicted rate of growth in mathematics achievement between 1st and 3rd grades as well as achievement level through 3rd grade. Further, rate of growth in early number competence predicted mathematics performance level in 3rd grade. Although low-income children performed more poorly than their middle-income counterparts in mathematics achievement and progressed at a slower rate, their performance and growth were mediated through relatively weak kindergarten number competence. Similarly, the better performance and faster growth of children who entered kindergarten at an older age were explained by kindergarten number competence. The findings show the importance of early number competence for setting children's learning trajectories in elementary school mathematics.

  7. A new similarity index for nonlinear signal analysis based on local extrema patterns

    NASA Astrophysics Data System (ADS)

    Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher

    2018-02-01

    Common similarity measures of time domain signals such as cross-correlation and Symbolic Aggregate approXimation (SAX) are not appropriate for nonlinear signal analysis. This is because of the high sensitivity of nonlinear systems to initial points. Therefore, a similarity measure for nonlinear signal analysis must be invariant to initial points and quantify the similarity by considering the main dynamics of signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding time information of local extrema as well as fuzzifying quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis, which extends the SBLE method. These new features provide more information about signals, and fuzzification reduces noise sensitivity. A number of practical tests were performed to demonstrate the ability of the method in nonlinear signal clustering and classification on synthetic data. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index to demonstrate the method's potential as a real-world application tool.

  8. Statistical Attitude Determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2010-01-01

    All spacecraft require attitude determination at some level of accuracy. This can be a very coarse requirement of tens of degrees, in order to point solar arrays at the sun, or a very fine requirement in the milliarcsecond range, as required by the Hubble Space Telescope. A toolbox of attitude determination methods, applicable across this wide range, has been developed over the years. There have been many advances in the thirty years since the publication of Reference, but the fundamentals remain the same. One significant change is that onboard attitude determination has largely superseded ground-based attitude determination, due to the greatly increased power of onboard computers. The availability of relatively inexpensive radiation-hardened microprocessors has led to the development of "smart" sensors, with autonomous star trackers being the first spacecraft application. Another new development is attitude determination using interferometry of radio signals from the Global Positioning System (GPS) constellation. This article reviews both the classic material and these newer developments at approximately the level of that reference, with emphasis on methods suitable for use onboard a spacecraft. We discuss both "single frame" methods that are based on measurements taken at a single point in time, and sequential methods that use information about spacecraft dynamics to combine the information from a time series of measurements.

  9. Early Math Matters: Kindergarten Number Competence and Later Mathematics Outcomes

    PubMed Central

    Jordan, Nancy C.; Kaplan, David; Ramineni, Chaitanya; Locuniak, Maria N.

    2009-01-01

    Children’s number competencies over 6 time points, from the beginning of kindergarten to the middle of 1st grade, were examined in relation to their mathematics achievement over 5 later time points, from the end of 1st grade to the end of 3rd grade. The relation between early number competence and mathematics achievement was strong and significant throughout the study period. A sequential process growth curve model showed that kindergarten number competence predicted rate of growth in mathematics achievement between 1st and 3rd grades as well as achievement level through 3rd grade. Further, rate of growth in early number competence predicted mathematics performance level in 3rd grade. Although low-income children performed more poorly than their middle-income counterparts in mathematics achievement and progressed at a slower rate, their performance and growth were mediated through relatively weak kindergarten number competence. Similarly, the better performance and faster growth of children who entered kindergarten at an older age were explained by kindergarten number competence. The findings show the importance of early number competence for setting children’s learning trajectories in elementary school mathematics. PMID:19413436

  10. Intra-individual diagnostic image quality and organ-specific-radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT.

    PubMed

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Groden, Christoph; Henzler, Thomas

    2016-01-01

    To prospectively evaluate image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast-enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1-5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale, and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets, including the eye lens. Subjective image quality of spiral cCT datasets with IR reconstruction level 5 was rated significantly higher compared with the sequential cCT acquisitions (p < 0.0001). Consequently, mean SNR was significantly higher in all spiral datasets (FBP, IR 1-5) when compared with sequential cCT, with a mean SNR improvement of 44.77% (p < 0.0001). Spiral cCT combined with ATCM and IR allows for significant radiation dose reduction, including a reduced eye lens organ dose, when compared to a tilted sequential cCT, while improving subjective and objective image quality.

  11. Sequential Versus Concurrent Trastuzumab in Adjuvant Chemotherapy for Breast Cancer

    PubMed Central

    Perez, Edith A.; Suman, Vera J.; Davidson, Nancy E.; Gralow, Julie R.; Kaufman, Peter A.; Visscher, Daniel W.; Chen, Beiyun; Ingle, James N.; Dakhil, Shaker R.; Zujewski, JoAnne; Moreno-Aspitia, Alvaro; Pisansky, Thomas M.; Jenkins, Robert B.

    2011-01-01

    Purpose NCCTG (North Central Cancer Treatment Group) N9831 is the only randomized phase III trial evaluating trastuzumab added sequentially or used concurrently with chemotherapy in resected stages I to III invasive human epidermal growth factor receptor 2–positive breast cancer. Patients and Methods Patients received doxorubicin and cyclophosphamide every 3 weeks for four cycles, followed by paclitaxel weekly for 12 weeks (arm A), paclitaxel plus sequential trastuzumab weekly for 52 weeks (arm B), or paclitaxel plus concurrent trastuzumab for 12 weeks followed by trastuzumab for 40 weeks (arm C). The primary end point was disease-free survival (DFS). Results Comparison of arm A (n = 1,087) and arm B (n = 1,097), with 6-year median follow-up and 390 events, revealed 5-year DFS rates of 71.8% and 80.1%, respectively. DFS was significantly increased with trastuzumab added sequentially to paclitaxel (log-rank P < .001; arm B/arm A hazard ratio [HR], 0.69; 95% CI, 0.57 to 0.85). Comparison of arm B (n = 954) and arm C (n = 949), with 6-year median follow-up and 313 events, revealed 5-year DFS rates of 80.1% and 84.4%, respectively. There was an increase in DFS with concurrent trastuzumab and paclitaxel relative to sequential administration (arm C/arm B HR, 0.77; 99.9% CI, 0.53 to 1.11), but the P value (.02) did not cross the prespecified O'Brien-Fleming boundary (.00116) for the interim analysis. Conclusion DFS was significantly improved with 52 weeks of trastuzumab added to adjuvant chemotherapy. On the basis of a positive risk-benefit ratio, we recommend that trastuzumab be incorporated into a concurrent regimen with taxane chemotherapy as an important standard-of-care treatment alternative to a sequential regimen. PMID:22042958

  12. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
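
    To make the notion of cyclic control concrete, the sketch below sequentially projects a point onto the half-spaces of a linear feasibility problem, cycling through the constraints until a full pass makes no change. It is a plain cyclic projection scheme for illustration; ART3 itself uses additional interior/reflection rules, and the paper's transformation concerns the control (the order in which constraints are visited), not this basic projection step.

```python
import numpy as np

def cyclic_projection(A, b, x0, sweeps=100, tol=1e-9):
    """Find a point satisfying a_i . x <= b_i for all i by sequentially
    projecting onto each violated half-space, with cyclic control."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        changed = False
        for a_i, b_i in zip(A, b):            # one cyclic pass over the constraints
            violation = a_i @ x - b_i
            if violation > tol:
                x -= violation * a_i / (a_i @ a_i)   # orthogonal projection
                changed = True
        if not changed:                       # a feasible point has been found
            return x
    return x
```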

  13. Estimating the reliability of eyewitness identifications from police lineups

    PubMed Central

    Wixted, John T.; Mickes, Laura; Dunn, John C.; Clark, Steven E.; Wells, William

    2016-01-01

    Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure. PMID:26699467

  14. Estimating the reliability of eyewitness identifications from police lineups.

    PubMed

    Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William

    2016-01-12

    Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.

  15. United Kingdom national paediatric bilateral cochlear implant audit: preliminary results.

    PubMed

    Cullington, Helen; Bele, Devyanee; Brinton, Julie; Lutman, Mark

    2013-11-01

    Prior to 2009, United Kingdom (UK) public funding was mainly only available for children to receive unilateral cochlear implants. In 2009, the National Institute for Health and Care Excellence published guidance for cochlear implantation following their review. According to these guidelines, all suitable children are eligible to have simultaneous bilateral cochlear implants or a sequential bilateral cochlear implant if they had received the first before the guidelines were published. Fifteen UK cochlear implant centres formed a consortium to carry out a multi-centre audit. The audit involves collecting data from simultaneously and sequentially implanted children at four intervals: before bilateral cochlear implants or before the sequential implant, 1, 2, and 3 years after bilateral implants. The measures include localization, speech recognition in quiet and background noise, speech production, listening, vocabulary, parental perception, quality of life, and surgical data including complications. The audit has now passed the 2-year point, and data have been received on 850 children. This article provides a first view of some data received up until March 2012.

  16. Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares

    NASA Astrophysics Data System (ADS)

    Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen

    2018-07-01

    The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product with a 15 min sampling interval and an update every 6 h provided by the International GNSS Service (IGS) is only 3 ns, which cannot meet the needs of all real-time PPP applications. Real-time estimation of high-rate satellite clock offsets is an efficient method for improving the accuracy. In this paper, a sequential least squares method for estimating real-time satellite clock offsets at a high sample rate is proposed; it improves computational speed by applying optimized sparse matrix operations to the normal equations and by using special measures to take full advantage of modern computing power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation with a 1 s sample rate. The results show that the amount of time taken to process a single epoch is about 0.12 s using 28 stations. The Standard Deviation (STD) and Root Mean Square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns respectively when compared to German Research Centre for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated. The RMSs of the real-time BDS kinematic PPP in east, north, and vertical components are 7.6 cm, 6.4 cm and 19.6 cm respectively. The method is also applied to the Global Positioning System (GPS) with a 10 s sample rate, and the computational time of most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively. Accuracies of 5.6 cm, 2.6 cm and 7.9 cm in east, north, and vertical components are achieved for the real-time GPS kinematic PPP.
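
    At its core, sequential least squares accumulates the normal equations epoch by epoch and solves them on demand, which is what the generic sketch below shows. It deliberately omits the paper's optimized sparse-matrix handling, parameter elimination, and clock-datum details; the names and structure are illustrative assumptions.

```python
import numpy as np

class SequentialLeastSquares:
    """Accumulate normal equations epoch by epoch and solve on demand."""

    def __init__(self, n_params):
        self.N = np.zeros((n_params, n_params))   # normal matrix sum(A^T W A)
        self.U = np.zeros(n_params)               # right-hand side sum(A^T W y)

    def add_epoch(self, A, y, w=1.0):
        """A: design matrix for this epoch; y: observed-minus-computed vector."""
        self.N += w * A.T @ A
        self.U += w * A.T @ y

    def solve(self):
        """Current estimate using all epochs accumulated so far."""
        return np.linalg.solve(self.N, self.U)
```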

  17. Sequential Data Assimilation for Seismicity: a Proof of Concept

    NASA Astrophysics Data System (ADS)

    van Dinther, Y.; Fichtner, A.; Kuensch, H. R.

    2015-12-01

    Our physical understanding and probabilistic forecasting ability of earthquakes is significantly hampered by limited indications of the state of stress and strength on faults and their governing parameters. Using the sequential data assimilation framework developed in meteorology and oceanography (e.g., Evensen, JGR, 1994) and a seismic cycle forward model based on Navier-Stokes partial differential equations (van Dinther et al., JGR, 2013), we show that such information, with its uncertainties, is within reach, at least for laboratory setups. We aim to provide the first thorough proof of concept for seismicity-related PDE applications via a perfect model test of seismic cycles in a simplified wedge-like subduction setup. By evaluating the performance with respect to known numerical input and output, we aim to answer whether there is any probabilistic forecast value for this laboratory-like setup, which and how many parameters can be constrained, and how much data in both space and time would be needed to do so. Thus far, our implementation of an Ensemble Kalman Filter has demonstrated that probabilistic estimates of both the state of stress and strength on a megathrust fault can be obtained and utilized even when assimilating surface velocity data at a single point in time and space. An ensemble-based error covariance matrix containing velocities, stresses and pressure links surface velocity observations to fault stresses and strengths well enough to update fault coupling accordingly. Depending on what the synthetic data show, coseismic events can then be triggered or inhibited.
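
    For orientation, the analysis step of a stochastic Ensemble Kalman Filter (in the spirit of Evensen, 1994, cited above) can be sketched as below. This is a textbook linear-observation version, not the seismic-cycle implementation described in the abstract; the observation operator H and covariance R are assumed inputs.

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng=None):
    """Stochastic EnKF analysis step.

    ensemble: (m, n_state) forecast states; obs: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs) obs covariance."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(ensemble, dtype=float)
    Hx = X @ H.T
    X_anom = X - X.mean(axis=0)
    Hx_anom = Hx - Hx.mean(axis=0)
    m = X.shape[0]
    P_xy = X_anom.T @ Hx_anom / (m - 1)            # state-observation cross covariance
    P_yy = Hx_anom.T @ Hx_anom / (m - 1) + R       # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=m)
    return X + (perturbed - Hx) @ K.T              # updated (analysis) ensemble
```

    In the setup described above, the state vector would gather velocities, stresses and pressure, so that a surface-velocity observation updates the fault stress and strength through the ensemble covariances.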

  18. Focusing light through random photonic layers by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

    The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years owing to the rapid development of spatial light modulators. The existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, possibly leading to local maxima during the optimization. In whole-element optimization approaches, all elements are employed in the optimization, so the signal-to-noise ratio is improved. However, because more randomness is introduced into the process, these optimizations take more time to converge than the single-element-based approaches. Drawing on the advantages of both single-element-based and whole-element optimization approaches, we propose the FEDA approach. Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep-tissue imaging.

  19. Effects of scalding method and sequential tanks on broiler processing wastewater loadings

    USDA-ARS?s Scientific Manuscript database

    The effects of scalding time and temperature, and of sequential scalding tanks, were evaluated based on their impact on poultry processing wastewater (PPW) stream loading rates following the slaughter of commercially raised broilers. On 3 separate weeks (trials), broilers were obtained following feed withdrawa...

  20. Optimal design and use of retry in fault tolerant real-time computer systems

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1983-01-01

    A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations that minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the characteristics of faults are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on the Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second approach. Numerical examples in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull, and gamma distributions, are presented. These are standard distributions commonly used for the modeling and analysis of faults.

  1. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    PubMed

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
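
    One period of the procedure can be pictured as in the sketch below: model probabilities are flattened with a forgetting factor, updated with each model's one-step-ahead predictive likelihood, and then pruned to the models inside Occam's window. This is a stylized sketch of the idea; the dynamic expansion of the window (adding neighbouring models back in) described in the paper is omitted, and the forgetting factor and cutoff are illustrative values.

```python
def dma_step(model_probs, pred_likelihoods, forgetting=0.99, occam_cut=0.01):
    """One Dynamic Model Averaging update with an Occam's-window prune.

    model_probs: dict model_id -> probability from the previous period.
    pred_likelihoods: dict model_id -> p(y_t | model, data up to t-1)."""
    # Forgetting-factor prediction step (slightly flattens the probabilities)
    pred = {k: p ** forgetting for k, p in model_probs.items()}
    z = sum(pred.values())
    pred = {k: p / z for k, p in pred.items()}
    # Bayesian update with the one-step-ahead predictive likelihoods
    post = {k: pred[k] * pred_likelihoods[k] for k in pred}
    z = sum(post.values())
    post = {k: p / z for k, p in post.items()}
    # Occam's window: keep only models not too far below the current best one
    best = max(post.values())
    kept = {k: p for k, p in post.items() if p / best >= occam_cut}
    z = sum(kept.values())
    return {k: p / z for k, p in kept.items()}
```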

  2. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    PubMed Central

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859

  3. Episodic-like memory trace in awake replay of hippocampal place cell activity sequences.

    PubMed

    Takahashi, Susumu

    2015-10-20

    Episodic memory retrieval of events at a specific place and time is effective for future planning. Sequential reactivation of the hippocampal place cells along familiar paths while the animal pauses is well suited to such a memory retrieval process. It is, however, unknown whether this awake replay represents events occurring along the path. Using a subtask switching protocol in which the animal experienced three subtasks as 'what' information in a maze, I here show that the replay represents a trial type, consisting of path and subtask, in terms of neuronal firing timings and rates. The actual trial type to be rewarded could only be reliably predicted from replays that occurred at the decision point. This trial-type representation implies that not only 'where and when' but also 'what' information is contained in the replay. This result supports the view that awake replay is an episodic-like memory retrieval process.

  4. Integrated optical detection of autonomous capillary microfluidic immunoassays: a hand-held point-of-care prototype.

    PubMed

    Novo, P; Chu, V; Conde, J P

    2014-07-15

    The miniaturization of biosensors using microfluidics has potential in enabling the development of point-of-care devices, with the added advantages of reduced time and cost of analysis with limits of detection comparable to those obtained through traditional laboratory techniques. Interfacing microfluidic devices with the external world can be difficult, especially in aspects involving fluid handling and the need for simple sample insertion that avoids special equipment or trained personnel. In this work we present a point-of-care prototype system by integrating capillary microfluidics with a microfabricated photodiode array and electronic instrumentation into a hand-held unit. The capillary microfluidic device is capable of autonomous and sequential fluid flow, including control of the average fluid velocity at any given point of the analysis. To demonstrate the functionality of the prototype, a model chemiluminescence ELISA was performed. The performance of the integrated optical detection in the point-of-care prototype is equal to that obtained with traditional bench-top instrumentation. The photodiode signals were acquired, displayed and processed by a simple graphical user interface using a computer connected to the microcontroller through USB. The prototype performed integrated chemiluminescence ELISA detection in about 15 min with a limit of detection of ≈2 nM and an antibody-antigen affinity constant of ≈2×10⁷ M⁻¹.

  5. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator when the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
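
    Under the stable planar-wavefront assumption, a simpler (non-probabilistic) estimate of the conduction velocity can be obtained by fitting the plane t_i = t0 + s·x_i to the electrode positions and activation times, as sketched below. The paper's maximum likelihood estimator additionally handles the unknown synchronization times between sequentially recorded sites and the known AT error variances, which this sketch ignores.

```python
import numpy as np

def planar_ccv(positions, activation_times):
    """Fit a planar wavefront t_i = t0 + s . x_i to electrode positions (N, 2)
    and activation times (N,), and return the conduction speed 1/|s| together
    with the propagation direction (unit slowness vector)."""
    positions = np.asarray(positions, dtype=float)
    activation_times = np.asarray(activation_times, dtype=float)
    A = np.hstack([positions, np.ones((len(positions), 1))])
    coef, *_ = np.linalg.lstsq(A, activation_times, rcond=None)
    slowness = coef[:2]                          # seconds per unit length
    speed = 1.0 / np.linalg.norm(slowness)
    direction = slowness / np.linalg.norm(slowness)
    return speed, direction
```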

  6. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
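
    A minimal numerical sketch of the generalized Lotka-Volterra dynamics discussed above is given below; with an asymmetric interaction matrix the trajectory visits a sequence of saddle states (winnerless competition) before settling. The integrator, parameter handling, and clipping are illustrative choices, not the authors' setup.

```python
import numpy as np

def glv_trajectory(a0, growth, interactions, dt=0.01, steps=20000):
    """Euler integration of the generalized Lotka-Volterra system
        da_i/dt = a_i * (growth_i - sum_j interactions[i, j] * a_j).
    Returns the (steps, n_species) trajectory."""
    a = np.asarray(a0, dtype=float).copy()
    growth = np.asarray(growth, dtype=float)
    interactions = np.asarray(interactions, dtype=float)
    out = np.empty((steps, len(a)))
    for t in range(steps):
        a += dt * a * (growth - interactions @ a)
        a = np.clip(a, 1e-12, None)      # keep populations non-negative
        out[t] = a
    return out
```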

  7. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model.

    PubMed

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.

  8. Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser

    DOE PAGES

    Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech; ...

    2015-11-27

    In this study, hard X-ray free electron lasers allow, for the first time, access to dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage-ring-based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well-established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot-based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.

  9. Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech

    In this study, hard X-ray free electron lasers allow, for the first time, access to dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage-ring-based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well-established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot-based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.

  10. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.

  11. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
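
    The RSA half of the model can be sketched as a one-dimensional "car parking" simulation in which one placement attempt is made per simulation step, as below. Converting attempt number into physical time requires coupling to the diffusive flux of molecules toward the surface, as the paper describes; that coupling is omitted here, and the geometry (1-D line, unit-length particles) is an illustrative simplification.

```python
import numpy as np

def rsa_1d(line_length=100.0, particle_size=1.0, max_attempts=100_000, seed=0):
    """Random sequential adsorption of fixed-size particles on a line:
    one random placement attempt per RSA time step, rejected if it overlaps
    an already adsorbed particle. Returns coverage versus attempt number."""
    rng = np.random.default_rng(seed)
    placed = []
    coverage = np.empty(max_attempts)
    for attempt in range(max_attempts):
        x = rng.uniform(0.0, line_length - particle_size)
        if all(abs(x - y) >= particle_size for y in placed):
            placed.append(x)                       # attempt accepted
        coverage[attempt] = len(placed) * particle_size / line_length
    return coverage
```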

  12. Low dimensional temporal organization of spontaneous eye blinks in adults with developmental disabilities and stereotyped movement disorder.

    PubMed

    Lee, Mei-Hua; Bodfish, James W; Lewis, Mark H; Newell, Karl M

    2010-01-01

    This study investigated the mean rate and time-dependent sequential organization of spontaneous eye blinks in adults with intellectual and developmental disability (IDD) and individuals from this group who were additionally categorized with stereotypic movement disorder (IDD+SMD). The mean blink rate was lower in the IDD+SMD group than in the IDD group, and both of these groups had a lower blink rate than a contrast group of healthy adults. In the IDD group, the n to n+1 sequential organization over time of the eye-blink durations showed a stronger compensatory organization than in the contrast group, suggesting decreased complexity/dimensionality of eye-blink behavior. A very low blink rate (and thus insufficient time series data) precluded analysis of time-dependent sequential properties in the IDD+SMD group. These findings support the hypothesis that both IDD and SMD are associated with a reduction in the dimension and adaptability of movement behavior and that this may serve as a risk factor for the expression of abnormal movements.

  13. The application of intraoperative transit time flow measurement to accurately assess anastomotic quality in sequential vein grafting

    PubMed Central

    Yu, Yang; Zhang, Fan; Gao, Ming-Xin; Li, Hai-Tao; Li, Jing-Xing; Song, Wei; Huang, Xin-Sheng; Gu, Cheng-Xiong

    2013-01-01

    OBJECTIVES Intraoperative transit time flow measurement (TTFM) is widely used to assess anastomotic quality in coronary artery bypass grafting (CABG). However, in sequential vein grafting, the flow characteristics collected by the conventional TTFM method are usually associated with total graft flow and might not accurately indicate the quality of every distal anastomosis in a sequential graft. The purpose of our study was to examine a new TTFM method that could assess the quality of each distal anastomosis in a sequential graft more reliably than the conventional TTFM approach. METHODS Two TTFM methods were tested in 84 patients who underwent sequential saphenous off-pump CABG in Beijing An Zhen Hospital between April and August 2012. In the conventional TTFM method, normal blood flow in the sequential graft was maintained during the measurement, and the flow probe was placed a few centimetres above the anastomosis to be evaluated. In the new method, blood flow in the sequential graft was temporarily reduced during the measurement by placing an atraumatic bulldog clamp at the graft a few centimetres distal to the anastomosis to be evaluated, while the position of the flow probe remained the same as in the conventional method. This new TTFM method was named the flow reduction TTFM. Graft flow parameters measured by both methods were compared. RESULTS Compared with the conventional TTFM, the flow reduction TTFM resulted in significantly lower mean graft blood flow (P < 0.05); in contrast, it yielded a significantly higher pulsatility index (P < 0.05). Diastolic filling was not significantly different between the two methods and was >50% in both cases. Interestingly, the flow reduction TTFM identified two defective middle distal anastomoses that the conventional TTFM failed to detect. Graft flows near the defective distal anastomoses were improved substantially after revision. CONCLUSIONS In this study, we found that temporary reduction of graft flow during TTFM seemed to enhance the sensitivity of TTFM to less-than-critical anastomotic defects in a sequential graft and to improve the overall accuracy of the intraoperative assessment of anastomotic quality in sequential vein grafting. PMID:24000314

  14. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    PubMed

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study represented for the first time the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Color Breakup In Sequentially-Scanned LC Displays

    NASA Technical Reports Server (NTRS)

    Arend, L.; Lubin, J.; Gille, J.; Larimer, J.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    In sequentially-scanned liquid-crystal displays the chromatic components of color pixels are distributed in time. For such displays eye, head, display, and image-object movements can cause the individual color elements to be visible. We analyze conditions (scan designs, types of eye movement) likely to produce color breakup.

  16. Individuation of Pairs of Objects in Infancy

    ERIC Educational Resources Information Center

    Leslie, Alan M.; Chen, Marian L.

    2007-01-01

    Looking-time studies examined whether 11-month-old infants can individuate two pairs of objects using only shape information. In order to test individuation, the object pairs were presented sequentially. Infants were familiarized either with the sequential pairs, disk-triangle/disk-triangle (XY/XY), whose shapes differed within but not across…

  17. To Catch a Thief.

    ERIC Educational Resources Information Center

    Lindsay, R. C. L.

    1998-01-01

    Describes the differences between normal and sequential lineups and their effects on eyewitnesses to crime. After a staged crime in a college lab, students were shown photographs in either normal lineup style or sequential style. 35% of eyewitnesses shown all photographs at the same time mistakenly picked an innocent person. Only 18% shown…

  18. Probabilistic Guidance of Swarms using Sequential Convex Programming

    DTIC Science & Technology

    2014-01-01

    quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time...in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem using

  19. Sequential Dependencies in Driving

    ERIC Educational Resources Information Center

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  20. A mixed-mode traffic assignment model with new time-flow impedance function

    NASA Astrophysics Data System (ADS)

    Lin, Gui-Hua; Hu, Yu; Zou, Yuan-Yang

    2018-01-01

    Recently, with the wide adoption of electric vehicles, transportation networks have shown different characteristics and been further developed. In this paper, we present a new time-flow impedance function, which may be more realistic than the existing time-flow impedance functions. Based on this new impedance function, we present an optimization model for a mixed-mode traffic network in which battery electric vehicles (BEVs) and gasoline vehicles (GVs) are chosen. We suggest two approaches to handle the model: One is to use the interior point (IP) algorithm and the other is to employ the sequential quadratic programming (SQP) algorithm. Three numerical examples are presented to illustrate the efficiency of these approaches. In particular, our numerical results show that more travelers prefer to choose BEVs when the distance limit of BEVs is long enough and the unit operating cost of GVs is higher than that of BEVs, and the SQP algorithm is faster than the IP algorithm.
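    As a toy illustration of the SQP approach only: the paper's new impedance function is not reproduced in the abstract, so a standard BPR-style time-flow function stands in for it, and the sketch assigns a fixed demand to two parallel routes by minimizing the Beckmann objective with SciPy's SLSQP solver. All link parameters are made up for the example.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Placeholder BPR-style impedance: t(x) = t0 * (1 + 0.15 * (x / c)**4).
    t0 = np.array([10.0, 15.0])      # free-flow times of two parallel routes (min)
    cap = np.array([40.0, 60.0])     # route capacities (veh/min)
    demand = 80.0                    # total demand to be assigned

    def beckmann(x):
        # User-equilibrium objective: sum of integrals of the impedance functions.
        return np.sum(t0 * (x + 0.03 * x**5 / cap**4))

    res = minimize(
        beckmann,
        x0=np.full(2, demand / 2),
        method="SLSQP",                               # sequential quadratic programming
        bounds=[(0, None), (0, None)],
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - demand}],
    )
    flows = res.x
    times = t0 * (1 + 0.15 * (flows / cap) ** 4)
    print("equilibrium flows:", flows.round(2), "route times:", times.round(2))
    ```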

  1. Real-time autocorrelator for fluorescence correlation spectroscopy based on graphical-processor-unit architecture: method, implementation, and comparative studies

    NASA Astrophysics Data System (ADS)

    Laracuente, Nicholas; Grossman, Carl

    2013-03-01

    We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general purpose computing with inexpensive GPU hardware. These devices are more suited for emulating hardware autocorrelators than traditional CPU-based software applications by emphasizing parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a 3.2 GHz Intel i5 CPU-based computer running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
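    The sketch below is a plain NumPy reference for the multi-tau binning scheme mentioned above, the kind of CPU baseline a GPU correlator would be checked against; the channel count, number of levels, and Poisson test signal are arbitrary choices for the example, not details of the published software.

    ```python
    import numpy as np

    def multi_tau_autocorr(counts, m=16, levels=8):
        """Multi-tau autocorrelation: at each level the trace is re-binned by a
        factor of 2 and correlated over m lag channels, giving quasi-logarithmic
        lag spacing with a configurable points-per-bin size."""
        x = np.asarray(counts, dtype=float)
        lags, g2 = [], []
        dt = 1
        for level in range(levels):
            start = 0 if level == 0 else m // 2       # avoid duplicating short lags
            for k in range(start, m):
                if k >= len(x):
                    break
                a, b = (x, x) if k == 0 else (x[:-k], x[k:])
                g2.append(np.mean(a * b) / (np.mean(a) * np.mean(b)))
                lags.append(k * dt)
            n = len(x) // 2 * 2
            x = 0.5 * (x[:n:2] + x[1:n:2])            # coarsen the trace by 2
            dt *= 2
        return np.array(lags), np.array(g2)

    rng = np.random.default_rng(1)
    lags, g2 = multi_tau_autocorr(rng.poisson(3.0, 100_000))
    print(lags[:5], g2[:5].round(3))                  # uncorrelated counts: g2 ~ 1
    ```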

  2. Research on early-warning index of the spatial temperature field in concrete dams.

    PubMed

    Yang, Guang; Gu, Chongshi; Bao, Tengfei; Cui, Zhenming; Kan, Kan

    2016-01-01

    Warning indicators of the dam body's temperature are required for the real-time monitoring of the service conditions of concrete dams to ensure safety and normal operations. Warning theories are traditionally targeted at a single point, which has limitations, and scientific warning theories on the global behavior of the temperature field are non-existent. In this paper, first, the behavior of the temperature field in 3D space was shown to have regional dissimilarity; through the Ward spatial clustering method, the temperature field was divided into regions. Second, the degree of order and degree of disorder of the temperature monitoring points were defined by the probability method. Third, the weight values of the monitoring points of each region were explored via projection pursuit. Fourth, a temperature entropy expression that can describe the degree of order of the spatial temperature field in concrete dams was established. Fifth, the early-warning index of temperature entropy was set up according to the calculated sequential values of temperature entropy. Finally, project cases verified the feasibility of the proposed theories. The early-warning index of temperature entropy is conducive to the improvement of early-warning ability and safety management levels during the operation of high concrete dams.

  3. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    PubMed

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.

  4. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation

    PubMed Central

    Gaudrain, Etienne; Carlyon, Robert P.

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922

  5. Implementation of a web-based medication tracking system in a large academic medical center.

    PubMed

    Calabrese, Sam V; Williams, Jonathan P

    2012-10-01

    Pharmacy workflow efficiencies achieved through the use of an electronic medication-tracking system are described. Medication dispensing turnaround times at the inpatient pharmacy of a large hospital were evaluated before and after transition from manual medication tracking to a Web-based tracking process involving sequential bar-code scanning and real-time monitoring of medication status. The transition was carried out in three phases: (1) a workflow analysis, including the identification of optimal points for medication scanning with hand-held wireless devices, (2) the phased implementation of an automated solution and associated hardware at a central dispensing pharmacy and three satellite locations, and (3) postimplementation data collection to evaluate the impact of the new tracking system and areas for improvement. Relative to the manual tracking method, electronic medication tracking allowed the capture of far more data points, enabling the pharmacy team to delineate the time required for each step of the medication dispensing process and to identify the steps most likely to involve delays. A comparison of baseline and postimplementation data showed substantial reductions in overall medication turnaround times with the use of the Web-based tracking system (time reductions of 45% and 22% at the central and satellite sites, respectively). In addition to more accurate projections and documentation of turnaround times, the Web-based tracking system has facilitated quality-improvement initiatives. Implementation of an electronic tracking system for monitoring the delivery of medications provided a comprehensive mechanism for calculating turnaround times and allowed the pharmacy to identify bottlenecks within the medication distribution system. Altering processes removed these bottlenecks and decreased delivery turnaround times.

  6. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

    Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.

  7. Amantadine Ameliorates Dopamine-Releasing Deficits and Behavioral Deficits in Rats after Fluid Percussion Injury

    PubMed Central

    Huang, Eagle Yi-Kung; Tsui, Pi-Fen; Kuo, Tung-Tai; Tsai, Jing-Jr.; Chou, Yu-Ching; Ma, Hsin-I; Chiang, Yung-Hsiao; Chen, Yuan-Hao

    2014-01-01

    Aims To investigate the role of dopamine in cognitive and motor learning skill deficits after a traumatic brain injury (TBI), we investigated dopamine release and behavioral changes at a series of time points after fluid percussion injury, and explored the potential of amantadine hydrochloride as a chronic treatment to provide behavioral recovery. Materials and Methods In this study, we sequentially investigated dopamine release at the striatum and behavioral changes at 1, 2, 4, 6, and 8 weeks after fluid percussion injury. Rats subjected to 6-Pa cerebral cortical fluid percussion injury were treated by using subcutaneous infusion pumps filled with either saline (sham group) or amantadine hydrochloride, with a release rate of 3.6 mg/kg/hour for 8 weeks. The dopamine-releasing conditions and metabolism were analyzed sequentially by fast scan cyclic voltammetry (FSCV) and high-pressure liquid chromatography (HPLC). Novel object recognition (NOR) and fixed-speed rotarod (FSRR) behavioral tests were used to determine treatment effects on cognitive and motor deficits after injury. Results Sequential dopamine-release deficits were revealed in 6-Pa-fluid-percussion cerebral cortical injured animals. The reuptake rate (tau value) of dopamine in injured animals was prolonged, but the tau value became close to the value for the control group after amantadine therapy. Cognitive and motor learning impairments were evidenced by the NOR and FSRR behavioral tests after injury. Chronic amantadine therapy reversed dopamine-release deficits, and behavioral impairments after fluid percussion injury were ameliorated in the rats treated with amantadine pump infusion. Conclusion Chronic treatment with amantadine hydrochloride can ameliorate dopamine-release deficits as well as cognitive and motor deficits caused by cerebral fluid-percussion injury. PMID:24497943

  8. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  9. Sea ice motion measurements from Seasat SAR images

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Raggam, J.; Elachi, C.; Campbell, W. J.

    1983-01-01

    Data from the Seasat synthetic aperture radar (SAR) experiment are analyzed in order to determine the accuracy of this information for mapping the distribution of sea ice and its motion. Data from observations of sea ice in the Beaufort Sea from seven sequential orbits of the satellite were selected to study the capabilities and limitations of spaceborne radar application to sea-ice mapping. Results show that there is no difficulty in identifying homologue ice features on sequential radar images and the accuracy is entirely controlled by the accuracy of the orbit data and the geometric calibration of the sensor. Conventional radargrammetric methods are found to serve well for satellite radar ice mapping, while ground control points can be used to calibrate the ice location and motion measurements in the cases where orbit data and sensor calibration are lacking. The ice motion was determined to be approximately 6.4 ± 0.5 km/day. In addition, the accuracy of pixel location was found over land areas. The use of one control point in 10,000 sq km produced an accuracy of about ± 150 m, while with a higher density of control points (7 in 1000 sq km) the location accuracy improves to the image resolution of ± 25 m. This is found to be applicable for both optical and digital data.

  10. [Significance of heterogenity in endothelium-dependent vasodilatation occurrence in healthy individuals with or without coronary risk factors].

    PubMed

    Polovina, Marija; Potpara, Tatjana; Giga, Vojislav; Ostojić, Miodrag

    2009-10-01

    Brachial artery flow-mediated dilation (FMD) is extensively used for non-invasive assessment of endothelial function. Traditionally, FMD is calculated as a percent change of arterial diameter from the baseline value at an arbitrary time point after cuff deflation (usually 60 seconds). Considerable individual differences in brachial artery temporal response to hyperemic stimulus have been observed, potentially influenced by the presence of atherosclerotic risk factors (RF). The importance of such differences for the evaluation of endothelial function has not been well established. The aim of the study was to determine the time course of maximal brachial artery endothelium-dependent dilation in healthy adults with and without RF, to explore the correlation of RF with brachial artery temporal response and to evaluate the importance of individual differences in temporal response for the assessment of endothelial function. A total of 115 healthy volunteers were included in the study. Out of them, 58 had no RF (26 men, mean age 44 +/-14 years) and 57 had at least one RF (29 men, mean age 45 +/-14 years). High-resolution color Doppler vascular ultrasound was used for brachial artery imaging. To determine maximal arterial diameter after cuff deflation and the time-point of maximal vasodilation, off-line sequential measurements were performed every 10 seconds from 0 to 240 seconds after cuff release. True maximal FMD value was calculated as a percent change of the true maximal diameter from the baseline, and compared with FMD value calculated assuming that every participant reached maximal dilation at 60 seconds post cuff deflation (FMD60). Correlation of different RF with brachial artery temporal response was assessed. A maximal brachial artery endothelium-dependent vasodilation occurred from 30-120 seconds after cuff release, and the mean time of endothelium-dependent dilation was 68 +/-20 seconds. Individuals without RF had faster endothelium-dependent dilation (mean time 62 +/-17 seconds), and a shorter time-span (30 to 100 seconds), than participants with RF (mean time 75 +/-21 seconds, time-span 40 to 120 seconds) (p < 0.001). Time when the maximal endothelium-dependent dilation occurred was independently associated with age, serum lipid fractions (total cholesterol, LDL and HDL cholesterol), smoking, physical activity and C-reactive protein. True maximal FMD value in the whole group (6.7 +/-3.0%) was significantly higher (p < 0.001) than FMD60 (5.2 +/-3.5%). The same results were demonstrated for individuals with RF (4.9 +/- 1.7% vs 3.1 +/- 2.3%, p < 0.001) and without RF (8.4 +/- 2.9% vs 7.2 +/- 3.2%, p < 0.05). The temporal response of endothelium-dependent dilation is influenced by the presence of coronary RF and is individually heterogeneous. When calculated according to the commonly used approach, i.e. 60 seconds after cuff deflation, FMD is significantly lower than the true maximal FMD. The routinely used measurement time-points for FMD assessment may not be adequate for the detection of true peak vasodilation in individual persons. More precise evaluation of endothelial function can be achieved with sequential measurement of arterial diameter after hyperemic stimulus.
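    The difference between the two read-outs reduces to simple arithmetic on the sequential diameter measurements. The sketch below uses a single hypothetical diameter time course (values invented for illustration) to contrast the true maximal FMD with the fixed 60-second FMD60.

    ```python
    # Hypothetical sequential brachial artery diameters (mm), measured every 10 s
    # from 0 to 240 s after cuff release, for a single participant.
    baseline = 4.10
    times = list(range(0, 250, 10))
    diameters = [4.10, 4.12, 4.18, 4.26, 4.33, 4.38, 4.41, 4.43, 4.44, 4.42,
                 4.39, 4.35, 4.31, 4.27, 4.24, 4.21, 4.19, 4.17, 4.15, 4.14,
                 4.13, 4.12, 4.11, 4.11, 4.10]

    # FMD at each time point: percent change of diameter from baseline.
    fmd = [(d - baseline) / baseline * 100 for d in diameters]

    true_max_fmd = max(fmd)
    t_peak = times[fmd.index(true_max_fmd)]
    fmd_60 = fmd[times.index(60)]            # conventional single time-point value

    print(f"true maximal FMD: {true_max_fmd:.1f}% at {t_peak} s")
    print(f"FMD60 (fixed 60 s read-out): {fmd_60:.1f}%")
    ```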

  11. Effect of sequential pneumatic compression therapy on venous blood velocity, refilling time, pain and quality of life in women with varicose veins: a randomized control study

    PubMed Central

    Yamany, Abeer; Hamdy, Bassant

    2016-01-01

    [Purpose] The aim of this study was to investigate the effects of sequential pneumatic compression therapy on venous blood flow, refilling time, pain level, and quality of life in women with varicose veins. [Subjects and Methods] Twenty-eight females with varicose veins were selected and randomly allocated to a control group and an experimental group. Maximum and mean venous blood velocities, the refilling time, pain by visual analog scale and quality of life by Aberdeen Varicose Veins Questionnaire were measured in all patients before and after six weeks of treatment. Both groups received lower extremity exercises; in addition, patients in the experimental group received sequential pneumatic compression therapy for 30 minutes daily, five days a week for six weeks. [Results] All measured parameters improved significantly in both groups; comparison of post-treatment measurements between groups showed that the maximum and mean blood flow velocity, the pain level, and quality of life were significantly higher in the experimental group compared with the control group. On the other hand, there was no significant difference between groups for refilling time. [Conclusion] Sequential pneumatic compression therapy with the applied parameters was an effective modality for increasing venous blood flow, reducing pain, and improving the quality of life of women with varicose veins. PMID:27512247

  12. Test pattern generation for ILA sequential circuits

    NASA Technical Reports Server (NTRS)

    Feng, YU; Frenzel, James F.; Maki, Gary K.

    1993-01-01

    An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.

  13. A Strategy to Design High-Density Nanoscale Devices utilizing Vapor Deposition of Metal Halide Perovskite Materials.

    PubMed

    Hwang, Bohee; Lee, Jang-Sik

    2017-08-01

    The demand for high memory density has increased due to increasing needs of information storage, such as big data processing and the Internet of Things. Organic-inorganic perovskite materials that show nonvolatile resistive switching memory properties have potential applications as the resistive switching layer for next-generation memory devices, but, for practical applications, these materials should be utilized in high-density data-storage devices. Here, nanoscale memory devices are fabricated by sequential vapor deposition of organolead halide perovskite (OHP) CH₃NH₃PbI₃ layers on wafers perforated with 250 nm via-holes. These devices have bipolar resistive switching properties, and show low-voltage operation, fast switching speed (200 ns), good endurance, and data-retention time >10⁵ s. Moreover, the use of sequential vapor deposition is extended to deposit CH₃NH₃PbI₃ as the memory element in a cross-point array structure. This method to fabricate high-density memory devices could be used for memory cells that occupy large areas, and to overcome the scaling limit of existing methods; it also presents a way to use OHPs to increase memory storage capacity. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. The Use of Mixed Populations of Saccharomyces cerevisiae and S. kudriavzevii to Reduce Ethanol Content in Wine: Limited Aeration, Inoculum Proportions, and Sequential Inoculation

    PubMed Central

    Alonso-del-Real, Javier; Contreras-Ruiz, Alba; Castiglioni, Gabriel L.; Barrio, Eladio; Querol, Amparo

    2017-01-01

    Saccharomyces cerevisiae is the most widespread microorganism responsible for wine alcoholic fermentation. Nevertheless, the wine industry is currently facing new challenges, some of them associated with climate change, which have a negative effect on ethanol content and wine quality. Numerous and varied strategies have been carried out to overcome these concerns. From a biotechnological point of view, the use of alternative non-Saccharomyces yeasts, yielding lower ethanol concentrations and sometimes giving rise to new and interesting aroma, is one of the trendiest approaches. However, S. cerevisiae usually outcompetes other Saccharomyces species due to its better adaptation to the fermentative environment. For this reason, we studied for the first time the use of a Saccharomyces kudriavzevii strain, CR85, for co-inoculations at increasing proportions and sequential inoculations, as well as the effect of aeration, to improve its fermentation performance in order to obtain wines with an ethanol yield reduction. An enhanced competitive performance of S. kudriavzevii CR85 was observed when it represented 90% of the cells present in the inoculum. Furthermore, airflow supply of 20 VVH to the fermentation synergistically improved CR85 endurance and, interestingly, a significant ethanol concentration reduction was achieved. PMID:29118746

  15. The Use of Mixed Populations of Saccharomyces cerevisiae and S. kudriavzevii to Reduce Ethanol Content in Wine: Limited Aeration, Inoculum Proportions, and Sequential Inoculation.

    PubMed

    Alonso-Del-Real, Javier; Contreras-Ruiz, Alba; Castiglioni, Gabriel L; Barrio, Eladio; Querol, Amparo

    2017-01-01

    Saccharomyces cerevisiae is the most widespread microorganism responsible for wine alcoholic fermentation. Nevertheless, the wine industry is currently facing new challenges, some of them associated with climate change, which have a negative effect on ethanol content and wine quality. Numerous and varied strategies have been carried out to overcome these concerns. From a biotechnological point of view, the use of alternative non-Saccharomyces yeasts, yielding lower ethanol concentrations and sometimes giving rise to new and interesting aroma, is one of the trendiest approaches. However, S. cerevisiae usually outcompetes other Saccharomyces species due to its better adaptation to the fermentative environment. For this reason, we studied for the first time the use of a Saccharomyces kudriavzevii strain, CR85, for co-inoculations at increasing proportions and sequential inoculations, as well as the effect of aeration, to improve its fermentation performance in order to obtain wines with an ethanol yield reduction. An enhanced competitive performance of S. kudriavzevii CR85 was observed when it represented 90% of the cells present in the inoculum. Furthermore, airflow supply of 20 VVH to the fermentation synergistically improved CR85 endurance and, interestingly, a significant ethanol concentration reduction was achieved.

  16. Sequential detection of learning in cognitive diagnosis.

    PubMed

    Ye, Sangbeak; Fellouris, Georgios; Culpepper, Steven; Douglas, Jeff

    2016-05-01

    In order to look more closely at the many particular skills examinees utilize to answer items, cognitive diagnosis models have received much attention, and perhaps are preferable to item response models that ordinarily involve just one or a few broadly defined skills, when the objective is to hasten learning. If these fine-grained skills can be identified, a sharpened focus on learning and remediation can be achieved. The focus here is on how to detect when learning has taken place for a particular attribute and efficiently guide a student through a sequence of items to ultimately attain mastery of all attributes while administering as few items as possible. This can be seen as a problem in sequential change-point detection for which there is a long history and a well-developed literature. Though some ad hoc rules for determining learning may be used, such as stopping after M consecutive items have been successfully answered, more efficient methods that are optimal under various conditions are available. The CUSUM, Shiryaev-Roberts and Shiryaev procedures can dramatically reduce the time required to detect learning while maintaining rigorous Type I error control, and they are studied in this context through simulation. Future directions for modelling and detection of learning are discussed. © 2016 The British Psychological Society.
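    For concreteness, here is a minimal one-sided CUSUM on dichotomous item responses; the pre- and post-learning success probabilities (p0, p1), the threshold, and the simulated change point are all assumptions for illustration rather than values from the article.

    ```python
    import math, random

    def cusum_learning(responses, p0=0.3, p1=0.8, threshold=4.0):
        """One-sided CUSUM on Bernoulli item responses: declare learning once the
        cumulative log-likelihood ratio of 'mastered' (p1) vs 'non-mastered' (p0)
        exceeds the threshold; the threshold trades detection delay for Type I error."""
        s = 0.0
        for n, correct in enumerate(responses, start=1):
            p_ratio = (p1 / p0) if correct else ((1 - p1) / (1 - p0))
            s = max(0.0, s + math.log(p_ratio))
            if s >= threshold:
                return n            # item index at which learning is declared
        return None

    random.seed(3)
    pre  = [random.random() < 0.3 for _ in range(15)]   # guessing phase
    post = [random.random() < 0.8 for _ in range(15)]   # after the change point
    print("learning detected at item:", cusum_learning(pre + post))
    ```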

  17. Brief report: Using global positioning system (GPS) enabled cell phones to examine adolescent travel patterns and time in proximity to alcohol outlets.

    PubMed

    Byrnes, Hilary F; Miller, Brenda A; Morrison, Christopher N; Wiebe, Douglas J; Remer, Lillian G; Wiehe, Sarah E

    2016-07-01

    As adolescents gain freedom to explore new environments unsupervised, more time in proximity to alcohol outlets may increase risks for alcohol and marijuana use. This pilot study: 1) Describes variations in adolescents' proximity to outlets by time of day and day of the week, 2) Examines variations in outlet proximity by drinking and marijuana use status, and 3) Tests feasibility of obtaining real-time data to study adolescent proximity to outlets. U.S. adolescents (N = 18) aged 16-17 (50% female) carried GPS-enabled smartphones for one week with their locations tracked. The geographic areas where adolescents spend time, activity spaces, were created by connecting GPS points sequentially and adding spatial buffers around routes. Proximity to outlets was greater during after school and evening hours. Drinkers and marijuana users were in proximity to outlets 1½ to 2 times more than non-users. Findings provide information about where adolescents spend time and times of greatest risk, informing prevention efforts. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. Method for contour extraction for object representation

    DOEpatents

    Skourikhine, Alexei N.; Prasad, Lakshman

    2005-08-30

    Contours are extracted for representing a pixelated object in a background pixel field. An object pixel that is the start of a new contour for the object is located and identified as the first pixel of the new contour. A first contour point is then located on the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered to complete tracing the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.

  19. The impact of multiple memory formation on dendritic complexity in the hippocampus and anterior cingulate cortex assessed at recent and remote time points

    PubMed Central

    Wartman, Brianne C.; Holahan, Matthew R.

    2014-01-01

    Consolidation processes, involving synaptic and systems level changes, are suggested to stabilize memories once they are formed. At the synaptic level, dendritic structural changes are associated with long-term memory storage. At the systems level, memory storage dynamics between the hippocampus and anterior cingulate cortex (ACC) may be influenced by the number of sequentially encoded memories. The present experiment utilized Golgi-Cox staining and neuron reconstruction to examine recent and remote structural changes in the hippocampus and ACC following training on three different behavioral procedures. Rats were trained on one hippocampal-dependent task only (a water maze task), two hippocampal-dependent tasks (a water maze task followed by a radial arm maze task), or one hippocampal-dependent and one non-hippocampal-dependent task (a water maze task followed by an operant conditioning task). Rats were euthanized at either a recent or a remote time point. Brains underwent Golgi-Cox processing and neurons were reconstructed using Neurolucida software (MicroBrightField, Williston, VT, USA). Rats trained on two hippocampal-dependent tasks displayed increased dendritic complexity compared to control rats, in neurons examined in both the ACC and hippocampus at recent and remote time points. Importantly, this behavioral group showed consistent, significant structural differences in the ACC compared to the control group at the recent time point. These findings suggest that taxing the demand placed upon the hippocampus, by training rats on two hippocampal-dependent tasks, engages synaptic and systems consolidation processes in the ACC at an accelerated rate for recent and remote storage of spatial memories. PMID:24795581

  20. Scanning Quantum Cryogenic Atom Microscope

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Kollár, Alicia J.; Taylor, Stephen F.; Turner, Richard W.; Lev, Benjamin L.

    2017-03-01

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed-matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented dc-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (approximately 2 μm) or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly 100 points with an effective field sensitivity of 600 pT/√Hz for each point during the same time as a point-by-point scanner measures these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly 2 orders of magnitude improvement in magnetic flux sensitivity (down to 10⁻⁶ Φ₀/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns in a system where samples may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge-transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
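    The relation between the quoted per-point and effective sensitivities is just white-noise averaging arithmetic, sketched below; it assumes sensitivity scales as one over the square root of integration time, which is how a ~100-point parallel readout turns 6 nT/√Hz into roughly 600 pT/√Hz at equal total scan time.

    ```python
    # For a fixed total measurement time T, a point-by-point scanner spends T/N on
    # each of N points, while the BEC reads out all N points for the full time T.
    n_points = 100
    per_point_sensitivity = 6e-9                 # T/sqrt(Hz), single-point figure

    # White-noise-limited sensitivity improves as sqrt(integration time), i.e. sqrt(N).
    effective = per_point_sensitivity / n_points ** 0.5
    print(f"effective per-point sensitivity: {effective * 1e12:.0f} pT/sqrt(Hz)")  # ~600
    ```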

  1. The sequential organ failure assessment (SOFA) score is an effective triage marker following staggered paracetamol (acetaminophen) overdose.

    PubMed

    Craig, D G; Zafar, S; Reid, T W D J; Martin, K G; Davidson, J S; Hayes, P C; Simpson, K J

    2012-06-01

    The sequential organ failure assessment (SOFA) score is an effective triage marker following single time point paracetamol (acetaminophen) overdose, but has not been evaluated following staggered (multiple supratherapeutic doses over >8 h, resulting in cumulative dose of >4 g/day) overdoses. To evaluate the prognostic accuracy of the SOFA score following staggered paracetamol overdose. Time-course analysis of 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Individual timed laboratory samples were correlated with corresponding clinical parameters and the daily SOFA scores were calculated. A total of 39/50 (78%) patients developed hepatic encephalopathy. The area under the SOFA receiver operating characteristic for death/liver transplantation was 87.4 (95% CI 73.2-95.7), 94.3 (95% CI 82.5-99.1), and 98.4 (95% CI 84.3-100.0) at 0, 24 and 48 h, respectively, postadmission. A SOFA score of <6 at tertiary care admission predicted survival with a sensitivity of 100.0% (95% CI 76.8-100.0) and specificity of 58.3% (95% CI 40.8-74.5), compared with 85.7% (95% CI 60.6-97.4) and 75.0% (95% CI 65.2-79.5), respectively, for the modified King's College criteria. Only 2/21 patients with an admission SOFA score <6 required renal replacement therapy or intracerebral pressure monitoring. SOFA significantly outperformed the Model for End-stage Liver Disease, but not APACHE II, at 0, 24 and 48 h following admission. A SOFA score <6 at tertiary care admission following a staggered paracetamol overdose is associated with a good prognosis. Both the SOFA and APACHE II scores could improve triage of high-risk staggered paracetamol overdose patients. © 2012 Blackwell Publishing Ltd.

  2. Sequential Online Wellness Programming Is an Effective Strategy to Promote Behavior Change

    ERIC Educational Resources Information Center

    MacNab, Lindsay R.; Francis, Sarah L.

    2015-01-01

    The growing number of United States youth and adults categorized as overweight or obese illustrates a need for research-based family wellness interventions. Sequential, online, Extension-delivered family wellness interventions offer a time- and cost-effective approach for both participants and Extension educators. The 6-week, online Healthy…

  3. Two Tales of Time: Uncovering the Significance of Sequential Patterns among Contribution Types in Knowledge-Building Discourse

    ERIC Educational Resources Information Center

    Chen, Bodong; Resendes, Monica; Chai, Ching Sing; Hong, Huang-Yao

    2017-01-01

    As collaborative learning is actualized through evolving dialogues, temporality inevitably matters for the analysis of collaborative learning. This study attempts to uncover sequential patterns that distinguish "productive" threads of knowledge-building discourse. A database of Grade 1-6 knowledge-building discourse was first coded for…

  4. Real time on-chip sequential adaptive principal component analysis for data feature extraction and image compression

    NASA Technical Reports Server (NTRS)

    Duong, T. A.

    2004-01-01

    In this paper, we present a new, simple, and optimized hardware-architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which will help optimize the hardware implementation in VLSI and overcome the difficulties of traditional gradient descent in learning convergence and hardware implementation.
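    The record does not spell out the learning rule, so the sketch below uses Oja's rule, a standard sample-by-sample (sequential) PCA update, as a software stand-in for the kind of on-line adaptation such a hardware architecture would implement.

    ```python
    import numpy as np

    def oja_first_component(samples, lr=0.01, epochs=5, seed=0):
        """Sequential (sample-by-sample) estimation of the first principal
        component with Oja's rule, assuming zero-mean input data."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=samples.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in samples:
                y = w @ x
                w += lr * y * (x - y * w)      # Oja update keeps ||w|| near 1
        return w / np.linalg.norm(w)

    rng = np.random.default_rng(1)
    data = rng.normal(size=(2000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
    print("leading direction estimate:", oja_first_component(data).round(3))  # ~[±1, 0]
    ```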

  5. On-line diagnosis of sequential systems

    NASA Technical Reports Server (NTRS)

    Sundstrom, R. J.

    1973-01-01

    A model for on-line diagnosis was investigated for discrete-time systems and resettable sequential systems. Generalized notions of a realization are discussed along with fault tolerance and errors. Further investigation into the theory of on-line diagnosis is recommended for three levels: the binary state-assigned level, the logical circuit level, and the subsystem-network level.

   6. [Approach to percutaneous nephrolithotomy. Comparison of the one-shot versus the sequential technique with metal dilators].

    PubMed

    Sedano-Portillo, Ismael; Ochoa-León, Gastón; Fuentes-Orozco, Clotilde; Irusteta-Jiménez, Leire; Michel-Espinoza, Luis Rodrigo; Salazar-Parra, Marcela; Cuesta-Márquez, Lizbeth; González-Ojeda, Alejandro

    2017-01-01

    Percutaneous nephrolithotomy is an efficient approach for the treatment of different types of kidney stones. Various access techniques have been described, such as sequential dilatation and the one-shot procedure. The aim was to determine the differences in X-ray exposure time and hemoglobin levels between techniques. Controlled clinical trial. Patients older than 18 years with complex/uncomplicated kidney stones and without urinary infection were included. They were assigned randomly to one of the two techniques. Response variables were determined before and 24 h after the procedures. A total of 59 patients were included: 30 underwent the one-shot procedure (study group) and 29 sequential dilatation (control group). Baseline characteristics were similar. The study group had a lower postoperative hemoglobin decline than the control group (0.81 vs. 2.03 g/dl; p < 0.001), a shorter X-ray exposure time (69.6 vs. 100.62 s; p < 0.001), and lower postoperative serum creatinine levels (0.93 ± 0.29 vs. 1.13 ± 0.4 mg/dl; p = 0.039). No significant differences in postoperative morbidity were found. The one-shot technique demonstrated better results compared with sequential dilatation.

  7. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  8. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
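    A minimal random-walk accumulator conveys the role of the two parameters discussed above; here the response bias is implemented as a drift criterion and the speed-accuracy setting as the boundary separation, which is one common way (not necessarily the authors' exact parameterization) to realize them.

    ```python
    import random

    def sequential_sampler(drift, criterion, threshold, dt=0.01, noise=1.0, seed=None):
        """Minimal random-walk evidence accumulator: evidence drifts at rate
        (drift - criterion) toward the 'conflict' boundary; the threshold sets
        how much evidence is needed, trading speed against accuracy."""
        rng = random.Random(seed)
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += (drift - criterion) * dt + rng.gauss(0.0, noise) * dt ** 0.5
            t += dt
        return ("conflict" if evidence > 0 else "no conflict"), round(t, 2)

    # Raising `threshold` slows responses but reduces errors; raising `criterion`
    # biases the walk toward "no conflict" responses.
    print(sequential_sampler(drift=1.0, criterion=0.2, threshold=2.0, seed=4))
    ```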

  9. Labeling and Grouping Effects in the Recall of Pictures by Children

    ERIC Educational Resources Information Center

    Furth, Hans G.; Milgram, Norman A.

    1973-01-01

    Free recall of an array of pictures followed by sequential location recall was observed in children ages 4 to 12. Presentation conditions included arrays of pictures differing in salience of categories versus noncategorical array, and two task conditions, overt labeling versus pointing. Grouping effects on memory were systematically related to…

  10. Optical design of system for a lightship

    NASA Astrophysics Data System (ADS)

    Chirkov, M. A.; Tsyganok, E. A.

    2017-06-01

    This article presents the result of the optical design of an illuminating optical system for a lightship using a freeform surface. It describes an algorithm for the optical design of a side-emitting lens for a point source using the Freeform Z function in Zemax non-sequential mode, the optimization of the calculation results, and the testing of the optical system with a real diode.

  11. Cohort-Sequential Study of Conflict Inhibition during Middle Childhood

    ERIC Educational Resources Information Center

    Rollins, Leslie; Riggins, Tracy

    2017-01-01

    This longitudinal study examined developmental changes in conflict inhibition and error correction in three cohorts of children (5, 7, and 9 years of age). At each point of assessment, children completed three levels of Luria's tapping task (1980), which requires the inhibition of a dominant response and maintenance of task rules in working…

  12. The science of computing - Parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.

  13. Motor Timing Deficits in Sequential Movements in Parkinson Disease Are Related to Action Planning: A Motor Imagery Study

    PubMed Central

    Avanzino, Laura; Pelosin, Elisa; Martino, Davide; Abbruzzese, Giovanni

    2013-01-01

    Timing of sequential movements is altered in Parkinson disease (PD). Whether timing deficits in internally generated sequential movements in PD also depend on difficulties in motor planning, rather than merely on a defective ability to materially perform the planned movement, is still undefined. To address this issue, we adopted a modified version of an established test for motor timing, i.e. the synchronization–continuation paradigm, by introducing a motor imagery task. Motor imagery is thought to involve mainly processes of movement preparation, with reduced involvement of end-stage movement execution-related processes. Fourteen patients with PD and twelve matched healthy volunteers were asked to tap in synchrony with a metronome cue (SYNC) and then, when the tone stopped, to keep tapping, trying to maintain the same rhythm (CONT-EXE) or to imagine tapping at the same rhythm, rather than actually performing it (CONT-MI). We tested both a sub-second and a supra-second inter-stimulus interval between the cues. Performance was recorded using a sensor-engineered glove and analyzed by measuring the temporal error and the interval reproduction accuracy index. PD patients were less accurate than healthy subjects in the supra-second time reproduction task when performing both continuation tasks (CONT-MI and CONT-EXE), whereas no difference was detected in the synchronization task and on all tasks involving a sub-second interval. Our findings suggest that PD patients exhibit a selective deficit in motor timing for sequential movements that are separated by a supra-second interval and that this deficit may be explained by a defect of motor planning. Further, we propose that difficulties in motor planning are of a sufficient degree of severity in PD to also affect motor performance in the supra-second time reproduction task. PMID:24086534

  14. Paper-based microfluidics with an erodible polymeric bridge giving controlled release and timed flow shutoff.

    PubMed

    Jahanshahi-Anbuhi, Sana; Henry, Aleah; Leung, Vincent; Sicard, Clémence; Pennings, Kevin; Pelton, Robert; Brennan, John D; Filipe, Carlos D M

    2014-01-07

    Water soluble pullulan films were formatted into paper-based microfluidic devices, serving as a controlled time shutoff valve. The utility of the valve was demonstrated by a one-step, fully automatic implementation of a complex pesticide assay requiring timed, sequential exposure of an immobilized enzyme layer to separate liquid streams. Pullulan film dissolution and the capillary wicking of aqueous solutions through the device were measured and modeled providing valve design criteria. The films dissolve mainly by surface erosion, meaning the film thickness mainly controls the shutoff time. This method can also provide time-dependent sequential release of reagents without compromising the simplicity and low cost of paper-based devices.

  15. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    PubMed

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
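
    As a rough illustration of the weighted least squares idea described above, the following hedged Python sketch fits an inverse-probability-weighted regression of a patient-level outcome on an embedded-DTR indicator plus a baseline covariate. All variable names and weights are hypothetical, the data are simulated, and the sketch omits the cluster-robust variance estimation a real analysis would require; it is not the authors' estimator.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        # Hypothetical design: intercept, embedded-DTR indicator, baseline covariate
        X = np.column_stack([np.ones(n), rng.integers(0, 2, n), rng.normal(size=n)])
        y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)
        # Toy inverse-probability weights, e.g. 2 for once-randomized and 4 for twice-randomized clusters
        w = rng.choice([2.0, 4.0], size=n)

        # Closed-form weighted least squares: beta = (X' W X)^{-1} X' W y
        W = np.diag(w)
        beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        print(beta_hat)   # the coefficient on the indicator estimates the between-DTR mean difference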

  16. Robust registration in case of different scaling

    NASA Astrophysics Data System (ADS)

    Gluhchev, Georgi J.; Shalev, Shlomo

    1993-09-01

    The problem of robust registration in the case of anisotropic scaling has been investigated. Registration of two images using corresponding sets of fiducial points is sensitive to inaccuracies in point placement due to poor image quality or non-rigid distortions, including possible out-of-plane rotations. An approach aimed at the detection of the most unreliable points has been developed. It is based on the a priori knowledge of the sequential ordering of rotation and scaling. A measure of guilt derived from the anomalous geometric relationships is introduced. A heuristic decision rule allowing for deletion of the most guilty points is proposed. The approach allows for more precise evaluation of the translation vector. It has been tested on phantom images with known parameters and has shown satisfactory results.
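
    The abstract does not give the exact "measure of guilt", so the following Python sketch only illustrates the general recipe under stated assumptions: fit a 2D affine transform (which accommodates rotation plus anisotropic scaling) to corresponding fiducial points by least squares, score each point by its residual, and delete the worst-scoring point before refitting.

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 2D affine transform dst ~ M @ src + t."""
            n = len(src)
            A = np.zeros((2 * n, 6))
            A[0::2, 0:2] = src
            A[0::2, 4] = 1.0
            A[1::2, 2:4] = src
            A[1::2, 5] = 1.0
            params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
            return params[:4].reshape(2, 2), params[4:]

        def drop_most_unreliable(src, dst):
            """Delete the single point with the largest registration residual, then refit."""
            M, t = fit_affine(src, dst)
            residuals = np.linalg.norm(src @ M.T + t - dst, axis=1)
            keep = np.ones(len(src), dtype=bool)
            keep[np.argmax(residuals)] = False   # crude stand-in for the paper's "measure of guilt"
            return fit_affine(src[keep], dst[keep])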

  17. Use of Isobestic and Isoemission Points in Absorption and Luminescence Spectra for Study of the Transformation of Radiation Defects in Lithium Fluoride

    NASA Astrophysics Data System (ADS)

    Voitovich, A. P.; Kalinov, V. S.; Stupak, A. P.; Runets, L. P.

    2015-03-01

    Isobestic and isoemission points are recorded in the combined absorption and luminescence spectra of two types of radiation defects involved in complex processes consisting of several simultaneous parallel and sequential reactions. These points are observed if a constant sum of two terms, each formed by the product of the concentration of the corresponding defect and a characteristic integral coefficient associated with it, is conserved. The complicated processes involved in the transformation of radiation defects in lithium fluoride are studied using these points. It is found that the ratio of the changes in the concentrations of one of the components and the reaction product remains constant in the course of several simultaneous reactions.
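
    In formula form (symbols illustrative, not taken from the paper), the stated condition for such a point at a fixed wavelength \lambda_0 is

        k_1 c_1(t) + k_2 c_2(t) = const,

    where c_1 and c_2 are the concentrations of the two defect types and k_1, k_2 are their characteristic integral coefficients at \lambda_0; the points persist as long as this weighted sum is conserved while c_1 and c_2 change.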

  18. Fast online deconvolution of calcium imaging data

    PubMed Central

    Zhou, Pengcheng; Paninski, Liam

    2017-01-01

    Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state of the art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of O(10^5) traces of whole-brain larval zebrafish imaging data on a laptop. PMID:28291787
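
    For readers who want the flavor of the pooling scheme, here is a hedged Python sketch of a PAVA-style online deconvolution step for an AR(1) calcium model, solving min 0.5*||c - y||^2 subject to s_t = c_t - g*c_{t-1} >= 0. It omits the sparsity penalty, noise-based thresholding, and minimum-spike-size constraint discussed in the paper, so it is an illustration of the idea rather than the published algorithm.

        import numpy as np

        def deconvolve_ar1(y, g):
            """Pool-adjacent-violators-style online deconvolution for an AR(1) model."""
            pools = []                       # each pool: [value v, weight w, start t, length l]
            for t, yt in enumerate(y):
                pools.append([float(yt), 1.0, t, 1])
                # merge backwards while the implied spike at a pool boundary would be negative
                while len(pools) > 1 and \
                        pools[-1][0] / pools[-1][1] < g ** pools[-2][3] * pools[-2][0] / pools[-2][1]:
                    v1, w1, _, l1 = pools.pop()
                    v0, w0, t0, l0 = pools.pop()
                    pools.append([v0 + g ** l0 * v1, w0 + g ** (2 * l0) * w1, t0, l0 + l1])
            c = np.empty(len(y))
            for v, w, t, l in pools:         # within a pool the trace decays geometrically
                c[t:t + l] = max(v / w, 0.0) * g ** np.arange(l)
            s = np.append(c[0], c[1:] - g * c[:-1])   # inferred spiking activity
            return c, s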

  19. Speech Perception and Production by Sequential Bilingual Children: A Longitudinal Study of Voice Onset Time Acquisition

    PubMed Central

    McCarthy, Kathleen M; Mahon, Merle; Rosen, Stuart; Evans, Bronwen G

    2014-01-01

    The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this study explored the perception and production of the English voicing contrast in 55 children (40 Sylheti-English sequential bilinguals and 15 English monolinguals). Children were tested twice: when they were in nursery (52-month-olds) and 1 year later. Sequential bilinguals' perception and production of English plosives were initially driven by their experience with their L1, but after starting school, changed to match that of their monolingual peers. PMID:25123987

  20. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
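
    The abstract only sketches the strategy at a high level, so the following Python fragment is a heavily simplified illustration of one ingredient — scanning a random simulation path and batching consecutive nodes whose search neighbourhoods do not overlap, so that each batch can be simulated in parallel. It is not the authors' re-arrangement algorithm and ignores conditioning data, the kriging itself, and the exactness guarantees discussed in the paper.

        import numpy as np

        def batch_non_conflicting(coords, path, radius):
            """Greedily split a simulation path into waves of mutually non-conflicting nodes.

            coords : (n, d) array of grid-node coordinates
            path   : iterable of node indices (the random simulation path)
            radius : search radius; nodes closer than this are treated as conflicting
            """
            waves, current = [], []
            for node in path:
                p = coords[node]
                if any(np.linalg.norm(p - coords[m]) < radius for m in current):
                    waves.append(current)    # conflict: close the current wave
                    current = [node]
                else:
                    current.append(node)
            if current:
                waves.append(current)
            return waves                     # nodes within a wave can be simulated concurrently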

  1. Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator

    NASA Astrophysics Data System (ADS)

    Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.

    2018-04-01

    The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.

  2. Low Dimensional Temporal Organization of Spontaneous Eye Blinks in Adults with Developmental Disabilities and Stereotyped Movement Disorder

    PubMed Central

    Lee, Mei-Hua; Bodfish, James W.; Lewis, Mark H.; Newell, Karl M.

    2009-01-01

    This study investigated the mean rate and time-dependent sequential organization of spontaneous eye blinks in adults with intellectual and developmental disability (IDD) and individuals from this group that were additionally categorized with stereotypic movement disorder (IDD+SMD). The mean blink rate was lower in the IDD+SMD group than the IDD group and both of these groups had a lower blink rate than a contrast group of healthy adults. In the IDD group the n to n+1 sequential organization over time of the eye blink durations showed a stronger compensatory organization than the contrast group suggesting decreased complexity/dimensionality of eye-blink behavior. Very low blink rate (and thus insufficient time series data) precluded analysis of time-dependent sequential properties in the IDD+SMD group. These findings support the hypothesis that both IDD and SMD are associated with a reduction in the dimension and adaptability of movement behavior and that this may serve as a risk factor for the expression of abnormal movements. PMID:19819672

  3. Comparison of solution-mixed and sequentially processed P3HT: F4TCNQ films: effect of doping-induced aggregation on film morphology

    DOE PAGES

    Jacobs, Ian E.; Aasen, Erik W.; Oliveira, Julia L.; ...

    2016-03-23

    Doping polymeric semiconductors often drastically reduces the solubility of the polymer, leading to difficulties in processing doped films. Here, we compare optical, electrical, and morphological properties of P3HT films doped with F4TCNQ, both from mixed solutions and using sequential solution processing with orthogonal solvents. We demonstrate that sequential doping occurs rapidly (<1 s), and that the film doping level can be precisely controlled by varying the concentration of the doping solution. Furthermore, the choice of sequential doping solvent controls whether dopant anions are included or excluded from polymer crystallites. Atomic force microscopy (AFM) reveals that sequential doping produces significantly more uniform films on the nanoscale than the mixed-solution method. In addition, we show that mixed-solution doping induces the formation of aggregates even at low doping levels, resulting in drastic changes to film morphology. Sequentially coated films show 3–15 times higher conductivities at a given doping level than solution-doped films, with sequentially doped films processed to exclude dopant anions from polymer crystallites showing the highest conductivities. In conclusion, we propose a mechanism for doping induced aggregation in which the shift of the polymer HOMO level upon aggregation couples ionization and solvation energies. To show that the methodology is widely applicable, we demonstrate that several different polymer:dopant systems can be prepared by sequential doping.

  4. Comparison of solution-mixed and sequentially processed P3HT: F4TCNQ films: effect of doping-induced aggregation on film morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Ian E.; Aasen, Erik W.; Oliveira, Julia L.

    Doping polymeric semiconductors often drastically reduces the solubility of the polymer, leading to difficulties in processing doped films. Here, we compare optical, electrical, and morphological properties of P3HT films doped with F4TCNQ, both from mixed solutions and using sequential solution processing with orthogonal solvents. We demonstrate that sequential doping occurs rapidly (<1 s), and that the film doping level can be precisely controlled by varying the concentration of the doping solution. Furthermore, the choice of sequential doping solvent controls whether dopant anions are included or excluded from polymer crystallites. Atomic force microscopy (AFM) reveals that sequential doping produces significantly more uniform films on the nanoscale than the mixed-solution method. In addition, we show that mixed-solution doping induces the formation of aggregates even at low doping levels, resulting in drastic changes to film morphology. Sequentially coated films show 3–15 times higher conductivities at a given doping level than solution-doped films, with sequentially doped films processed to exclude dopant anions from polymer crystallites showing the highest conductivities. In conclusion, we propose a mechanism for doping induced aggregation in which the shift of the polymer HOMO level upon aggregation couples ionization and solvation energies. To show that the methodology is widely applicable, we demonstrate that several different polymer:dopant systems can be prepared by sequential doping.

  5. Comparison of Sequential Drug Release in Vitro and in Vivo

    PubMed Central

    Sundararaj, Sharath C.; Al-Sabbagh, Mohanad; Rabek, Cheryl L.; Dziubla, Thomas D.; Thomas, Mark V.; Puleo, David A.

    2015-01-01

    Development of drug delivery devices typically involves characterizing in vitro release performance with the inherent assumption that this will closely approximate in vivo performance. Yet, as delivery devices become more complex, for instance with a sequential drug release pattern, it is important to confirm that in vivo properties correlate with the expected “programming” achieved in vitro. In this work, a systematic comparison between in vitro and in vivo biomaterial erosion and sequential release was performed for a multilayered association polymer system comprising cellulose acetate phthalate and Pluronic F-127. After assessing the materials during incubation in phosphate-buffered saline, devices were implanted supracalvarially in rats. Devices with two different doses and with different erosion rates were harvested at increasing times post-implantation, and the in vivo thickness loss, mass loss, and the drug release profiles were compared with their in vitro counterparts. The sequential release of four different drugs observed in vitro was successfully translated to in vivo conditions. Results suggest, however, that the total erosion time of the devices was longer and release rates of the four drugs were different, with drugs initially released more quickly and then more slowly in vivo. Whereas many comparative studies of in vitro and in vivo drug release from biodegradable polymers involved a single drug, the present research demonstrated that sequential release of four drugs can be maintained following implantation. PMID:26111338

  6. Comparison of human embryo morphokinetic parameters in sequential or global culture media.

    PubMed

    Kazdar, Nadia; Brugnon, Florence; Bouche, Cyril; Jouve, Guilhem; Veau, Ségolène; Drapier, Hortense; Rousseau, Chloé; Pimentel, Céline; Viard, Patricia; Belaud-Rotureau, Marc-Antoine; Ravel, Célia

    2017-08-01

    A prospective study on randomized patients was conducted to determine how morphokinetic parameters are altered in embryos grown in sequential versus global culture media. Eleven morphokinetic parameters of 160 single embryos transferred were analyzed by time lapse imaging involving two University-affiliated in vitro fertilization (IVF) centers. We found that the fading of the two pronuclei occurred earlier in global (22.56±2.15 hpi) versus sequential media (23.63±2.71 hpi; p=0.0297). Likewise, the first cleavage started earlier at 24.52±2.33 hpi vs 25.76±2.95 hpi (p=0.0158). Also, the first cytokinesis was shorter in global medium, lasting 18±10.2 minutes in global versus 36±37.8 minutes in sequential culture medium (p <0.0001). We also observed a significant shortening in the duration of the 2-cell stage in sequential medium: 10.64 h±2.75 versus 11.66 h±1.11 in global medium (p=0.0225) which suggested a faster progression of the embryos through their first mitotic cell cycle. In conclusion, morphokinetic analysis of human embryos by Time lapse imaging reveals significant differences in five kinetic variables according to culture medium. Our study highlights the need to adapt morphokinetic analysis accordingly to the type of media used to best support human early embryo development.

  7. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    PubMed

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability- and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This also holds for monetarily incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks.
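
    Since the abstract refers to Cumulative Prospect Theory (Tversky & Kahneman, 1992), a small Python sketch of the standard functional form may help: it computes the certainty equivalent of a two-outcome, gains-only lottery using a power value function and an inverse-S probability weighting function. The parameter values are the commonly cited 1992 median estimates and are used purely for illustration; this is not the modeling pipeline of the paper.

        def cpt_certainty_equivalent(x_high, x_low, p_high, alpha=0.88, gamma=0.61):
            """Certainty equivalent of a two-outcome, all-gains lottery under CPT."""
            assert x_high >= x_low >= 0, "this sketch covers the gain domain only"
            v = lambda x: x ** alpha                                                   # value function
            w = lambda p: p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)  # probability weighting
            # Rank-dependent weights: the better outcome gets w(p_high), the other gets the residual weight.
            utility = w(p_high) * v(x_high) + (1 - w(p_high)) * v(x_low)
            return utility ** (1 / alpha)                                              # invert the value function

        # Example: a 30% chance of 100, otherwise 0
        print(round(cpt_certainty_equivalent(100, 0, 0.30), 2))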

  8. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    PubMed

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
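
    As a rough illustration of event-based lag-1 sequential analysis, the sketch below tallies transitions between coded gaze events and converts observed versus expected counts into adjusted residuals (z-scores). The coding scheme and data are hypothetical, and the sketch is not the analysis pipeline used in the study.

        import numpy as np

        def lag1_adjusted_residuals(events, codes):
            """Lag-1 transition counts and adjusted residuals for each antecedent -> target pair."""
            k = len(codes)
            idx = {c: i for i, c in enumerate(codes)}
            obs = np.zeros((k, k))
            for a, b in zip(events[:-1], events[1:]):
                obs[idx[a], idx[b]] += 1
            n = obs.sum()
            row = obs.sum(axis=1, keepdims=True)
            col = obs.sum(axis=0, keepdims=True)
            expected = row @ col / n
            z = (obs - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
            return obs, z

        # Hypothetical gaze stream: C = clinician-directed gaze, P = patient-directed gaze, O = other
        events = list("CPCPOCPCCPOPCPC")
        obs, z = lag1_adjusted_residuals(events, codes=["C", "P", "O"])
        print(z.round(2))   # |z| > 1.96 flags transitions occurring more or less often than chance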

  9. Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis

    PubMed Central

    Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B

    2011-01-01

    Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723

  10. Crystal Growth and Dissolution of Methylammonium Lead Iodide Perovskite in Sequential Deposition: Correlation between Morphology Evolution and Photovoltaic Performance.

    PubMed

    Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien

    2017-03-15

    Crystal morphology and structure are important for improving the organic-inorganic lead halide perovskite semiconductor properties in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are two major phenomena determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of immersion time in the second step, i.e., methylammonium iodide immersion, on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-staged, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal time for methylammonium iodide immersion and converging the solar cell performance in the sequential deposition route. Meanwhile, our results suggest that large, well-faceted methylammonium lead iodide perovskite single crystals may be grown by a solution process. This offers a low-cost route for synthesizing perovskite single crystals.

  11. Science documentary video slides to enhance education and communication

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Little, L. J.; Dodgson, K.

    2010-12-01

    Documentary production can convey powerful messages using a combination of authentic science and reinforcing video imagery. Conventional documentary production contains too much information for many viewers to follow; hence many powerful points may be lost. But documentary productions that are re-edited into short video sequences and made available through web based video servers allow the teacher/viewer to access the material as video slides. Each video slide contains one critical discussion segment of the larger documentary. A teacher/viewer can review the documentary one segment at a time in a class room, public forum, or in the comfort of home. The sequential presentation of the video slides allows the viewer to best absorb the documentary message. The website environment provides space for additional questions and discussion to enhance the video message.

  12. A Pocock Approach to Sequential Meta-Analysis of Clinical Trials

    ERIC Educational Resources Information Center

    Shuster, Jonathan J.; Neu, Josef

    2013-01-01

    Three recent papers have provided sequential methods for meta-analysis of two-treatment randomized clinical trials. This paper provides an alternate approach that has three desirable features. First, when carried out prospectively (i.e., we only have the results up to the time of our current analysis), we do not require knowledge of the…

  13. Predictive Movements and Human Reinforcement Learning of Sequential Action

    ERIC Educational Resources Information Center

    de Kleijn, Roy; Kachergis, George; Hommel, Bernhard

    2018-01-01

    Sequential action makes up the bulk of human daily activity, and yet much remains unknown about how people learn such actions. In one motor learning paradigm, the serial reaction time (SRT) task, people are taught a consistent sequence of button presses by cueing them with the next target response. However, the SRT task only records keypress…

  14. Similar Neural Correlates for Language and Sequential Learning: Evidence from Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Christiansen, Morten H.; Conway, Christopher M.; Onnis, Luca

    2012-01-01

    We used event-related potentials (ERPs) to investigate the time course and distribution of brain activity while adults performed (1) a sequential learning task involving complex structured sequences and (2) a language processing task. The same positive ERP deflection, the P600 effect, typically linked to difficult or ungrammatical syntactic…

  15. Evaluation of floating-point sum or difference of products in carry-save domain

    NASA Technical Reports Server (NTRS)

    Wahab, A.; Erdogan, S.; Premkumar, A. B.

    1992-01-01

    An architecture to evaluate a 24-bit floating-point sum or difference of products using modified sequential carry-save multipliers with extensive pipelining is described. The basic building block of the architecture is a carry-save multiplier with built-in mantissa alignment for the summation during the multiplication cycles. A carry-save adder, capable of mantissa alignment, correctly positions products with the current carry-save sum. Carry propagation in individual multipliers is avoided and is only required once to produce the final result.
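
    A toy Python model of the carry-save principle underlying the architecture may help: three operands are reduced to a sum word and a carry word with no carry propagation between bit positions, and carries are resolved only once in a final addition. This integer sketch is illustrative only and does not model the 24-bit floating-point datapath, mantissa alignment, or pipelining described above.

        def carry_save_add(a, b, c):
            """Reduce three integers to (sum, carry) without propagating carries between bits."""
            s = a ^ b ^ c                                  # per-bit sum
            carry = ((a & b) | (b & c) | (a & c)) << 1     # per-bit carries, shifted to the next position
            return s, carry

        # Accumulate several partial products in carry-save form; propagate carries only once at the end.
        partials = [0b1011, 0b0110, 0b1110, 0b0011]
        s, carry = 0, 0
        for p in partials:
            s, carry = carry_save_add(s, carry, p)
        print(s + carry, sum(partials))   # both print 34: the single final addition resolves all carries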

  16. Patterned photostimulation with digital micromirror devices to investigate dendritic integration across branch points.

    PubMed

    Liang, Conrad W; Mohammadi, Michael; Santos, M Daniel; Santos, M Danial; Tang, Cha-Min

    2011-03-02

    Light is a versatile and precise means to control neuronal excitability. The recent introduction of light-sensitive effectors such as channelrhodopsin and caged neurotransmitters has led to interest in developing better means to control patterns of light in space and time that are useful for experimental neuroscience. One conventional strategy, employed in confocal and 2-photon microscopy, is to focus light to a diffraction-limited spot and then scan that single spot sequentially over the region of interest. This approach becomes problematic if large areas have to be stimulated within a brief time window, a problem more applicable to photostimulation than to imaging. An alternate strategy is to project the complete spatial pattern onto the target with the aid of a digital micromirror device (DMD). The DMD approach is appealing because the hardware components are relatively inexpensive and commercially supported. Because such a system is not available for upright microscopes, we discuss the critical issues in the construction and operation of such a DMD system. Even though we primarily describe the construction of the system for UV photolysis, the modifications for building the much simpler visible-light system for optogenetic experiments are also provided. The UV photolysis system was used to carry out experiments to study a fundamental question in neuroscience: how are spatially distributed inputs integrated across distal dendritic branch points? The results suggest that integration can be non-linear across branch points and that the supralinearity is largely mediated by NMDA receptors.

  17. Longer-term functional outcomes and everyday listening performance for young children through to young adults using bilateral implants.

    PubMed

    Galvin, Karyn Louise; Holland, Jennifer Frances; Hughes, Kathryn Clare

    2014-01-01

    First, to document a broad range of functional outcomes of bilateral implantation for young children through young adults at a postoperative point at which stable outcomes could be expected. Second, to evaluate the relationship between functional outcomes and age at bilateral implantation and time between implants. A study-specific questionnaire was administered to parents in an interview 3.5 years or more after sequential (n = 50) or simultaneous (n = 7) implants were received by their child. Median age at bilateral implantation was 4.1 years (range 0.7 to 19.8) and time between implants was 2.7 years (range 0.0 to 16.7). On the basis of parent report, 72% of the sequentially implanted children and young adults found it easy/only "a bit difficult" to adapt to the second implant, and were "happily wearing both implants together most of the time" by 6 months or before; 26% had not adapted, with both implants not worn most of the time or worn as a parental requirement. Seventy-two percent of sequentially implanted children and young adults had a positive attitude toward the second implant, including 9 whose early postoperative attitude was negative or neutral. The majority of children and young adults preferred bilateral implants (70%) and used the two full time (72%), while around half demonstrated similar performance with each implant alone. The proportion of nonusers or very minimal users of the second implant was just 9%. Eighty-eight percent of parents reported superior performance with bilateral versus a unilateral implant (n = 40), or that only bilateral implants were worn (n = 10) so performance could not be compared. The most commonly identified areas of superiority were localization, less need for repetition, and increased responsiveness. In balancing risks and costs with benefits, most parents (86%) considered the second implant worthwhile. Regarding the relationship between outcomes and demographic factors, the group achieving similar performance with each implant alone was younger at bilateral implantation and had less time between implants, and the group bilaterally implanted before 3.5 years of age (who also had less than 2 years between implants) had a higher proportion of positive outcomes on all functional outcome measures. Overall, the results indicate primarily positive functional outcomes for children and young adults receiving bilateral implants at all ages, including when the delay between implants is long. The results are important for evidence-based preoperative counseling, which helps families to make informed decisions and develop appropriate expectations. The results are also important for the development of clinical management practices that support and encourage the minority of recipients who have difficulty adapting to bilateral implants or achieving full-time use.

  18. Simultaneous bilateral stereotactic procedure for deep brain stimulation implants: a significant step for reducing operation time.

    PubMed

    Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras

    2016-07-01

    OBJECT Currently, bilateral procedures involve 2 sequential implantations, one in each hemisphere. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional stereotactic procedures.

  19. Application of a modified sequential organ failure assessment score to critically ill patients

    PubMed Central

    Ñamendys-Silva, S.A.; Silva-Medina, M.A.; Vásquez-Barahona, G.M.; Baltazar-Torres, J.A.; Rivero-Sigarroa, E.; Fonseca-Lazcano, J.A.; Domínguez-Cherit, G.

    2013-01-01

    The purpose of the present study was to explore the usefulness of the Mexican sequential organ failure assessment (MEXSOFA) score for assessing the risk of mortality for critically ill patients in the ICU. A total of 232 consecutive patients admitted to an ICU were included in the study. The MEXSOFA was calculated using the original SOFA scoring system with two modifications: the PaO2/FiO2 ratio was replaced with the SpO2/FiO2 ratio, and the evaluation of neurologic dysfunction was excluded. The ICU mortality rate was 20.2%. Patients with an initial MEXSOFA score of 9 points or less calculated during the first 24 h after admission to the ICU had a mortality rate of 14.8%, while those with an initial MEXSOFA score of 10 points or more had a mortality rate of 40%. The MEXSOFA score at 48 h was also associated with mortality: patients with a score of 9 points or less had a mortality rate of 14.1%, while those with a score of 10 points or more had a mortality rate of 50%. In a multivariate analysis, only the MEXSOFA score at 48 h was an independent predictor for in-ICU death with an OR = 1.35 (95%CI = 1.14-1.59, P < 0.001). The SOFA and MEXSOFA scores calculated 24 h after admission to the ICU demonstrated a good level of discrimination for predicting the in-ICU mortality risk in critically ill patients. The MEXSOFA score at 48 h was an independent predictor of death; with each 1-point increase, the odds of death increased by 35%. PMID:23369978

  20. 99mTc-fibrinogen scanning in adult respiratory distress syndrome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, D.A.; Carvalho, A.C.; Geller, E.

    1987-01-01

    Fibrin is often seen occluding the lung vessels of patients dying from ARDS and is surrounded by regions of lung necrosis. To learn if we could observe increased or focal fibrin deposition and assess the kinetics of plasma fibrinogen turnover during severe acute respiratory failure, we injected technetium 99m-labeled human purified fibrinogen (Tc-HF) and used gamma camera scanning for as long as 12 h in 13 sequential patients as soon as possible after ICU admission. The fibrinogen uptake rates were determined by calculating the lung:heart radioactivity ratios at each time point. Slopes of the lung:heart ratio versus time were compared between ARDS and mild acute respiratory failure (ARF). The slope of the lung:heart Tc-HF ratio of the 9 patients with ARDS (2.9 +/- 0.4 units) was markedly higher (p less than 0.02) than the slope of the 4 patients with mild ARF (1.1 +/- 0.4) and the 3 patients studied 5 to 9 months after recovery from respiratory failure (0.7 +/- 0.07). In the 1 patient with ARDS and the 2 patients with mild ARF studied both during acute lung injury and after recovery, the lung:heart Tc-HF ratio had decreased at recovery. To compare the pulmonary uptake of Tc-HF to 99mTc-labeled human serum albumin (Tc-HSA), 5 patients were injected with 10 mCi of Tc-HSA, and scanning of the thorax was performed with a similar sequential imaging protocol 24 h after conclusion of the Tc-HF study.

  1. A behavioural and neural evaluation of prospective decision-making under risk

    PubMed Central

    Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.

    2010-01-01

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal pre-determined strategy, irrespective of the particular order in which options are presented. An alternative model involves continuously re-evaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and use these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of re-evaluating decision utilities, where available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously-acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes. PMID:20980595

  2. A behavioral and neural evaluation of prospective decision-making under risk.

    PubMed

    Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J

    2010-10-27

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single-choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal predetermined strategy, regardless of the particular order in which options are presented. An alternative model involves continuously reevaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and use these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of reevaluating decision utilities, in which available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance, and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes.

  3. 128-slice Dual-source Computed Tomography Coronary Angiography in Patients with Atrial Fibrillation: Image Quality and Radiation Dose of Prospectively Electrocardiogram-triggered Sequential Scan Compared with Retrospectively Electrocardiogram-gated Spiral Scan.

    PubMed

    Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming

    2013-01-01

    Objective To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with ECG-gated spiral scan mode in a population with atrial fibrillation. Methods Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for a second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Neither the mean HR (t=1.91, P=0.243) nor the HR variation range (t=0.950, P=0.350) differed significantly between the two groups. In per-segment analysis, IQ of the sequential group vs. spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader 1 and 197/245 (80%) vs. 174/214 (81%) by reader 2, and as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader 1 and 6/245 (2%) vs. 4/214 (2%) by reader 2. Overall averaged IQ per patient in the sequential and spiral groups was equally good (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was significantly reduced compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion Compared with retrospectively ECG-gated spiral scan, prospectively ECG-triggered sequential DSCT coronary angiography provides similarly diagnostically valuable images in patients with atrial fibrillation and significantly reduces radiation dose.

  4. The effectiveness of zinc supplementation in men with isolated hypogonadotropic hypogonadism.

    PubMed

    Liu, Yan-Ling; Zhang, Man-Na; Tong, Guo-Yu; Sun, Shou-Yue; Zhu, Yan-Hua; Cao, Ying; Zhang, Jie; Huang, Hong; Niu, Ben; Li, Hong; Guo, Qing-Hua; Gao, Yan; Zhu, Da-Long; Li, Xiao-Ying

    2017-01-01

    A multicenter, open-label, randomized, controlled superiority trial with 18 months of follow-up was conducted to investigate whether oral zinc supplementation could further promote spermatogenesis in males with isolated hypogonadotropic hypogonadism (IHH) receiving sequential purified urinary follicular-stimulating hormone/human chorionic gonadotropin (uFSH/hCG) replacement. Sixty-seven Chinese male IHH patients were recruited from the Departments of Endocrinology in eight tertiary hospitals and randomly allocated into the sequential uFSH/hCG group (Group A, n = 34) or the sequential uFSH plus zinc supplementation group (Group B, n = 33). In Group A, patients received sequential uFSH (75 U, three times a week every other 3 months) and hCG (2000 U, twice a week) treatments. In Group B, patients received oral zinc supplementation (40 mg day^-1) in addition to the sequential uFSH/hCG treatment given to patients in Group A. The primary outcome was the proportion of patients with a sperm concentration ≥1.0 × 10^6 ml^-1 during the 18 months. The comparison of efficacy between Groups A and B was analyzed. Nineteen of 34 (55.9%) patients receiving sequential uFSH/hCG and 20 of 33 (60.6%) patients receiving sequential uFSH/hCG plus zinc supplementation achieved sperm concentrations ≥1.0 × 10^6 ml^-1 by intention to treat analyses. No differences between Group A and Group B were observed as far as the efficacy of inducing spermatogenesis (P = 0.69). We concluded that the sequential uFSH/hCG plus zinc supplementation regimen had a similar efficacy to the sequential uFSH/hCG treatment alone. The additional improvement of 40 mg day^-1 oral zinc supplementation on spermatogenesis and masculinization in male IHH patients is very subtle.

  5. Trend analysis and change point detection of annual and seasonal temperature series in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Suhaila, Jamaludin; Yusop, Zulkifli

    2017-06-01

    Most trend analyses that have been conducted have not considered the existence of a change point in the time series. If one exists, the trend analysis may fail to detect an obvious increasing or decreasing trend over certain parts of the series. Furthermore, the lack of discussion on the possible factors that influenced either the decreasing or the increasing trend in the series needs to be addressed in any trend analysis. Hence, this study investigates the trends and change point detection of mean, maximum and minimum temperature series, both annually and seasonally, in Peninsular Malaysia and determines the possible factors that could contribute to the significant trends. In this study, Pettitt and sequential Mann-Kendall (SQ-MK) tests were used to examine the occurrence of any abrupt climate changes in the independent series. The analyses of the abrupt changes in temperature series suggested that most of the change points in Peninsular Malaysia were detected during the years 1996, 1997 and 1998. These detection points captured by the Pettitt and SQ-MK tests are possibly related to climatic factors, such as El Niño and La Niña events. The findings also showed that the majority of the significant change points in the series are related to the significant trends of the stations. Significant increasing trends of annual and seasonal mean, maximum and minimum temperatures in Peninsular Malaysia were found, with a range of 2-5 °C/100 years during the last 32 years. It was observed that the magnitudes of the increasing trends in minimum temperatures were larger than those in maximum temperatures for most of the studied stations, particularly at the urban stations. These increases are suspected to be linked to the urban heat island effect in addition to El Niño events.
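
    For orientation, the sketch below implements the basic (non-sequential) Mann-Kendall trend statistic in Python, without the tie correction. The sequential Mann-Kendall (SQ-MK) and Pettitt change-point procedures used in the study build on the same S statistic but are not reproduced here; the example series is hypothetical.

        import math

        def mann_kendall(x):
            """Basic Mann-Kendall trend test (no tie correction): returns S and the Z statistic."""
            n = len(x)
            s = sum((x[j] > x[i]) - (x[j] < x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            if s > 0:
                z = (s - 1) / math.sqrt(var_s)
            elif s < 0:
                z = (s + 1) / math.sqrt(var_s)
            else:
                z = 0.0
            return s, z   # |z| > 1.96 indicates a significant monotonic trend at the 5% level

        # Hypothetical annual mean temperatures (degrees C)
        print(mann_kendall([26.1, 26.3, 26.2, 26.5, 26.4, 26.7, 26.8, 27.0, 26.9, 27.2]))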

  6. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level.

  7. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  8. A Sequential Linear Quadratic Approach for Constrained Nonlinear Optimal Control with Adaptive Time Discretization and Application to Higher Elevation Mars Landing Problem

    NASA Astrophysics Data System (ADS)

    Sandhu, Amit

    A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm builds on the approach proposed in [1], with the objective of eliminating the need for a high number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is then used in this thesis to solve a trajectory planning problem for higher-elevation Mars landing.
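
    To make the flavor of the approach concrete, here is a hedged Python sketch that solves a toy optimal control problem (a double integrator steered from rest at x = 0 to x = 1 with zero final velocity) by direct transcription on a fixed, uniform time grid, using SciPy's SLSQP sequential quadratic programming solver. It is not the thesis algorithm: the adaptive time discretization and general path constraints are not shown, and the problem and all names are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        N, T = 20, 1.0            # number of intervals and horizon
        h = T / N

        def unpack(z):
            x = z[:N + 1]                     # positions at grid points
            v = z[N + 1:2 * (N + 1)]          # velocities at grid points
            u = z[2 * (N + 1):]               # piecewise-constant controls on the intervals
            return x, v, u

        def objective(z):
            _, _, u = unpack(z)
            return h * np.sum(u ** 2)         # integrated control effort

        def defects(z):
            # Euler-discretized dynamics enforced as equality constraints
            x, v, u = unpack(z)
            return np.concatenate([x[1:] - x[:-1] - h * v[:-1],
                                   v[1:] - v[:-1] - h * u])

        def boundary(z):
            x, v, _ = unpack(z)
            return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

        res = minimize(objective, np.zeros(3 * N + 2), method="SLSQP",
                       constraints=[{"type": "eq", "fun": defects},
                                    {"type": "eq", "fun": boundary}],
                       options={"maxiter": 200})
        x_opt, v_opt, u_opt = unpack(res.x)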

  9. REQUEST: A Recursive QUEST Algorithm for Sequential Attitude Determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1996-01-01

    In order to find the attitude of a spacecraft with respect to a reference coordinate system, vector measurements are taken. The vectors are pairs of measurements of the same generalized vector, taken in the spacecraft body coordinates as well as in the reference coordinate system. We are interested in finding the best estimate of the transformation between these coordinate systems. The algorithm called QUEST yields that estimate, where attitude is expressed by a quaternion. QUEST is an efficient algorithm which provides a least-squares fit of the quaternion of rotation to the vector measurements. QUEST, however, is a single-time-point (single-frame) batch algorithm, thus measurements that were taken at previous time points are discarded. The algorithm presented in this work provides a recursive routine which considers all past measurements. The algorithm is based on the fact that the so-called K matrix, one of whose eigenvectors is the sought quaternion, is linearly related to the measured pairs, and on the ability to propagate K. The extraction of the appropriate eigenvector is done according to the classical QUEST algorithm. This stage, however, can be eliminated, and the computation simplified, if a standard eigenvalue-eigenvector solver algorithm is used. The development of the recursive algorithm is presented and illustrated via a numerical example.
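
    A short Python sketch of the single-frame step the abstract builds on may be useful: assemble Davenport's K matrix from weighted (body, reference) unit-vector pairs and take the eigenvector associated with its largest eigenvalue as the attitude quaternion, using a standard eigensolver as the abstract itself suggests. The recursive propagation of K is not shown, and quaternion ordering and sign conventions vary between references.

        import numpy as np

        def davenport_k(body_vecs, ref_vecs, weights):
            """Davenport K matrix built from weighted pairs of unit vectors."""
            B = sum(w * np.outer(b, r) for b, r, w in zip(body_vecs, ref_vecs, weights))
            z = sum(w * np.cross(b, r) for b, r, w in zip(body_vecs, ref_vecs, weights))
            sigma = np.trace(B)
            K = np.zeros((4, 4))
            K[:3, :3] = B + B.T - sigma * np.eye(3)
            K[:3, 3] = z
            K[3, :3] = z
            K[3, 3] = sigma
            return K

        def optimal_quaternion(body_vecs, ref_vecs, weights):
            """Eigenvector of K belonging to the largest eigenvalue (vector part first, scalar last here)."""
            vals, vecs = np.linalg.eigh(davenport_k(body_vecs, ref_vecs, weights))
            q = vecs[:, np.argmax(vals)]
            return q / np.linalg.norm(q)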

  10. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    NASA Astrophysics Data System (ADS)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables suitable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multi-facies, nonstationary, and 3D simulations based on 2D TIs.
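    The toy sketch below illustrates the general vector-quantization idea (not the paper's tree-structured, classified VQ): patterns extracted from a training image with a small template are compressed into a codebook, and a conditioning data event is then matched against the codebook instead of the full pattern database. The template size, codebook size, and random stand-in training image are assumptions.

```python
# Toy sketch: compress an MPS-style pattern database with vector quantization
# so retrieval scales with the codebook, not the full database. Illustrative only.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(0)
ti = (rng.random((200, 200)) < 0.3).astype(float)    # stand-in categorical TI

# extract 3x3 patterns with a sliding template and flatten them to vectors
win = 3
patterns = np.array([ti[i:i + win, j:j + win].ravel()
                     for i in range(ti.shape[0] - win)
                     for j in range(ti.shape[1] - win)])

codebook, _ = kmeans2(patterns, 64, minit='points')  # compressed database
codes, _ = vq(patterns, codebook)                    # index every pattern once

# at simulation time, a data event is matched to its code instead of scanning all patterns
event = patterns[12345]
code, _ = vq(event[None, :], codebook)
candidates = np.flatnonzero(codes == code[0])        # patterns sharing that code
print(len(patterns), "patterns compressed to", len(codebook), "codes;",
      len(candidates), "candidates for this event")
```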

  11. Does a String-Particle Dualism Indicate the Uncertainty Principle's Philosophical Dichotomy?

    NASA Astrophysics Data System (ADS)

    Mc Leod, David; Mc Leod, Roger

    2007-04-01

    String theory may allow resonances of neutrino-wave-strings to account for all experimentally detected phenomena. Particle theory logically, and physically, provides an alternate, contradictory dualism. Is it contradictory to symbolically and simultaneously state that λp = h, but that the product of position and momentum must be greater than, or equal to, the same (scaled) Planck's constant? Our previous electron and positron models require `membrane' vibrations of string-linked neutrinos, in closed loops, to behave like traveling waves, Tws, intermittently metamorphosing into alternately ascending and descending standing waves, Sws, between the nodes, which advance sequentially through 360 degrees. Accumulated time passages as Tws detail required ``loop currents'' supplying magnetic moments. Remaining time partitions into the Sws' alternately ascending and descending phases: the physical basis of the experimentally established 3D modes of these ``particles.'' Waves seem to indicate that point mass cannot be required to exist instantaneously at one point; Mott's and Sneddon's Wave Mechanics says that a constant, [mass], is present. String-like resonances may also account for homeopathy's efficacy, dark matter, and constellations' ``stick-figure projections,'' as indicated by some traditional cultures, all possibly involving neutrino strings. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.NES07.C2.5

  12. Comparing Coordinated Versus Sequential Salpingo-Oophorectomy for BRCA1 and BRCA2 Mutation Carriers With Breast Cancer.

    PubMed

    S Chapman, Jocelyn; Roddy, Erika; Panighetti, Anna; Hwang, Shelley; Crawford, Beth; Powell, Bethan; Chen, Lee-May

    2016-12-01

    Women with breast cancer who carry BRCA1 or BRCA2 mutations must also consider risk-reducing salpingo-oophorectomy (RRSO) and how to coordinate this procedure with their breast surgery. We report the factors associated with coordinated versus sequential surgery and compare the outcomes of each. Patients in our cancer risk database who had breast cancer and a known deleterious BRCA1/2 mutation before undergoing breast surgery were included. Women who chose concurrent RRSO at the time of breast surgery were compared to those who did not. Sixty-two patients knew their mutation carrier status before undergoing breast cancer surgery. Forty-three patients (69%) opted for coordinated surgeries, and 19 (31%) underwent sequential surgeries at a median follow-up of 4.4 years. Women who underwent coordinated surgery were significantly older than those who chose sequential surgery (median age of 45 vs. 39 years; P = .025). There were no differences in comorbidities between groups. Patients who received neoadjuvant chemotherapy were more likely to undergo coordinated surgery (65% vs. 37%; P = .038). Sequential surgery patients had longer hospital stays (4.79 vs. 3.44 days, P = .01) and longer operating times (8.25 vs. 6.38 hours, P = .006) than patients who elected combined surgery. Postoperative complications were minor and were no more likely in either group (odds ratio, 4.76; 95% confidence interval, 0.56-40.6). Coordinating RRSO with breast surgery is associated with receipt of neoadjuvant chemotherapy, longer operating times, and hospital stays without an observed increase in complications. In the absence of risk, surgical options can be personalized. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Evidence for decreased interaction and improved carotenoid bioavailability by sequential delivery of a supplement.

    PubMed

    Salter-Venzon, Dawna; Kazlova, Valentina; Izzy Ford, Samantha; Intra, Janjira; Klosner, Allison E; Gellenbeck, Kevin W

    2017-05-01

    Despite the notable benefits of carotenoids for human health, the majority of human diets worldwide are repeatedly shown to fall short of current health recommendations for intake of carotenoid-rich fruits and vegetables. To address this deficit, strategies designed to increase dietary intakes and subsequent plasma levels of carotenoids are warranted. When mixed carotenoids are delivered into the intestinal tract simultaneously, competition occurs for micelle formation and absorption, affecting carotenoid bioavailability. Previously, we tested the in vitro viability of a carotenoid mix designed to deliver individual carotenoids sequentially spaced from one another over the 6 hr transit time of the human upper gastrointestinal system. We hypothesized that temporally and spatially separating the individual carotenoids would reduce competition for micelle formation, improve uptake, and maximize efficacy. Here, we test this hypothesis in a double-blind, repeated-measure, cross-over human study with 12 subjects by comparing the change in plasma carotenoid levels for 8 hr after oral doses of a sequentially spaced carotenoid mix to a matched mix without sequential spacing. We find the carotenoid change from baseline, measured as area under the curve, is increased following consumption of the sequentially spaced mix compared to concomitant carotenoid delivery. These results demonstrate reduced interaction and regulation between the sequentially spaced carotenoids, suggesting improved bioavailability from a novel sequentially spaced carotenoid mix.

  14. Sequential vs simultaneous encoding of spatial information: a comparison between the blind and the sighted.

    PubMed

    Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina

    2012-02-01

    The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is concerned with the visual impairment per se or the processing style that the dominant perceptual modalities used to acquire spatial information impose, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group. Their visual exploration was made sequential by putting visual obstacles within the pathway in such a way that they could not see simultaneously the positions along the pathway. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenital. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5m). This threshold effect could be revealing of processing limitations due to the need of integrating and updating spatial information. Overall, the results suggest that the characteristics of the processing style rather than the visual impairment per se affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Sequential self-assembly of DNA functionalized droplets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yin; McMullen, Angus; Pontani, Lea-Laetitia

    Complex structures and devices, both natural and manmade, are often constructed sequentially. From crystallization to embryogenesis, a nucleus or seed is formed and built upon. Sequential assembly allows for initiation, signaling, and logical programming, which are necessary for making enclosed, hierarchical structures. Though biology relies on such schemes, they have not been available in materials science. We demonstrate programmed sequential self-assembly of DNA functionalized emulsions. The droplets are initially inert because the grafted DNA strands are pre-hybridized in pairs. Active strands on initiator droplets then displace one of the paired strands and thus release its complement, which in turn activates the next droplet in the sequence, akin to living polymerization. This strategy provides time and logic control during the self-assembly process, and offers a new perspective on the synthesis of materials.

  16. A sequential adaptation technique and its application to the Mark 12 IFF system

    NASA Astrophysics Data System (ADS)

    Bailey, John S.; Mallett, John D.; Sheppard, Duane J.; Warner, F. Neal; Adams, Robert

    1986-07-01

    Sequential adaptation uses only two sets of receivers, correlators, and A/D converters which are time-multiplexed to effect spatial adaptation in a system with (N) adaptive degrees of freedom. This technique can substantially reduce the hardware cost over what is realizable in a parallel architecture. A three-channel L-band version of the sequential adapter was built and tested for use with the MARK XII IFF (identification friend or foe) system. In this system the sequentially determined adaptive weights were obtained digitally but implemented at RF. As a result, many of the post-RF, hardware-induced sources of error that normally limit cancellation, such as receiver mismatch, are removed by the feedback property. The result is a system that can yield high levels of cancellation and be readily retrofitted to currently fielded equipment.

  17. Sequential self-assembly of DNA functionalized droplets

    DOE PAGES

    Zhang, Yin; McMullen, Angus; Pontani, Lea-Laetitia; ...

    2017-06-16

    Complex structures and devices, both natural and manmade, are often constructed sequentially. From crystallization to embryogenesis, a nucleus or seed is formed and built upon. Sequential assembly allows for initiation, signaling, and logical programming, which are necessary for making enclosed, hierarchical structures. Though biology relies on such schemes, they have not been available in materials science. We demonstrate programmed sequential self-assembly of DNA functionalized emulsions. The droplets are initially inert because the grafted DNA strands are pre-hybridized in pairs. Active strands on initiator droplets then displace one of the paired strands and thus release its complement, which in turn activates the next droplet in the sequence, akin to living polymerization. This strategy provides time and logic control during the self-assembly process, and offers a new perspective on the synthesis of materials.

  18. Immortal time bias: a frequently unrecognized threat to validity in the evaluation of postoperative radiotherapy.

    PubMed

    Park, Henry S; Gross, Cary P; Makarov, Danil V; Yu, James B

    2012-08-01

    To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias. Copyright © 2012 Elsevier Inc. All rights reserved.
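    To make the mechanism concrete, the toy simulation below (made-up data, not SEER or the paper's models) gives PORT no true effect, yet the naive comparison favors it because patients must survive long enough to receive PORT; restricting to patients alive at sequentially later landmarks shrinks the apparent benefit, which is the behavior the study describes.

```python
# Toy illustration of immortal time bias and sequential landmark analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
delay = rng.uniform(1, 4, n)                   # months from surgery to start of PORT
survival = rng.exponential(24.0, n)            # survival independent of treatment here
eligible = survival > delay                    # must survive long enough to start PORT
received_port = eligible & (rng.random(n) < 0.5)

for landmark in [0, 2, 4, 6]:
    alive = survival >= landmark               # restrict to patients alive at the landmark
    s, p = survival[alive], received_port[alive]
    ratio = s[p].mean() / s[~p].mean()
    print(f"landmark {landmark:>2} mo: mean survival PORT/no-PORT = {ratio:.2f}")
```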

  19. Immortal Time Bias: A Frequently Unrecognized Threat to Validity in the Evaluation of Postoperative Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Henry S.; Gross, Cary P.; Makarov, Danil V.

    2012-08-01

    Purpose: To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. Methods and Materials: First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Results: Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Conclusions: Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias.

  20. An algorithm for propagating the square-root covariance matrix in triangular form

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1976-01-01

    A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
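    The sketch below shows one standard way to propagate a lower-triangular square-root covariance by QR re-triangularization; it is a generic technique offered for illustration, not necessarily the specific algorithm of the report.

```python
# Minimal sketch: time update of a square-root covariance via QR factorization.
import numpy as np

def propagate_sqrt_cov(S, F, Q_sqrt):
    """S: lower-triangular with P = S S^T; F: state transition matrix;
    Q_sqrt: any square root of the process noise Q.
    Returns lower-triangular S_new with S_new S_new^T = F P F^T + Q."""
    M = np.hstack([F @ S, Q_sqrt])
    _, R = np.linalg.qr(M.T)          # R^T R = M M^T
    S_new = R.T
    # enforce a non-negative diagonal (QR is unique only up to column signs)
    signs = np.sign(np.diag(S_new))
    signs[signs == 0] = 1.0
    return S_new * signs

# quick check on a random example
rng = np.random.default_rng(0)
A = rng.random((3, 3)); P = A @ A.T
S = np.linalg.cholesky(P)
F = np.eye(3) + 0.1 * rng.random((3, 3))
Qs = 0.05 * np.eye(3)
S_new = propagate_sqrt_cov(S, F, Qs)
print(np.allclose(S_new @ S_new.T, F @ P @ F.T + Qs @ Qs.T))
```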

  1. Should Bilingual Children Learn Reading in Two Languages at the Same Time or in Sequence?

    ERIC Educational Resources Information Center

    Berens, Melody S.; Kovelman, Ioulia; Petitto, Laura-Ann

    2013-01-01

    Is it best to learn reading in two languages simultaneously or sequentially? We observed second- and third-grade children in two-way "dual-language learning contexts": (a) 50:50 or Simultaneous dual-language (two languages within same developmental period) and (b) 90:10 or Sequential dual-language (one language, followed gradually by the other).…

  2. Speech Perception and Production by Sequential Bilingual Children: A Longitudinal Study of Voice Onset Time Acquisition

    ERIC Educational Resources Information Center

    McCarthy, Kathleen M.; Mahon, Merle; Rosen, Stuart; Evans, Bronwen G.

    2014-01-01

    The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this…

  3. A Longitudinal Study in Children with Sequential Bilateral Cochlear Implants: Time Course for the Second Implanted Ear and Bilateral Performance

    ERIC Educational Resources Information Center

    Reeder, Ruth M.; Firszt, Jill B.; Cadieux, Jamie H.; Strube, Michael J.

    2017-01-01

    Purpose: Whether, and if so when, a second-ear cochlear implant should be provided to older, unilaterally implanted children is an ongoing clinical question. This study evaluated rate of speech recognition progress for the second implanted ear and with bilateral cochlear implants in older sequentially implanted children and evaluated localization…

  4. PC_Eyewitness: evaluating the New Jersey method.

    PubMed

    MacLin, Otto H; Phelan, Colin M

    2007-05-01

    One important variable in eyewitness identification research is lineup administration procedure. Lineups administered sequentially (one at a time) have been shown to reduce the number of false identifications in comparison with those administered simultaneously (all at once). As a result, some policymakers have adopted sequential administration. However, they have made slight changes to the method used in psychology laboratories. Eyewitnesses in the field are allowed to take multiple passes through a lineup, whereas participants in the laboratory are allowed only one pass. PC_Eyewitness (PCE) is a computerized system used to construct and administer simultaneous or sequential lineups in both the laboratory and the field. It is currently being used in laboratories investigating eyewitness identification in the United States, Canada, and abroad. A modified version of PCE is also being developed for a local police department. We developed a new module for PCE, the New Jersey module, to examine the effects of a second pass. We found that the sequential advantage was eliminated when the participants were allowed to view the lineup a second time. The New Jersey module, and steps we are taking to improve on the module, are presented here and are being made available to the research and law enforcement communities.

  5. Comparing multiple imputation methods for systematically missing subject-level data.

    PubMed

    Kline, David; Andridge, Rebecca; Kaizar, Eloise

    2017-06-01

    When conducting research synthesis, the collection of studies that will be combined often does not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Temporal texture of associative encoding modulates recall processes.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following pair associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. The effect of sequential exposure of color conditions on time and accuracy of graphic symbol location.

    PubMed

    Alant, Erna; Kolatsis, Anna; Lilienfeld, Margi

    2010-03-01

    An important aspect in AAC concerns the user's ability to locate an aided visual symbol on a communication display in order to facilitate meaningful interaction with partners. Recent studies have suggested that the use of different colored symbols may be influential in the visual search process, and that this, in turn, will influence the speed and accuracy of symbol location. This study examined the role of color on rate and accuracy of identifying symbols on an 8-location overlay through the use of 3 color conditions (same, mixed, and unique). Sixty typically developing preschool children were exposed to two different sequential exposures (Set 1 and Set 2). Participants searched for a target stimulus (either meaningful symbols or arbitrary forms) in a stimulus array. Findings indicated that the sequential exposures (orderings) impacted both time and accuracy for both types of symbols within specific instances.

  8. The Symmetry of Adverse Local Tissue Reactions in Patients with Bilateral Simultaneous and Sequential ASR Hip Replacement.

    PubMed

    Madanat, Rami; Hussey, Daniel K; Donahue, Gabrielle S; Potter, Hollis G; Wallace, Robert; Bragdon, Charles R; Muratoglu, Orhun K; Malchau, Henrik

    2015-10-01

    The purpose of this study was to evaluate whether patients with bilateral metal-on-metal (MoM) hip replacements have symmetric adverse local tissue reactions (ALTRs) at follow-up. An MRI of both hips was performed at a mean time of six years after surgery in 43 patients. The prevalence and severity of ALTRs were found to be similar in simultaneous hips but differences were observed in sequential hips. The order and timing of sequential hip arthroplasties did not affect the severity of ALTRs. Thus, in addition to metal ion exposure from an earlier MoM implant other factors may also play a role in the progression of ALTRs. Bilateral implants should be given special consideration in risk stratification algorithms for management of patients with MoM hip arthroplasty. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type of simulation path at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence with regard to the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent from the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
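    As a small illustration of the aggregation step mentioned above, the sketch below applies one simple form of log-linear pooling to two discrete probability estimates; the facies probabilities and weights are made-up numbers, and the paper's operator acts on its own conditional distributions rather than this toy example.

```python
# Minimal sketch of weighted log-linear pooling of probability estimates.
import numpy as np

def log_linear_pool(prob_list, weights):
    logp = sum(w * np.log(np.clip(p, 1e-12, None))
               for p, w in zip(prob_list, weights))
    pooled = np.exp(logp - logp.max())     # subtract max for numerical stability
    return pooled / pooled.sum()

p_kriging = np.array([0.60, 0.40])         # facies probabilities from hard data
p_geophys = np.array([0.30, 0.70])         # facies probabilities from geophysics
print(log_linear_pool([p_kriging, p_geophys], [0.8, 0.2]))
```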

  10. Diagnostics for Confounding of Time-varying and Other Joint Exposures.

    PubMed

    Jackson, John W

    2016-11-01

    The effects of joint exposures (or exposure regimes) include those of adhering to assigned treatment versus placebo in a randomized controlled trial, duration of exposure in a cohort study, interactions between exposures, and direct effects of exposure, among others. Unlike the setting of a single point exposure (e.g., propensity score matching), there are few tools to describe confounding for joint exposures or how well a method resolves it. Investigators need tools that describe confounding in ways that are conceptually grounded and intuitive for those who read, review, and use applied research to guide policy. We revisit the implications of exchangeability conditions that hold in sequentially randomized trials, and the bias structure that motivates the use of g-methods, such as marginal structural models. From these, we develop covariate balance diagnostics for joint exposures that can (1) describe time-varying confounding, (2) assess whether covariates are predicted by prior exposures given their past, the indication for g-methods, and (3) describe residual confounding after inverse probability weighting. For each diagnostic, we present time-specific metrics that encompass a wide class of joint exposures, including regimes of multivariate time-varying exposures in censored data, with multivariate point exposures as a special case. We outline how to estimate these directly or with regression and how to average them over person-time. Using a simulated example, we show how these metrics can be presented graphically. This conceptually grounded framework can potentially aid the transparent design, analysis, and reporting of studies that examine joint exposures. We provide easy-to-use tools to implement it.
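    The sketch below computes one common balance metric of this general kind, a weighted standardized mean difference of a covariate across an exposure before and after inverse-probability weighting; the simulated data and the specific metric are assumptions for illustration, not the paper's exact diagnostics.

```python
# Minimal sketch: covariate balance via a weighted standardized mean difference.
import numpy as np

def weighted_smd(x, a, w):
    m1 = np.average(x[a == 1], weights=w[a == 1])
    m0 = np.average(x[a == 0], weights=w[a == 0])
    v1 = np.average((x[a == 1] - m1) ** 2, weights=w[a == 1])
    v0 = np.average((x[a == 0] - m0) ** 2, weights=w[a == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

rng = np.random.default_rng(0)
x = rng.normal(size=5000)                       # a confounder
pscore = 1 / (1 + np.exp(-1.5 * x))             # exposure depends on the confounder
a = (rng.random(5000) < pscore).astype(int)
ipw = np.where(a == 1, 1 / pscore, 1 / (1 - pscore))

print("unweighted SMD:", round(weighted_smd(x, a, np.ones(5000)), 3))
print("IP-weighted SMD:", round(weighted_smd(x, a, ipw), 3))
```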

  11. Time-elapsed screw insertion with microCT imaging.

    PubMed

    Ryan, M K; Mohtar, A A; Cleek, T M; Reynolds, K J

    2016-01-25

    Time-elapsed analysis of bone is an innovative technique that uses sequential image data to analyze bone mechanics under a given loading regime. This paper presents the development of a novel device capable of performing step-wise screw insertion into excised bone specimens, within the microCT environment, whilst simultaneously recording insertion torque, compression under the screw head and rotation angle. The system is computer controlled and screw insertion is performed in incremental steps of insertion torque. A series of screw insertion tests to failure were performed (n=21) to establish a relationship between the torque at head contact and stripping torque (R(2)=0.89). The test-device was then used to perform step-wise screw insertion, stopping at intervals of 20%, 40%, 60% and 80% between screw head contact and screw stripping. Image data-sets were acquired at each of these time-points as well as at head contact and post-failure. Examination of the image data revealed the trabecular deformation as a result of increased insertion torque was restricted to within 1mm of the outer diameter of the screw thread. Minimal deformation occurred prior to the step between the 80% time-point and post-failure. The device presented has allowed, for the first time, visualization of the micro-mechanical response in the peri-implant bone with increased tightening torque. Further testing on more samples is expected to increase our understanding of the effects of increased tightening torque at the micro-structural level, and the failure mechanisms of trabeculae. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Two-craft Coulomb formation study about circular orbits and libration points

    NASA Astrophysics Data System (ADS)

    Inampudi, Ravi Kishore

    This dissertation investigates the dynamics and control of a two-craft Coulomb formation in circular orbits and at libration points; it addresses relative equilibria, stability and optimal reconfigurations of such formations. The relative equilibria of a two-craft tether formation connected by line-of-sight elastic forces moving in circular orbits and at libration points are investigated. In circular Earth orbits and Earth-Moon libration points, the radial, along-track, and orbit normal great circle equilibria conditions are found. An example of modeling the tether force using Coulomb force is discussed. Furthermore, the non-great-circle equilibria conditions for a two-spacecraft tether structure in circular Earth orbit and at collinear libration points are developed. Then the linearized dynamics and stability analysis of a 2-craft Coulomb formation at Earth-Moon libration points are studied. For orbit-radial equilibrium, Coulomb forces control the relative distance between the two satellites. The gravity gradient torques on the formation due to the two planets help stabilize the formation. Similar analysis is performed for along-track and orbit-normal relative equilibrium configurations. Where necessary, the craft use a hybrid thrusting-electrostatic actuation system. The two-craft dynamics at the libration points provide a general framework with circular Earth orbit dynamics forming a special case. In the presence of differential solar drag perturbations, a Lyapunov feedback controller is designed to stabilize a radial equilibrium, two-craft Coulomb formation at collinear libration points. The second part of the thesis investigates optimal reconfigurations of two-craft Coulomb formations in circular Earth orbits by applying nonlinear optimal control techniques. The objective of these reconfigurations is to maneuver the two-craft formation between two charged equilibria configurations. The reconfiguration of spacecraft is posed as an optimization problem using the calculus of variations approach. The optimality criteria are minimum time, minimum acceleration of the separation distance, minimum Coulomb and electric propulsion fuel usage, and minimum electrical power consumption. The continuous time problem is discretized using a pseudospectral method, and the resulting finite dimensional problem is solved using a sequential quadratic programming algorithm. The software package, DIDO, implements this approach. This second part illustrates how pseudospectral methods significantly simplify the solution-finding process.

  13. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    PubMed Central

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significant negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522

  14. Efficient sequential and parallel algorithms for record linkage.

    PubMed

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
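    The toy sketch below strings together the main ingredients described above, exact-duplicate elimination, a similarity graph built within blocks, and connected components via union-find; the blocking key, similarity measure, threshold, and sample records are assumptions, not the authors' implementation.

```python
# Toy sketch of record linkage: dedupe, block, link similar pairs, cluster.
from collections import defaultdict
from difflib import SequenceMatcher

records = ["john smith 1970", "jon smith 1970", "john smith 1970",
           "mary jones 1985", "marie jones 1985", "pat doe 2001"]

unique = sorted(set(records))                 # drop exact duplicates first

blocks = defaultdict(list)                    # crude blocking on the first letter
for i, rec in enumerate(unique):
    blocks[rec[0]].append(i)

parent = list(range(len(unique)))             # union-find forest
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
def union(i, j):
    parent[find(i)] = find(j)

for idx in blocks.values():                   # link similar records within blocks
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if SequenceMatcher(None, unique[idx[a]], unique[idx[b]]).ratio() > 0.8:
                union(idx[a], idx[b])

clusters = defaultdict(list)                  # connected components = linked entities
for i, rec in enumerate(unique):
    clusters[find(i)].append(rec)
print(list(clusters.values()))
```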

  15. Work-Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress.

    PubMed

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work-family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work-family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work-family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significant negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated the sequential mediating effect of negative affect and stress in the relationship between work-family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work-family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.

  16. Stability of tacrolimus solutions in polyolefin containers.

    PubMed

    Lee, Jun H; Goldspiel, Barry R; Ryu, Sujung; Potti, Gopal K

    2016-02-01

    Results of a study to determine the stability of tacrolimus solutions stored in polyolefin containers under various temperature conditions are reported. Triplicate solutions of tacrolimus (0.001, 0.01, and 0.1 mg/mL) in 0.9% sodium chloride injection or 5% dextrose injection were prepared in polyolefin containers. Some samples were stored at room temperature (20-25 °C); others were refrigerated (2-8 °C) for 20 hours and then stored at room temperature for up to 28 hours. The solutions were analyzed by stability-indicating high-performance liquid chromatography (HPLC) assay at specified time points over 48 hours. Solution pH was measured and containers were visually inspected at each time point. Stability was defined as retention of at least 90% of the initial tacrolimus concentration. All tested solutions retained over 90% of the initial tacrolimus concentration at all time points, with the exception of the 0.001-mg/mL solution prepared in 0.9% sodium chloride injection, which was deemed unstable beyond 24 hours. At all evaluated concentrations, mean solution pH values did not change significantly over 48 hours; no particle formation was detected. During storage in polyolefin bags at room temperature, a 0.001-mg/mL solution of tacrolimus was stable for 24 hours when prepared in 0.9% sodium chloride injection and for at least 48 hours when prepared in 5% dextrose injection. Solutions of 0.01 and 0.1 mg/mL prepared in either diluent were stable for at least 48 hours, and the 0.01-mg/mL tacrolimus solution was also found to be stable throughout a sequential temperature protocol. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  17. Induction of olfaction and cancer-related genes in mice fed a high-fat diet as assessed through the mode-of-action by network identification analysis.

    PubMed

    Choi, Youngshim; Hur, Cheol-Goo; Park, Taesun

    2013-01-01

    The pathophysiological mechanisms underlying the development of obesity and metabolic diseases are not well understood. To gain more insight into the genetic mediators associated with the onset and progression of diet-induced obesity and metabolic diseases, we studied the molecular changes in response to a high-fat diet (HFD) by using a mode-of-action by network identification (MNI) analysis. Oligo DNA microarray analysis was performed on visceral and subcutaneous adipose tissues and muscles of male C57BL/6N mice fed a normal diet or HFD for 2, 4, 8, and 12 weeks. Each of these data was queried against the MNI algorithm, and the lists of top 5 highly ranked genes and gene ontology (GO)-annotated pathways that were significantly overrepresented among the 100 highest ranked genes at each time point in the 3 different tissues of mice fed the HFD were considered in the present study. The 40 highest ranked genes identified by MNI analysis at each time point in the different tissues of mice with diet-induced obesity were subjected to clustering based on their temporal patterns. On the basis of the above-mentioned results, we investigated the sequential induction of distinct olfactory receptors and the stimulation of cancer-related genes during the development of obesity in both adipose tissues and muscles. The top 5 genes recognized using the MNI analysis at each time point and gene cluster identified based on their temporal patterns in the peripheral tissues of mice provided novel and often surprising insights into the potential genetic mediators for obesity progression.

  18. Effects of a web-based tailored multiple-lifestyle intervention for adults: a two-year randomized controlled trial comparing sequential and simultaneous delivery modes.

    PubMed

    Schulz, Daniela N; Kremers, Stef P J; Vandelanotte, Corneel; van Adrichem, Mathieu J G; Schneider, Francine; Candel, Math J J M; de Vries, Hein

    2014-01-27

    Web-based computer-tailored interventions for multiple health behaviors can have a significant public health impact. Yet, few randomized controlled trials have tested this assumption. The objective of this paper was to test the effects of a sequential and simultaneous Web-based tailored intervention on multiple lifestyle behaviors. A randomized controlled trial was conducted with 3 tailoring conditions (ie, sequential, simultaneous, and control conditions) in the Netherlands in 2009-2012. Follow-up measurements took place after 12 and 24 months. The intervention content was based on the I-Change model. In a health risk appraisal, all respondents (N=5055) received feedback on their lifestyle behaviors that indicated whether they complied with the Dutch guidelines for physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking. Participants in the sequential (n=1736) and simultaneous (n=1638) conditions received tailored motivational feedback to change unhealthy behaviors one at a time (sequential) or all at the same time (simultaneous). Mixed model analyses were performed as primary analyses; regression analyses were done as sensitivity analyses. An overall risk score was used as outcome measure, then effects on the 5 individual lifestyle behaviors were assessed and a process evaluation was performed regarding exposure to and appreciation of the intervention. Both tailoring strategies were associated with small self-reported behavioral changes. The sequential condition had the most significant effects compared to the control condition after 12 months (T1, effect size=0.28). After 24 months (T2), the simultaneous condition was most effective (effect size=0.18). All 5 individual lifestyle behaviors changed over time, but few effects differed significantly between the conditions. At both follow-ups, the sequential condition had significant changes in smoking abstinence compared to the simultaneous condition (T1 effect size=0.31; T2 effect size=0.41). The sequential condition was more effective in decreasing alcohol consumption than the control condition at 24 months (effect size=0.27). Change was predicted by the amount of exposure to the intervention (total visiting time: beta=-.06; P=.01; total number of visits: beta=-.11; P<.001). Both interventions were appreciated well by respondents without significant differences between conditions. Although evidence was found for the effectiveness of both programs, no simple conclusive finding could be drawn about which intervention mode was more effective. The best kind of intervention may depend on the behavior that is targeted or on personal preferences and motivation. Further research is needed to identify moderators of intervention effectiveness. The results need to be interpreted in view of the high and selective dropout rates, multiple comparisons, and modest effect sizes. However, a large number of people were reached at low cost and behavioral change was achieved after 2 years. Nederlands Trial Register: NTR 2168; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2168 (Archived by WebCite at http://www.webcitation.org/6MbUqttYB).

  19. Sequential algorithm analysis to facilitate selective biliary access for difficult biliary cannulation in ERCP: a prospective clinical study.

    PubMed

    Lee, Tae Hoon; Hwang, Soon Oh; Choi, Hyun Jong; Jung, Yunho; Cha, Sang Woo; Chung, Il-Kwun; Moon, Jong Ho; Cho, Young Deok; Park, Sang-Heum; Kim, Sun-Joo

    2014-02-17

    Numerous clinical trials to improve the success rate of biliary access in difficult biliary cannulation (DBC) during ERCP have been reported. However, standard guidelines and sequential protocol analyses of the different methods remain limited. We planned to investigate a sequential protocol to facilitate selective biliary access for DBC during ERCP. This prospective clinical study enrolled 711 patients with naïve papillae at a tertiary referral center. If wire-guided cannulation was deemed to have failed due to the DBC criteria, then, according to the cannulation algorithm, early precut fistulotomy (EPF; cannulation time > 5 min, papillary contacts > 5 times, or hook-nose-shaped papilla), double-guidewire cannulation (DGC; unintentional pancreatic duct cannulation ≥ 3 times), and precut after placement of a pancreatic stent (PPS; if DGC was difficult or failed) were performed sequentially. The main outcome measurements were the technical success, procedure outcomes, and complications. Initially, a total of 140 (19.7%) patients with DBC underwent EPF (n = 71) and DGC (n = 69). Then, in the DGC group, 36 patients switched to PPS due to difficulty criteria. The successful biliary cannulation rate was 97.1% (136/140; 94.4% [67/71] with EPF, 47.8% [33/69] with DGC, and 100% [36/36] with PPS; P < 0.001). The mean successful cannulation time (standard deviation) was 559.4 (412.8) seconds in EPF, 314.8 (65.2) seconds in DGC, and 706.0 (469.4) seconds in PPS (P < 0.05). The DGC group had a relatively low successful cannulation rate (47.8%) but had a shorter cannulation time compared to the other groups due to early switching to the PPS method in difficult or failed DGC. Post-ERCP pancreatitis developed in 14 (10%) patients (9 mild, 1 moderate), which did not differ significantly among the groups (P = 0.870) or compared with the conventional group (P = 0.125). Based on the sequential protocol analysis, EPF, DGC, and PPS may be safe and feasible for DBC. The use of EPF in selected DBC criteria, DGC in unintentional pancreatic duct cannulations, and PPS in failed or difficult DGC may facilitate successful biliary cannulation.

  20. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
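    For readers unfamiliar with the dynamic-programming likelihood computation referred to above, the sketch below implements the scaled forward recursion for a discrete-output hidden Markov model; the two-state toy model and observation sequence are illustrative assumptions.

```python
# Minimal sketch: HMM log-likelihood via the scaled forward algorithm.
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """obs: symbol indices; pi: initial distribution; A[i, j]: transition
    probabilities; B[i, k]: emission probabilities."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(hmm_log_likelihood([0, 1, 1, 0, 1], pi, A, B))
```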

  1. Episodic-like memory trace in awake replay of hippocampal place cell activity sequences

    PubMed Central

    Takahashi, Susumu

    2015-01-01

    Episodic memory retrieval of events at a specific place and time is effective for future planning. Sequential reactivation of the hippocampal place cells along familiar paths while the animal pauses is well suited to such a memory retrieval process. It is, however, unknown whether this awake replay represents events occurring along the path. Using a subtask switching protocol in which the animal experienced three subtasks as ‘what’ information in a maze, I here show that the replay represents a trial type, consisting of path and subtask, in terms of neuronal firing timings and rates. The actual trial type to be rewarded could only be reliably predicted from replays that occurred at the decision point. This trial-type representation implies that not only ‘where and when’ but also ‘what’ information is contained in the replay. This result supports the view that awake replay is an episodic-like memory retrieval process. DOI: http://dx.doi.org/10.7554/eLife.08105.001 PMID:26481131

  2. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with commonly adopted detection methods based on statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were generated with an event-sequence generator based on Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented by the expected detection rate (Am) and the tested detection rate (Gm) parameters, respectively, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
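    To convey the flavor of sequential Bayesian detection on an event stream (not Candy's formulation or the paper's detector model), the sketch below updates the posterior over two Poisson-rate hypotheses event by event until one is strongly favored; the rates, prior, and stopping threshold are illustrative assumptions.

```python
# Hedged sketch: sequential Bayesian decision between "background only" and
# "background + source" from gamma-ray event inter-arrival times.
import numpy as np

rng = np.random.default_rng(0)
lam_bkg, lam_src = 5.0, 9.0          # counts/s: background vs background+source (assumed)
true_rate = 9.0                      # simulate a source actually being present
log_post = np.log([0.5, 0.5])        # flat prior over the two hypotheses

t = 0.0
for dt in rng.exponential(1.0 / true_rate, size=500):
    t += dt
    # exponential inter-arrival likelihood under each hypothesis
    log_post += np.log([lam_bkg * np.exp(-lam_bkg * dt),
                        lam_src * np.exp(-lam_src * dt)])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if post.max() > 0.99:            # sequential decision threshold
        break

print(f"decision after {t:.2f} s with P(source) = {post[1]:.3f}")
```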

  3. Use of the Sequential Organ Failure Assessment score for evaluating outcome among obstetric patients admitted to the intensive care unit.

    PubMed

    Jain, Shruti; Guleria, Kiran; Suneja, Amita; Vaid, Neelam B; Ahuja, Sharmila

    2016-03-01

    To evaluate the prognostic value of the Sequential Organ Failure Assessment (SOFA) score among obstetric patients admitted to the intensive care unit (ICU). A prospective study was conducted among 90 consecutive obstetric patients who were admitted to the ICU of Guru Teg Bahadur Hospital, Delhi, India, between October 6, 2010, and December 25, 2011. Maximum SOFA score was calculated for each of the six organ systems. Receiver operating characteristic curves were used to determine critical cutoff values for total, maximum total, and mean total SOFA scores at various time points. Total SOFA score at admission displayed an area under the curve (AUC) of 0.949, a cutoff value of at least 8.5, sensitivity of 86.7%, and specificity of 90.0%. Maximum total SOFA score had an AUC of 0.980, a cutoff value of at least 10.0, sensitivity of 96.7%, and specificity of 90.0%. Mean total SOFA score had an AUC of 0.997, a cutoff value of at least 9.0, sensitivity of 96.7%, and specificity of 96.7%. In terms of discriminatory power for predicting mortality among obstetric patients admitted to the ICU, total SOFA score at admission was the most relevant, simple, and accurate measure. Copyright © 2015 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
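    The sketch below shows how an AUC and a sensitivity/specificity cutoff of the kind reported above can be computed with scikit-learn, using simulated scores and outcomes rather than the study data; the Youden index is used here as one common cutoff criterion, which may differ from the study's method.

```python
# Illustrative sketch: ROC AUC and a Youden-index cutoff on simulated data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
died = rng.integers(0, 2, 90)                       # simulated ICU mortality
sofa = np.where(died == 1,
                rng.normal(11, 3, 90),
                rng.normal(6, 3, 90)).round()       # simulated admission scores

auc = roc_auc_score(died, sofa)
fpr, tpr, thr = roc_curve(died, sofa)
best = np.argmax(tpr - fpr)                         # Youden's J statistic
print(f"AUC={auc:.3f}, cutoff>={thr[best]:.1f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```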

  4. Synchrotron x-ray study of a low roughness and high efficiency K 2 CsSb photocathode during film growth

    DOE PAGES

    Xie, Junqi; Demarteau, Marcel; Wagner, Robert; ...

    2017-04-24

    Reduction of roughness to the nm level is critical for achieving the ultimate performance from photocathodes used in high gradient fields. The thrust of this paper is to explore the evolution of roughness during sequential growth, and to show that deposition of multilayer structures consisting of very thin reacted layers results in an nm-level smooth photocathode. Synchrotron x-ray methods were applied to study the multi-step growth process of a high efficiency K2CsSb photocathode. We observed a transition point of the Sb film grown on Si at a film thickness of approximately 40 angstroms, with the substrate temperature at 100 degrees C and the growth rate at 0.1 Å/s. The final K2CsSb photocathode exhibits a thickness of around five times that of the total deposited Sb film regardless of how the Sb film was grown. The film surface roughening process occurs first at the step when K diffuses into the crystalline Sb. Furthermore, the photocathode we obtained from the multi-step growth exhibits a roughness an order of magnitude lower than that from the normal sequential process. X-ray diffraction measurements show that the material goes through two structural changes of the crystalline phase during formation, from crystalline Sb to K3Sb and finally to K2CsSb.

  5. Synchrotron x-ray study of a low roughness and high efficiency K 2 CsSb photocathode during film growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Junqi; Demarteau, Marcel; Wagner, Robert

    Reduction of roughness to the nm level is critical to achieving the ultimate performance from photocathodes used in high gradient fields. The thrust of this paper is to explore the evolution of roughness during sequential growth and to show that deposition of multilayer structures consisting of very thin reacted layers results in an nm-level smooth photocathode. Synchrotron x-ray methods were applied to study the multi-step growth process of a high efficiency K2CsSb photocathode. We observed a transition point of the Sb film grown on Si at a film thickness of ~40 Å with the substrate temperature at 100 degrees C and the growth rate at 0.1 Å s-1. The final K2CsSb photocathode exhibits a thickness of around five times that of the total deposited Sb film regardless of how the Sb film was grown. The film surface roughening process occurs first at the step when K diffuses into the crystalline Sb. Furthermore, the photocathode we obtained from the multi-step growth exhibits roughness an order of magnitude lower than that from the normal sequential process. X-ray diffraction measurements show that the material goes through two structural changes of the crystalline phase during formation, from crystalline Sb to K3Sb and finally to K2CsSb.

  6. Sampling bird communities in bottomland hardwood forests of the Mississippi Alluvial Valley: Number of points visited versus number of visits to a point

    USGS Publications Warehouse

    Twedt, D.J.; Smith, W.P.; Cooper, R.J.; Ford, R.P.; Hamel, P.B.; Wiedenfeld, D.A.; Smith, Winston Paul

    1993-01-01

    Within each of 4 forest stands on Delta Experimental Forest (DEF), 25 points were visited 5 to 7 times from 8 May to 21 May 1991, and 6 times from 30 May to 12 June 1992. During each visit to a point, all birds detected, visually or aurally, at any distance were recorded during a 4-minute interval. Using these data, our objectives were to recommend the number of point counts and the number of visits to a point which provide the greatest efficiency for estimating the cumulative number of species in bottomland hardwood forest stands within the Mississippi Alluvial Valley, and to ascertain if increasing the number of visits to points is equivalent to adding more points. Because the total number of species detected in DEF differed between years, 39 species in 1991 and 55 species in 1992, we considered each year independently. Within each stand, we obtained bootstrap estimates of the mean cumulative number of species obtained from all possible combinations of six points and six visits (i.e., 36 means/stand). These bootstrap estimates were subjected to ANOVA; we modelled cumulative number of species as a function of the number of points visited, the number of visits to each point, and their interaction. As part of the same ANOVA we made an a priori, simultaneous comparison of the 15 possible reciprocal treatments (i.e., 1 point-2 visits vs. 2 points-1 visit, etc.). Results of analyses for each year were similar. Although no interaction was detected between the number of points and the number of visits, when reciprocals were compared, more points visited yielded a significantly greater cumulative number of species than more visits to each point. Significant differences were detected both among the number of points visited and among the number of visits to a point. Scheffe's test of differences among means indicated that the cumulative number of species increased significantly with each added point, through five points, but six points did not differ from five points in 1991. Similarly, the cumulative number of species increased significantly with each revisit, up to four visits, but four visits did not differ significantly from five visits. Starting with one point, which yielded about 33 percent of the total species pool when averaged among one through six points, each subsequent point resulted in an increase of about 9 percent, 5 percent, 3 percent, and 3 percent, respectively. Each sequential increase in the number of visits, however, only resulted in increases of 7 percent, 4 percent, 2 percent, and 2 percent of the total species pool.
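    The bootstrap described above can be sketched as follows; the detection matrix is simulated here (the DEF data are not reproduced), and the detection probability, point and visit counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, n_visits, n_species = 25, 6, 55
# detections[p, v, s] = True if species s was detected at point p on visit v
detections = rng.random((n_points, n_visits, n_species)) < 0.08

def bootstrap_cumulative_species(n_pts, n_vis, n_boot=500):
    totals = np.empty(n_boot)
    for b in range(n_boot):
        pts = rng.choice(n_points, size=n_pts, replace=False)
        vis = rng.choice(n_visits, size=n_vis, replace=False)
        sub = detections[np.ix_(pts, vis, np.arange(n_species))]
        totals[b] = np.any(sub, axis=(0, 1)).sum()   # cumulative species count
    return totals.mean()

for p, v in [(1, 2), (2, 1), (3, 2), (2, 3)]:         # reciprocal comparisons
    print(p, "points x", v, "visits ->", round(bootstrap_cumulative_species(p, v), 1))
```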

  7. Decoupling Actions from Consequences: Dorsal Hippocampal Lesions Facilitate Instrumental Performance, but Impair Behavioral Flexibility in Rats

    PubMed Central

    Busse, Sebastian; Schwarting, Rainer K. W.

    2016-01-01

    The present study is part of a series of experiments, where we analyze why and how damage of the rat’s dorsal hippocampus (dHC) can enhance performance in a sequential reaction time task (SRTT). In this task, sequences of distinct visual stimulus presentations are food-rewarded in a fixed-ratio-13-schedule. Our previous study (Busse and Schwarting, 2016) had shown that rats with lesions of the dHC show substantially shorter session times and post-reinforcement pauses (PRPs) than controls, which allows for more practice when daily training is kept constant. Since sequential behavior is based on instrumental performance, a sequential benefit might be secondary to that. In order to test this hypothesis in the present study, we performed two experiments, where pseudorandom rather than sequential stimulus presentation was used in rats with excitotoxic dorsal hippocampal lesions. Again, we found enhanced performance in the lesion-group in terms of shorter session times and PRPs. During the sessions we found that the lesion-group spent less time with non-instrumental behavior (i.e., grooming, sniffing, and rearing) after prolonged instrumental training. Also, such rats showed moderate evidence for an extinction impairment under devalued food reward conditions and significant deficits in a response-outcome (R-O)-discrimination task in comparison to a control-group. These findings suggest that facilitatory effects on instrumental performance after dorsal hippocampal lesions may be primarily a result of complex behavioral changes, i.e., reductions of behavioral flexibility and/or alterations in motivation, which then result in enhanced instrumental learning. PMID:27375453

  8. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
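    The per-pixel phase computation is the kind of data-parallel arithmetic that such GPU kernels exploit. A generic NumPy sketch of the common four-step phase-shifting formula (not the authors' CUDA implementation, and not the full multiview pipeline) is:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by pi/2 each."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic fringe images over a small patch.
true_phase = np.linspace(0, np.pi, 16).reshape(4, 4)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]
print(np.allclose(four_step_phase(I1, I2, I3, I4), true_phase, atol=1e-6))
```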

  9. Sequential parallel comparison design with binary and time-to-event outcomes.

    PubMed

    Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason

    2018-04-30

    Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo response. Sequential parallel comparison design is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
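    A hedged sketch of the pooling idea follows: under the asymptotic result above, the stage 1 and stage 2 statistics are approximately independent standard normal under the null, so a prespecified weighted combination yields a single P value. The weight and the two-proportion z statistics below are illustrative assumptions, not the authors' exact test.

```python
import numpy as np
from scipy import stats

def two_prop_z(x1, n1, x0, n0):
    """Standard two-sample z statistic for response rates (active vs placebo)."""
    p1, p0 = x1 / n1, x0 / n0
    p = (x1 + x0) / (n1 + n0)
    return (p1 - p0) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n0))

z1 = two_prop_z(x1=40, n1=100, x0=30, n0=100)   # stage 1: all randomized
z2 = two_prop_z(x1=18, n1=35, x0=10, n0=35)     # stage 2: placebo nonresponders
w = 0.6                                          # prespecified stage-1 weight
z = w * z1 + np.sqrt(1 - w**2) * z2              # still approximately N(0,1) under the null
print("combined z =", round(z, 3), "one-sided P =", round(1 - stats.norm.cdf(z), 4))
```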

  10. Probing finite coarse-grained virtual Feynman histories with sequential weak values

    NASA Astrophysics Data System (ADS)

    Georgiev, Danko; Cohen, Eliahu

    2018-05-01

    Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.

  11. A Bayesian sequential processor approach to spectroscopic portal system decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, K; Candy, J; Breitfeller, E

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models, and decision functions are discussed along with the first results of our research.

  12. ANALYSES OF RESPONSE–STIMULUS SEQUENCES IN DESCRIPTIVE OBSERVATIONS

    PubMed Central

    Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St. Peter; Bourret, Jason

    2009-01-01

    Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response–stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537
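    A minimal sketch of a lag-sequential computation is shown below: the conditional probability of a putative reinforcer at lag k after problem behavior is compared with its unconditional (base) probability. The event codes and the coded stream are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
stream = rng.choice(["behavior", "attention", "other"], size=500,
                    p=[0.2, 0.1, 0.7])

def lag_probability(stream, given, target, lag):
    """P(target at position i+lag | given at position i)."""
    idx = [i for i, e in enumerate(stream[:-lag]) if e == given]
    if not idx:
        return np.nan
    return np.mean([stream[i + lag] == target for i in idx])

base = np.mean(stream == "attention")
for lag in (1, 2, 3):
    p = lag_probability(stream, "behavior", "attention", lag)
    print(f"lag {lag}: P(attention | behavior) = {p:.3f} vs base {base:.3f}")
```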

  13. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing built on the representation of a radionuclide as a decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
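    The sequential likelihood ratio idea can be sketched on interarrival times alone (the patented processor also discriminates on energy amplitude and uses Bayesian channel updates): each photon event updates a log likelihood ratio for a target-present rate versus a background-only rate until one of Wald's two thresholds is crossed. Rates, error levels, and the simulated sequence are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
r0, r1 = 2.0, 6.0                 # background vs target+background count rates (1/s)
alpha, beta = 0.01, 0.01          # desired false-alarm and miss probabilities
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

interarrivals = rng.exponential(1.0 / r1, size=200)   # simulate a "target" EMS

llr = 0.0
for k, dt in enumerate(interarrivals, start=1):
    # log of the exponential-density ratio f1(dt)/f0(dt)
    llr += np.log(r1 / r0) - (r1 - r0) * dt
    if llr >= A:
        print(f"target declared after {k} events"); break
    if llr <= B:
        print(f"background declared after {k} events"); break
else:
    print("no decision within 200 events")
```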

  14. A Process Improvement Evaluation of Sequential Compression Device Compliance and Effects of Provider Intervention.

    PubMed

    Beachler, Jason A; Krueger, Chad A; Johnson, Anthony E

    This process improvement study sought to evaluate the compliance in orthopaedic patients with sequential compression devices and to monitor any improvement in compliance following an educational intervention. All non-intensive care unit orthopaedic primary patients were evaluated at random times and their compliance with sequential compression devices was monitored and recorded. Following a 2-week period of data collection, an educational flyer was displayed in every patient's room and nursing staff held an in-service training event focusing on the importance of sequential compression device use in the surgical patient. Patients were then monitored, again at random, and compliance was recorded. With the addition of a simple flyer and a single in-service on the importance of mechanical compression in the surgical patient, a significant improvement in compliance was documented at the authors' institution from 28% to 59% (p < .0001).

  15. Sequentially Executed Model Evaluation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time series, where evaluation is based on prior results combined with new data for this iteration. It has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is analyzed for anomalies.

  16. Music Education for Life: Core Music Education--Students' Civil Right

    ERIC Educational Resources Information Center

    Shuler, Scott C.

    2012-01-01

    Educators are obliged to stand up for children--to point out when the self-declared local "education emperor" (or mayor or governor) has no clothes. The so-called reform government has turned some local districts into a Wild West where schools share no common or sequential curriculum and all that matters is test scores. Music educators must join…

  17. Relations of Transtheoretical Model Stage, Self-Efficacy, and Voluntary Physical Activity in African American Preadolescents

    ERIC Educational Resources Information Center

    Annesi, James J.; Faigenbaum, Avery D.; Westcott, Wayne L.

    2010-01-01

    The transtheoretical model (TTM; Prochaska, DiClemente, & Norcross, 1992) suggests that, at any point, an individual is in one of five stages-of-change related to adopting a behavior. People sequentially advance in stage but may also maintain or even regress, based on personal and environmental factors (Nigg, 2005). A classic study published in…

  18. Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter

    2016-01-01

    This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch together ROMs at grid points to build a global LPV ASE ROM applicable to arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with a flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by 10X (from 180 to 19), and hence holds great promise for robust ASE controller synthesis and novel vehicle design.

  19. Managing numerical errors in random sequential adsorption

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Nowak, Aleksandra

    2016-09-01

    The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing guidance on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous, flat surfaces of different sizes.
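    A short sketch of a random sequential adsorption run for equal disks on a periodic square surface is given below; the side length, disk radius, and attempt budget are illustrative, and saturation is only approached rather than strictly reached.

```python
import numpy as np

rng = np.random.default_rng(5)
L, r, attempts = 20.0, 1.0, 50_000
centers = []

def overlaps(p):
    """True if disk at p overlaps any already-adsorbed disk (periodic boundaries)."""
    for q in centers:
        d = np.abs(np.asarray(p) - q)
        d = np.minimum(d, L - d)          # minimum-image distance
        if d @ d < (2 * r) ** 2:
            return True
    return False

for _ in range(attempts):
    p = rng.uniform(0, L, size=2)
    if not overlaps(p):
        centers.append(p)

packing_fraction = len(centers) * np.pi * r**2 / L**2
print(f"{len(centers)} disks, packing fraction ~ {packing_fraction:.3f}")
```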

  20. A Longitudinal Study in Adults with Sequential Bilateral Cochlear Implants: Time Course for Individual Ear and Bilateral Performance

    ERIC Educational Resources Information Center

    Reeder, Ruth M.; Firszt, Jill B.; Holden, Laura K.; Strube, Michael J.

    2014-01-01

    Purpose: The purpose of this study was to examine the rate of progress in the 2nd implanted ear as it relates to the 1st implanted ear and to bilateral performance in adult sequential cochlear implant recipients. In addition, this study aimed to identify factors that contribute to patient outcomes. Method: The authors performed a prospective…

  1. The Intelligent Management System: An Overview.

    DTIC Science & Technology

    1982-12-07

    comprises of hundreds of subprocesses concerned with bulb grasping, positioning, heating, cooling, etc. Each is sequentially related with others in time...activity. Activity schemata can be constructed into a network to define both parallel and sequential precedence, and hierarchical to describe...The flow of work is controlled by the "operation- lineup " associated with the product being manufactured. The operation- lineup specifies the sequence of

  2. Does pointing facilitate the recall of serial positions in visuospatial working memory?

    PubMed

    Spataro, Pietro; Marques, Valeria R S; Longobardi, Emiddia; Rossi-Arnaud, Clelia

    2015-09-01

    The present study examined the question of whether pointing enhances the serial recall of visuospatial positions. Thirty-six participants were presented with 40 target arrays varying in length from five to eight items, with each position appearing sequentially in red for 1 s. The task was to reproduce the order of presentation of the positions on a blank matrix. Results showed that, for five-, six-, and seven-item arrays, order memory was significantly better in the passive-view than in the pointing condition, and the serial position curves displayed both recency and primacy effects. Interestingly, the advantage of the passive-view condition was more pronounced in the early than in the late positions. For eight-item arrays, no significant differences were found between the passive-view and the pointing conditions. Overall, the present data provide no evidence in support of the view that pointing facilitates the recall of serial positions.

  3. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
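    Under simplified assumptions (ideal sampling clocks, no hardware effects, and not the authors' prototype), a block measurement matrix built from the Whittaker-Shannon interpolation formula can be sketched as follows: each physically acquired sample is written as a sinc-weighted combination of the unknown equivalent-time samples, and the blocks from all acquisition runs are stacked.

```python
import numpy as np

N = 256                      # equivalent-time (reconstruction) grid length
T_eq = 1.0                   # equivalent-time sampling period (arbitrary units)
n = np.arange(N)

def block_matrix(sample_times):
    """Rows map the equivalent-time samples x[n] to samples taken at sample_times."""
    t = np.asarray(sample_times)[:, None]
    return np.sinc((t - n[None, :] * T_eq) / T_eq)   # np.sinc is sin(pi x)/(pi x)

rng = np.random.default_rng(6)
# One RES acquisition run: a coarse sampling sequence with a random trigger offset.
offsets = rng.uniform(0, 8, size=4)
blocks = [block_matrix(off + 8 * np.arange(N // 8)) for off in offsets]
Phi = np.vstack(blocks)      # combined measurement matrix over all runs
print(Phi.shape)             # rows = acquired samples, columns = unknowns
```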

  4. [Bilateral cochlear implants in children: acquisition of binaural hearing].

    PubMed

    Ramos-Macías, Angel; Deive-Maggiolo, Leopoldo; Artiles-Cabrera, Ovidio; González-Aguado, Rocío; Borkoski-Barreiro, Silvia A; Masgoret-Palau, Elizabeth; Falcón-González, Juan C; Bueno-Yanes, Jorge

    2013-01-01

    Several studies have indicated the benefit of bilateral cochlear implants for the acquisition of binaural hearing and bilateralism. In children with cochlear implants, is it possible to achieve binaurality after a second implant? When is the ideal time to implant? The objective of this study was to analyse the binaural effect in children with bilateral implants and the differences between subjects with simultaneous implants and sequential implants with both short and long intervals. There were 90 patients, aged between 1 and 2 years at the first surgery, implanted between 2000 and 2008. Of these, 25 were unilateral users and 65 bilateral; 17 patients had received simultaneous implants, 29 had sequential implants within 12 months of the first one (short interimplant period) and 19 after 12 months (long period). All of them were tested for verbal perception in silence and in noise, and tonal threshold audiometry was performed. The perception test in silence showed a statistically significant difference (P=0.23) between the simultaneous and short-period sequential implant patients (mean: 84.67%) and the unilateral and long-period sequential implant patients (mean: 79.66%). Likewise, the perception test in noise showed a statistically significant difference (P=0.22) when comparing the simultaneously implanted and short-period sequential implants (mean: 77.17%) with the unilaterally implanted and long-period sequential ones (mean: 69.32%). The simultaneous and short-period sequential implants acquired the advantages of binaural hearing. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  5. Microwave Ablation: Comparison of Simultaneous and Sequential Activation of Multiple Antennas in Liver Model Systems.

    PubMed

    Harari, Colin M; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T; Lubner, Meghan G; Hinshaw, J Louis; Ziemlewicz, Timothy; Brace, Christopher L

    2016-01-01

    To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. On average, ablations created by using the simultaneous power delivery technique were larger than those with the sequential technique (P < .05). Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas (P < .05). Ablations were generally smaller in vivo compared with ex vivo. The use of multiple antennas and simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015.

  6. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is assessed relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods require excessive processing time to converge; the PJ method, however, reduces the computational time to some extent for large grid sizes.
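    A minimal 2D analogue (the paper works in 3D and in MATLAB) comparing Jacobi and Gauss-Seidel sweeps on a finite-difference Poisson problem is sketched below; grid size, source term, and stopping tolerance are illustrative, and the point is only that Gauss-Seidel typically needs fewer iterations while the Jacobi update is the naturally parallel one.

```python
import numpy as np

def solve(n=30, tol=1e-4, method="jacobi", max_iter=20_000):
    h = 1.0 / (n + 1)
    f = np.ones((n, n))                  # unit source term
    u = np.zeros((n + 2, n + 2))         # includes zero Dirichlet boundary
    for it in range(1, max_iter + 1):
        if method == "jacobi":
            new = u.copy()
            new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:] + h * h * f)
            diff = np.abs(new - u).max()
            u = new
        else:                            # Gauss-Seidel: in-place sweep
            diff = 0.0
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    new_val = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                      u[i, j - 1] + u[i, j + 1] +
                                      h * h * f[i - 1, j - 1])
                    diff = max(diff, abs(new_val - u[i, j]))
                    u[i, j] = new_val
        if diff < tol:
            return it
    return max_iter

print("Jacobi iterations:      ", solve(method="jacobi"))
print("Gauss-Seidel iterations:", solve(method="gs"))
```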

  7. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    PubMed

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  8. Comprehensive proteomic analysis of Penicillium verrucosum.

    PubMed

    Nöbauer, Katharina; Hummel, Karin; Mayrhofer, Corina; Ahrens, Maike; Setyabudi, Francis M C; Schmidt-Heydt, Markus; Eisenacher, Martin; Razzazi-Fazeli, Ebrahim

    2017-05-01

    Mass spectrometric identification of proteins in species lacking validated sequence information is a major problem in veterinary science. In the present study, we used ochratoxin A producing Penicillium verrucosum to identify and quantitatively analyze proteins of an organism with yet no protein information available. The work presented here aimed to provide a comprehensive protein identification of P. verrucosum using shotgun proteomics. We were able to identify 3631 proteins in an "ab initio" translated database from DNA sequences of P. verrucosum. Additionally, a sequential window acquisition of all theoretical fragment-ion spectra analysis was done to find differentially regulated proteins at two different time points of the growth curve. We compared the proteins at the beginning (day 3) and at the end of the log phase (day 12). © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Equilibria, prudent compromises, and the "waiting" game.

    PubMed

    Sim, Kwang Mong

    2005-08-01

    While evaluation of many e-negotiation agents are carried out through empirical studies, this work supplements and complements existing literature by analyzing the problem of designing market-driven agents (MDAs) in terms of equilibrium points and stable strategies. MDAs are negotiation agents designed to make prudent compromises taking into account factors such as time preference, outside option, and rivalry. This work shows that 1) in a given market situation, an MDA negotiates optimally because it makes minimally sufficient concession, and 2) by modeling negotiation of MDAs as a game gamma of incomplete information, it is shown that the strategies adopted by MDAs are stable. In a bilateral negotiation, it is proven that the strategy pair of two MDAs forms a sequential equilibrium for gamma. In a multilateral negotiation, it is shown that the strategy profile of MDAs forms a market equilibrium for gamma.

  10. 'Sticking to a healthy diet is easier for me when I exercise regularly': cognitive transfer between physical exercise and healthy nutrition.

    PubMed

    Fleig, Lena; Kerschreiter, Rudolf; Schwarzer, Ralf; Pomp, Sarah; Lippke, Sonia

    2014-01-01

    Long-term rehabilitation success depends on regular exercise and healthy nutrition. The present study introduces a new framework to explain this association on a psychosocial level. The exercise-nutrition relationship was investigated by exploring the sequential mediation of habit strength and transfer cognitions. Analyses were performed at two measurement points in time (at 12 and 18 months after rehabilitation), involving 470 medical rehabilitation patients who participated in an exercise intervention. Patients filled in paper-pencil questionnaires assessing exercise (t1) and habit strength, transfer cognitions and healthy nutrition at follow-up (t2). Habit strength and transfer cognitions mediated the relationship between exercise and nutrition. Findings suggest that habit strength and transfer cognitions are important factors underlying the relationship between exercise and nutrition.

  11. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank the primary contributors to integrated drag over the vehicle's ascent trajectory using an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. Subject matter experts were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.

  12. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
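    For illustration only, the following sketch uses SciPy's SLSQP sequential quadratic programming solver to recover two parameters of a toy exponential-decay forward model from noisy synthetic data with a quadratic regularization term; it is not the TD-RTE forward model, the GGMRF regularizer, or the adjoint-gradient scheme of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
t = np.linspace(0.1, 5.0, 60)
true_mu = np.array([1.5, 0.4])                       # stand-ins for "optical" parameters

def forward(mu):
    return mu[1] * np.exp(-mu[0] * t)                # toy time-resolved signal

data = forward(true_mu) + 0.01 * rng.standard_normal(t.size)

def objective(mu, lam=1e-3):
    # least-squares misfit plus a simple quadratic regularization term
    return np.sum((forward(mu) - data) ** 2) + lam * np.sum(mu ** 2)

result = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
                  bounds=[(0.01, 10.0), (0.01, 10.0)])
print("recovered parameters:", np.round(result.x, 3))
```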

  13. Computer-Based Radiographic Quantification of Joint Space Narrowing Progression Using Sequential Hand Radiographs: Validation Study in Rheumatoid Arthritis Patients from Multiple Institutions.

    PubMed

    Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide

    2017-10-01

    We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with a joint space narrowing progression index (JSNPI) obtained by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of the computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by the computer-based method and by visual scoring systems as the standard of reference. A JSNPI above the smallest detectable difference (SDD) defined JSN progression at the joint level. The sensitivity and specificity of the computer-based method for JSN progression were calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed according to the computer-based method, 11 joints according to the scoring systems, and 13 joints according to both methods. Based on the SDD, we found lower sensitivity and higher specificity, at 54.2% and 92.8%, respectively. At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity were 70.8% and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in follow-up assessment of joint damage in RA patients.

  14. The potential impact of immunization campaign budget re-allocation on global eradication of paediatric infectious diseases

    PubMed Central

    2011-01-01

    Background The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be over-looked. One example of a coordinated approach is an adaptive "sequential strategy": first, all annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is re-focussed on eradicating a second disease, etc. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Methods Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. Results For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy. We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries. We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Conclusions Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts. PMID:21955853

  15. The potential impact of immunization campaign budget re-allocation on global eradication of paediatric infectious diseases.

    PubMed

    Fitzpatrick, Tiffany; Bauch, Chris T

    2011-09-28

    The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be over-looked. One example of a coordinated approach is an adaptive "sequential strategy": first, all annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is re-focussed on eradicating a second disease, etc. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy. We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries. We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts.

  16. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to address the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  17. Octree-based, GPU implementation of a continuous cellular automaton for the simulation of complex, evolving surfaces

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-03-01

    Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as the Cellular Automata (CA). This method, however, has an intrinsic parallel updating nature and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based, CA simulations of complex, evolving surfaces into massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with a wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods are significantly benefited by the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.

  18. Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara

    2016-01-01

    The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. Using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
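    The underlying Wald test can be sketched on a stream of binary obstacle detections: p1 is an assumed per-frame detection probability when an obstacle is present, p0 the false-detection rate when it is absent, and the test stops at whichever Wald threshold is crossed first. All values and the simulated stream are illustrative, not the modified test or speed controller used on the rover.

```python
import numpy as np

rng = np.random.default_rng(8)
p0, p1 = 0.05, 0.6                      # detection probability: no obstacle vs obstacle
alpha, beta = 0.01, 0.01
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

detections = rng.random(100) < p0       # simulate a clear path (false positives only)

llr, decision = 0.0, "undecided"
for k, d in enumerate(detections, start=1):
    llr += np.log(p1 / p0) if d else np.log((1 - p1) / (1 - p0))
    if llr >= A:
        decision = f"obstacle (stop) after {k} frames"; break
    if llr <= B:
        decision = f"clear (keep driving) after {k} frames"; break
print(decision)
```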

  19. Displacement fields from point cloud data: Application of particle imaging velocimetry to landslide geodesy

    USGS Publications Warehouse

    Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno

    2012-01-01

    Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
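    The core PIV step can be sketched on gridded (rasterized) patches from two epochs: the displacement estimate is the location of the peak of their cross-correlation. The synthetic surfaces and the imposed pixel shifts below are illustrative; the study's sub-pixel refinement and the error estimation from the correlation shape are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(9)
patch1 = rng.standard_normal((64, 64))
patch2 = np.roll(patch1, shift=(3, -2), axis=(0, 1))   # "ground motion" of (3, -2) px

def cross_correlation_shift(a, b):
    """Integer-pixel shift of b relative to a from the cross-correlation peak."""
    a = a - a.mean()
    b = b - b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")  # correlation via convolution
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center

print("estimated displacement (rows, cols):", cross_correlation_shift(patch1, patch2))
```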

  20. Patterned Photostimulation with Digital Micromirror Devices to Investigate Dendritic Integration Across Branch Points

    PubMed Central

    Santos, M. Daniel; Tang, Cha-Min

    2011-01-01

    Light is a versatile and precise means to control neuronal excitability. The recent introduction of light-sensitive effectors such as channelrhodopsin and caged neurotransmitters has led to interest in developing better means to control patterns of light in space and time that are useful for experimental neuroscience. One conventional strategy, employed in confocal and 2-photon microscopy, is to focus light to a diffraction-limited spot and then scan that single spot sequentially over the region of interest. This approach becomes problematic if large areas have to be stimulated within a brief time window, a problem more applicable to photostimulation than to imaging. An alternate strategy is to project the complete spatial pattern on the target with the aid of a digital micromirror device (DMD). The DMD approach is appealing because the hardware components are relatively inexpensive and it is supported by commercial interests. Because such a system is not available for upright microscopes, we will discuss the critical issues in the construction and operation of such a DMD system. Even though we will be primarily describing the construction of the system for UV photolysis, the modifications for building the much simpler visible light system for optogenetic experiments will also be provided. The UV photolysis system was used to carry out experiments to study a fundamental question in neuroscience: how are spatially distributed inputs integrated across distal dendritic branch points? The results suggest that integration can be non-linear across branch points and that the supralinearity is largely mediated by NMDA receptors. PMID:21403635

  1. Recent social and biogeophysical changes in the Ganges-Brahmaputra-Meghna, Mekong, and Amazon deltas as inputs into evolutionary policy-making.

    NASA Astrophysics Data System (ADS)

    de Araujo Barbosa, C. C.; Hossain, S.; Szabo, S.; Matthews, Z.; Heard, S.; Dearing, J.

    2014-12-01

    Policy-making in social-ecological systems increasingly looks to iterative, evolutionary approaches that can address the inherent complexity of interactions between human wellbeing, agricultural and aquacultural production, and ecosystem services. Here we show how an analysis of available time-series in delta regions over past decades can provide important insight into the social-ecological system dynamics that result from the complexity. The presentation summarises the recent changes for major elements of each social-ecological system, for example demography, economy, health, climate, food, and water. Time-series data from official statistics, monitoring programmes and sequential satellite imagery are analysed to define the range of trends, the presence of change points, slow and fast variables, and the significant drivers of change. For example, in the Bangladesh delta zone, increasing gross domestic product and per capita income levels since the 1980s mirror rising levels of food and inland fish production. In contrast, non-food ecosystem services such as water availability, water quality and land stability have deteriorated. As a result, poverty alleviation is associated with environmental degradation. Trends in indicators of human wellbeing and ecosystem services point to widespread non-stationary dynamics governed by slowly changing variables with increased probability of systemic threshold changes/tipping points in the near future. We conclude by examining how the findings could feed into new management tools, such as system dynamic models and assessments of safe operating spaces. Such tools have the potential to help create policies that deliver alternative and sustainable paths for land management while accommodating social and environmental change.

  2. Lithospheric structure of Taiwan from gravity modelling and sequential inversion of seismological and gravity data

    NASA Astrophysics Data System (ADS)

    Masson, F.; Mouyen, M.; Hwang, C.; Wu, Y.-M.; Ponton, F.; Lehujeur, M.; Dorbath, C.

    2012-11-01

    Using a Bouguer anomaly map and a dense seismic data set, we have performed two studies in order to improve our knowledge of the deep structure of Taiwan. First, we model the Bouguer anomaly along a profile crossing the island using simple forward modelling. The modelling is 2D, with the hypothesis of cylindrical symmetry. Second, we present a joint analysis of gravity anomaly and seismic arrival time data recorded in Taiwan. An initial velocity model has been obtained by local earthquake tomography (LET) of the seismological data. The LET velocity model was used to construct an initial 3D gravity model, using a linear velocity-density relationship (Birch's law). The synthetic Bouguer anomaly calculated for this model has the same shape and wavelength as the observed anomaly. However, some characteristics of the anomaly map are not retrieved. To derive a crustal velocity/density model which accounts for both types of observations, we performed a sequential inversion of the seismological and gravity data. The variance reduction of the arrival time data for the final sequential model was comparable to the variance reduction obtained by simple LET. Moreover, the sequential model explained about 80% of the observed gravity anomaly. A new 3D model of the Taiwan lithosphere is presented.

  3. Analysis of trend in temperature and rainfall time series of an Indian arid region: comparative evaluation of salient techniques

    NASA Astrophysics Data System (ADS)

    Machiwal, Deepesh; Gupta, Ankit; Jha, Madan Kumar; Kamble, Trupti

    2018-04-01

    This study investigated trends in 35 years (1979-2013) of temperature (maximum, Tmax, and minimum, Tmin) and rainfall at annual and seasonal (pre-monsoon, monsoon, post-monsoon, and winter) scales for 31 grid points in a coastal arid region of India. Box-whisker plots of the annual temperature and rainfall time series depict systematic spatial gradients. Trends were examined by applying eight tests, such as Kendall rank correlation (KRC), Spearman rank order correlation (SROC), Mann-Kendall (MK), four modified MK tests, and innovative trend analysis (ITA). Trend magnitudes were quantified by Sen's slope estimator, and a new method was adopted to assess the significance of linear trends in MK-test statistics. It was found that significant serial correlation is prominent in the annual and post-monsoon Tmax and Tmin, and pre-monsoon Tmin. The KRC and MK tests yielded similar results in close resemblance with the SROC test. The performance of the two modified MK tests considering variance-correction approaches was found superior to the KRC, MK, modified MK with pre-whitening, and ITA tests. The performance of the original MK test is poor due to the presence of serial correlation, whereas the ITA method is over-sensitive in identifying trends. Significantly increasing trends are more prominent in Tmin than Tmax. Further, both the annual and monsoon rainfall time series have a significantly increasing trend of 9 mm year-1. The sequential significance of the linear trend in MK test statistics is very strong (R2 ≥ 0.90) in the annual and pre-monsoon Tmin (90% of grid points), and strong (R2 ≥ 0.75) in monsoon Tmax (68% of grid points), monsoon, post-monsoon, and winter Tmin (respectively 65, 55, and 48% of grid points), as well as in the annual and monsoon rainfall (respectively 68 and 61% of grid points). Finally, this study recommends use of the variance-corrected MK test for the precise identification of trends. It is emphasized that rising Tmax may hamper crop growth due to enhanced metabolic activity and shortened crop duration. Likewise, increased Tmin may result in lower crop and biomass yields owing to increased respiration.
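    For reference, a compact sketch of the unmodified Mann-Kendall statistic and Sen's slope estimator on one synthetic annual series is given below; the variance formula ignores ties and the serial-correlation (variance-correction) adjustments that the study recommends.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
years = np.arange(1979, 2014)
rainfall = 400 + 9.0 * (years - years[0]) + rng.normal(0, 60, years.size)

def mann_kendall(x):
    """Unmodified MK statistic with continuity correction (ties ignored)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * (1 - stats.norm.cdf(abs(z)))

def sens_slope(t, x):
    """Median of pairwise slopes."""
    return np.median([(x[j] - x[i]) / (t[j] - t[i])
                      for i in range(len(x) - 1) for j in range(i + 1, len(x))])

z, p = mann_kendall(rainfall)
print(f"MK z = {z:.2f}, p = {p:.4f}, Sen's slope = {sens_slope(years, rainfall):.1f} mm/yr")
```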

  4. Real-time monitoring of human blood clotting using a lateral excited film bulk acoustic resonator

    NASA Astrophysics Data System (ADS)

    Chen, Da; Wang, Jingjng; Wang, Peng; Guo, Qiuquan; Zhang, Zhen; Ma, Jilong

    2017-04-01

    Frequent assay of hemostatic status is an essential issue for the millions of patients using anticoagulant drugs. In this paper, we present a micro-fabricated film bulk acoustic sensor for the real-time monitoring of blood clotting and the measurement of hemostatic parameters. The device was made of an Au/ZnO/Si3N4 film stack and excited by a lateral electric field. It operated in a shear-mode resonance at a frequency of 1.42 GHz and had a quality factor of 342 in human blood. During the clotting process of blood, the resonant frequency decreased along with the change of blood viscosity and showed an apparent step-ladder curve, revealing the sequential clotting stages. An important hemostatic parameter, prothrombin time, was quantitatively determined from the frequency response for different dilutions of the blood samples. The effect of a typical anticoagulant drug (heparin) on the prothrombin time was shown as an example. The proposed sensor displayed good consistency and clinical comparability with standard coagulometric methods. Thanks to the availability of direct digital signals and its excellent potential for miniaturization and integration, the proposed sensor has promising applications in point-of-care coagulation technologies.

  5. Simultaneous and successive inoculations of yeasts and lactic acid bacteria on the fermentation of an unsulfited Tannat grape must

    PubMed Central

    Muñoz, Viviana; Beccaria, Bruno; Abreo, Eduardo

    2014-01-01

    Interactions between yeasts and lactic acid bacteria are strain-specific, and their outcome in simultaneous alcoholic-malolactic fermentations is expected to differ from the pattern observed in successive fermentations. One Oenococcus oeni strain, Lalvin VP41™, was inoculated with two Saccharomyces cerevisiae strains either simultaneously, three days after the yeast inoculation, or when alcoholic fermentation was close to finishing. Early bacterial inoculations with each yeast strain allowed for the growth of the bacterial populations, and the length of malolactic fermentation was reduced to six days. Alcoholic fermentation by the Lalvin ICV D80® yeast strain left the highest residual sugar, suggesting a negative effect of bacterial growth and malolactic activity on its performance. In sequential inoculations, the bacterial populations did not show actual growth with either yeast strain. In this strategy, both yeast strains finished the alcoholic fermentations, and malolactic fermentations took longer to finish. Lalvin ICV D80® allowed for higher viability and activity of the bacterial strain than Fermicru UY4® under the three inoculation strategies. This was beneficial for the sequential completion of both fermentations, but negatively affected the completion of alcoholic fermentation by Lalvin ICV D80® in the early bacteria additions. Conversely, Fermicru UY4®, which was rather inhibitory towards the bacteria, favored the timely completion of both fermentations simultaneously. As bacteria in early inoculations with low or no SO2 addition can be expected to multiply and interact with fermenting yeasts, not only must the yeast-bacterium strain combination and the time point of inoculation be considered, but also the amount of bacteria inoculated. PMID:24948914

  6. Avoiding Severe Toxicity From Combined BRAF Inhibitor and Radiation Treatment: Consensus Guidelines from the Eastern Cooperative Oncology Group (ECOG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anker, Christopher J., E-mail: chris.anker@UVMHealth.org; Grossmann, Kenneth F.; Atkins, Michael B.

    2016-06-01

    BRAF kinase gene V600 point mutations drive approximately 40% to 50% of all melanomas, and BRAF inhibitors (BRAFi) have been found to significantly improve survival outcomes. Although radiation therapy (RT) provides effective symptom palliation, there is a lack of toxicity and efficacy data when RT is combined with BRAFi, including vemurafenib and dabrafenib. This literature review provides a detailed analysis of potential increased dermatologic, pulmonary, neurologic, hepatic, esophageal, and bowel toxicity from the combination of BRAFi and RT for melanoma patients described in 27 publications. Despite 7 publications noting potential intracranial neurotoxicity, the rates of radionecrosis and hemorrhage from whole brain RT (WBRT), stereotactic radiosurgery (SRS), or both do not appear increased with concurrent or sequential administration of BRAFis. Almost all grade 3 dermatitis reactions occurred when RT and BRAFi were administered concurrently. Painful, disfiguring nondermatitis cutaneous reactions have been described from concurrent or sequential RT and BRAFi administration, which improved with topical steroids and time. Visceral toxicity has been reported with RT and BRAFi, with deaths possibly related to bowel perforation and liver hemorrhage. Increased severity of radiation pneumonitis with BRAFi is rare, but more concerning was a potentially related fatal pulmonary hemorrhage. Conversely, encouraging reports have described patients with leptomeningeal spread and unresectable lymphadenopathy rendered disease free from combined RT and BRAFi. Based on our review, the authors recommend holding BRAFi and/or MEK inhibitors ≥3 days before and after fractionated RT and ≥1 day before and after SRS. No fatal reactions have been described with a dose <4 Gy per fraction, and time off systemic treatment should be minimized. Future prospective data will serve to refine these recommendations.

  7. Avoiding Severe Toxicity From Combined BRAF Inhibitor and Radiation Treatment: Consensus Guidelines from the Eastern Cooperative Oncology Group (ECOG)

    PubMed Central

    Anker, Christopher J.; Grossmann, Kenneth F.; Atkins, Michael B.; Suneja, Gita; Tarhini, Ahmad A.; Kirkwood, John M.

    2016-01-01

    BRAF kinase gene V600 point mutations drive approximately 40% to 50% of all melanomas, and BRAF inhibitors (BRAFi) have been found to significantly improve survival outcomes. Although radiation therapy (RT) provides effective symptom palliation, there is a lack of toxicity and efficacy data when RT is combined with BRAFi, including vemurafenib and dabrafenib. This literature review provides a detailed analysis of potential increased dermatologic, pulmonary, neurologic, hepatic, esophageal, and bowel toxicity from the combination of BRAFi and RT for melanoma patients described in 27 publications. Despite 7 publications noting potential intracranial neurotoxicity, the rates of radionecrosis and hemorrhage from whole brain RT (WBRT), stereotactic radiosurgery (SRS), or both do not appear increased with concurrent or sequential administration of BRAFis. Almost all grade 3 dermatitis reactions occurred when RT and BRAFi were administered concurrently. Painful, disfiguring nondermatitis cutaneous reactions have been described from concurrent or sequential RT and BRAFi administration, which improved with topical steroids and time. Visceral toxicity has been reported with RT and BRAFi, with deaths possibly related to bowel perforation and liver hemorrhage. Increased severity of radiation pneumonitis with BRAFi is rare, but more concerning was a potentially related fatal pulmonary hemorrhage. Conversely, encouraging reports have described patients with leptomeningeal spread and unresectable lymphadenopathy rendered disease free from combined RT and BRAFi. Based on our review, the authors recommend holding BRAFi and/or MEK inhibitors ≥3 days before and after fractionated RT and ≥1 day before and after SRS. No fatal reactions have been described with a dose <4 Gy per fraction, and time off systemic treatment should be minimized. Future prospective data will serve to refine these recommendations. PMID:27131079

  8. Sequential quantum secret sharing in a noisy environment aided with weak measurements

    NASA Astrophysics Data System (ADS)

    Ray, Maharshi; Chatterjee, Sourav; Chakrabarty, Indranil

    2016-05-01

    In this work we give, for the first time, an (n,n)-threshold protocol for sequential secret sharing of quantum information. By sequential secret sharing we refer to a situation where the dealer does not have all the secrets at the same time at the beginning of the protocol; however, if the dealer wishes to share secrets in subsequent phases, she/he can realize this with the help of our protocol. We first present our protocol for three parties and later generalize it to the situation with more (n > 3) parties. Interestingly, we show that our sequential secret sharing protocol requires fewer quantum and classical resources than repeated use of existing protocols. Further, in a much more realistic situation, we consider the sharing of qubits through two kinds of noisy channels, namely the phase damping channel (PDC) and the amplitude damping channel (ADC). When we carry out the sequential secret sharing in the presence of noise, we observe that the fidelity of secret sharing at the kth iteration is independent of the effect of noise at the (k - 1)th iteration. In the case of the ADC, we find that the average fidelity of secret sharing drops to ½, which is equivalent to a random guess of the quantum secret. Interestingly, we find that by applying weak measurements one can enhance the average fidelity. This increase in average fidelity comes at a certain trade-off with the success probability of the weak measurements.
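
    A standard quantum-information check, not taken from the paper, of the ½ limit quoted above: for a qubit channel with Kraus operators {K_i}, the average fidelity is F_avg = ((1/d)·Σ_i |Tr K_i|² + 1)/(d + 1) with d = 2. The sketch below evaluates this for the amplitude damping channel only; it does not implement the secret-sharing protocol or the weak-measurement recovery.

```python
import numpy as np

def adc_average_fidelity(gamma: float) -> float:
    """Average fidelity of a single-qubit amplitude damping channel with damping gamma."""
    k0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])   # amplitude-damping Kraus operators
    k1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    entanglement_term = sum(abs(np.trace(k)) ** 2 for k in (k0, k1)) / 2.0
    return (entanglement_term + 1.0) / 3.0

for g in (0.0, 0.5, 1.0):
    print(g, adc_average_fidelity(g))   # 1.0, ~0.819, 0.5 (random-guess level at full damping)
```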

  9. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
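
    A minimal sketch of the two ideas highlighted in this record: collapse exact duplicates before any pairwise comparison, then link similar records in a graph and take connected components (here via a small union-find). The string-similarity function and the 0.9 threshold are illustrative stand-ins, not the paper's edit-distance variant or its blocking scheme.

```python
from difflib import SequenceMatcher

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    parent[find(parent, i)] = find(parent, j)

def link_records(records, threshold=0.9):
    unique = sorted(set(records))                 # exact-duplicate elimination
    parent = list(range(len(unique)))
    for i in range(len(unique)):
        for j in range(i + 1, len(unique)):       # quadratic for clarity; sorting/blocking would prune this
            if SequenceMatcher(None, unique[i], unique[j]).ratio() >= threshold:
                union(parent, i, j)
    clusters = {}
    for idx, rec in enumerate(unique):
        clusters.setdefault(find(parent, idx), []).append(rec)
    return list(clusters.values())                # each cluster is one linked entity

print(link_records(["john smith 1970", "john smith 1970", "jon smith 1970", "mary jones 1985"]))
```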

  10. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  11. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  12. Building emotional resilience over 14 sessions of emotion focused therapy: Micro-longitudinal analyses of productive emotional patterns.

    PubMed

    Pascual-Leone, A; Yeryomenko, N; Sawashima, T; Warwar, S

    2017-05-04

    Pascual-Leone and Greenberg's sequential model of emotional processing has been used to explore process in over 24 studies. This line of research shows that emotional processing in good psychotherapy often follows a sequential order, supporting a saw-toothed pattern of change within individual sessions (progressing "2-steps-forward, 1-step-back"). However, one cannot assume that local in-session patterns are scalable across an entire course of therapy. Thus, the primary objective of this exploratory study was to consider how the sequential patterns identified by Pascual-Leone may apply across entire courses of treatment. Intensive emotion coding in two separate single-case designs was submitted for quantitative analyses of longitudinal patterns. Comprehensive coding in these cases involved recording observations for every emotional event in an entire course of treatment (using the Classification of Affective-Meaning States), which were then treated as a 9-point ordinal scale. Applying multilevel modeling to each of the two cases showed significant patterns of change over a large number of sessions, and those patterns were either nested at the within-session level or observed at the broader session-by-session level of change. Examining successful treatment cases showed several theoretically coherent kinds of temporal patterns, although not always in the same case. Clinical or methodological significance of this article: This is the first paper to demonstrate systematic temporal patterns of emotion over the course of an entire treatment. (1) The study offers a proof of concept that longitudinal patterns in the micro-processes of emotion can be objectively derived and quantified. (2) It also shows that patterns in emotion may be identified on the within-session level, as well as the session-by-session level of analysis. (3) Finally, observed processes over time support the ordered pattern of emotional states hypothesized in Pascual-Leone and Greenberg's (2007) model of emotional processing.

  13. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.

  14. Prospective Study of Burn Wound Excision of the Hands

    DTIC Science & Technology

    1983-06-01

    (Only garbled abstract fragments are available in this record.) Recoverable fragments describe sequential removal of nonviable tissue from burned hands, escharotomies of the upper extremities when indicated, removal of dressings after several days followed by more vigorous physical therapy, mild compression wrapping with elevation, and electrocoagulation of bleeding points; the work was presented at the Forty-second Annual Session of The American Burn Association.

  15. Efficacy and safety of Postoperative Intravenous Parecoxib sodium Followed by ORal CElecoxib (PIPFORCE) post-total knee arthroplasty in patients with osteoarthritis: a study protocol for a multicentre, double-blind, parallel-group trial

    PubMed Central

    Zhuang, Qianyu; Bian, Yanyan; Wang, Wei; Jiang, Jingmei; Feng, Bin; Sun, Tiezheng; Lin, Jianhao; Zhang, Miaofeng; Yan, Shigui; Shen, Bin; Pei, Fuxing; Weng, Xisheng

    2016-01-01

    Introduction Total knee arthroplasty (TKA) has been regarded as one of the most painful orthopaedic surgeries. Although many surgeons sequentially use parecoxib and celecoxib as a routine strategy for postoperative pain control after TKA, high-quality evidence is still lacking to prove the effect of this sequential regimen, especially at the medium-term follow-up. The purpose of this study, therefore, is to evaluate efficacy and safety of postoperative intravenous parecoxib sodium followed by oral celecoxib in patients with osteoarthritis (OA) undergoing TKA. The hypothesis is that compared to placebo with opioids as rescue treatment, sequential use of parecoxib and celecoxib can achieve less morphine consumption over the postoperative 2 weeks, as well as better pain control, quicker functional recovery in the postoperative 6 weeks and less opioid-related adverse events during the 12-week recovery phase. Methods and analysis This study is designed as a multicentre, randomised, double-blind, parallel-group and placebo-controlled trial. The target sample size is 246. All participants who meet the study inclusion and exclusion criteria will be randomly assigned in a 1:1 ratio to either the parecoxib/celecoxib group or placebo group. The randomisation and allocation will be study site based. The study will consist of three phases: an initial screening phase; a 6-week double-blind treatment phase; and a 6-week follow-up phase. The primary end point is cumulative opioid consumption during 2 weeks postoperation. Secondary end points consist of the postoperative visual analogue scale score, knee joint function, quality of life, local skin temperature, erythrocyte sedimentation rate, C reactive protein, cytokines and blood coagulation parameters. Safety end points will be monitored too. Ethics and dissemination Ethics approval for this study has been obtained from the Ethics Committee, Peking Union Medical College Hospital, China (Protocol number: S-572). Study results will be available as published manuscripts and presentations at national and international meetings. Trial registration number NCT02198924. PMID:27609846

  16. Efficacy and safety of Postoperative Intravenous Parecoxib sodium Followed by ORal CElecoxib (PIPFORCE) post-total knee arthroplasty in patients with osteoarthritis: a study protocol for a multicentre, double-blind, parallel-group trial.

    PubMed

    Zhuang, Qianyu; Bian, Yanyan; Wang, Wei; Jiang, Jingmei; Feng, Bin; Sun, Tiezheng; Lin, Jianhao; Zhang, Miaofeng; Yan, Shigui; Shen, Bin; Pei, Fuxing; Weng, Xisheng

    2016-09-08

    Total knee arthroplasty (TKA) has been regarded as one of the most painful orthopaedic surgeries. Although many surgeons sequentially use parecoxib and celecoxib as a routine strategy for postoperative pain control after TKA, high-quality evidence is still lacking to prove the effect of this sequential regimen, especially at the medium-term follow-up. The purpose of this study, therefore, is to evaluate efficacy and safety of postoperative intravenous parecoxib sodium followed by oral celecoxib in patients with osteoarthritis (OA) undergoing TKA. The hypothesis is that compared to placebo with opioids as rescue treatment, sequential use of parecoxib and celecoxib can achieve less morphine consumption over the postoperative 2 weeks, as well as better pain control, quicker functional recovery in the postoperative 6 weeks and less opioid-related adverse events during the 12-week recovery phase. This study is designed as a multicentre, randomised, double-blind, parallel-group and placebo-controlled trial. The target sample size is 246. All participants who meet the study inclusion and exclusion criteria will be randomly assigned in a 1:1 ratio to either the parecoxib/celecoxib group or placebo group. The randomisation and allocation will be study site based. The study will consist of three phases: an initial screening phase; a 6-week double-blind treatment phase; and a 6-week follow-up phase. The primary end point is cumulative opioid consumption during 2 weeks postoperation. Secondary end points consist of the postoperative visual analogue scale score, knee joint function, quality of life, local skin temperature, erythrocyte sedimentation rate, C reactive protein, cytokines and blood coagulation parameters. Safety end points will be monitored too. Ethics approval for this study has been obtained from the Ethics Committee, Peking Union Medical College Hospital, China (Protocol number: S-572). Study results will be available as published manuscripts and presentations at national and international meetings. NCT02198924. Published by the BMJ Publishing Group Limited.

  17. Parents' Verbal and Nonverbal Caring Behaviors and Child Distress During Cancer-Related Port Access Procedures: A Time-Window Sequential Analysis.

    PubMed

    Bai, Jinbing; Harper, Felicity W K; Penner, Louis A; Swanson, Kristen; Santacroce, Sheila J

    2017-11-01

    Objective: To study the relationship between parental verbal and nonverbal caring behaviors and child distress during cancer-related port access placement using correlational and time-window sequential analyses.
    Design: Longitudinal, observational design.
    Setting: Children's Hospital of Michigan and St. Jude Children's Research Hospital.
    Sample: 43 child-parent dyads, each with two or three video recordings of the child undergoing cancer-related port placement.
    Methods: Two trained raters coded parent interaction behaviors and child distress using the Parent Caring Response Scoring System and Karmanos Child Coping and Distress Scale, respectively. Mixed modeling with generalized estimating equations examined the associations between parent interaction behaviors and parent distress, child distress, and child cooperation reported by multiple raters. Time-window sequential analyses were performed to investigate the temporal relationships in parent-child interactions within a five-second window.
    Main research variables: Parent caring behaviors, child distress, and child cooperation.
    Findings: Parent caring interaction behaviors were significantly correlated with parent distress, child distress, and child cooperation during repeated cancer port accessing. Sequential analyses showed that children were significantly less likely to display behavioral and verbal distress following parent caring behaviors than at any other time. If a child is already distressed, parent verbal and nonverbal caring behaviors can significantly reduce child behavioral and verbal distress.
    Conclusions: Parent caring behaviors, particularly the rarely studied nonverbal behaviors (e.g., eye contact, distance close to touch, supporting/allowing), can reduce the child's distress during cancer port accessing procedures.
    Implications for nursing: Studying parent-child interactions during painful cancer-related procedures can provide evidence to develop nursing interventions to support parents in caring for their child during painful procedures.
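
    A minimal sketch of the time-window idea described in this record: count how often a child-distress onset occurs within five seconds after a parent caring behavior, compared with the overall base rate. The event times and data format below are illustrative assumptions, not the study's coding scheme.

```python
# Onsets in seconds of coded events for one (hypothetical) recording.
PARENT_CARING = [12.0, 30.5, 47.0, 80.0]
CHILD_DISTRESS = [14.5, 33.0, 60.0, 95.0]
WINDOW_S = 5.0

def followed_within_window(triggers, targets, window):
    """Fraction of trigger events followed by at least one target event within `window` seconds."""
    hits = sum(any(0.0 <= t - trig <= window for t in targets) for trig in triggers)
    return hits / len(triggers)

print(followed_within_window(PARENT_CARING, CHILD_DISTRESS, WINDOW_S))  # 0.5 with these toy data
```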

  18. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    PubMed

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction by combining on-line derivatization, optically gated (OG) injection, and commercially available UV/Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring substrate consumption and product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrated the feasibility and reliability of integrating OG injection with UV/Vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
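
    A minimal sketch of how Michaelis constants could be extracted from the kind of continuously monitored rate data described in this record. The substrate concentrations and initial rates below are synthetic placeholders, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten initial-rate model: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

substrate_mM = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
initial_rate = np.array([0.9, 1.6, 2.5, 3.8, 4.6, 5.1, 5.5])   # arbitrary units, synthetic

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, substrate_mM, initial_rate, p0=(5.0, 0.5))
print(f"Vmax ≈ {vmax_fit:.2f}, Km ≈ {km_fit:.2f} mM")
```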

  19. A fast and accurate online sequential learning algorithm for feedforward networks.

    PubMed

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
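
    A minimal numpy sketch of the OS-ELM idea summarized in this record: a random, fixed hidden layer, analytically determined output weights from an initial batch, then recursive least-squares updates as new chunks arrive. Network sizes, the toy regression target, and the small ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 20
W = rng.normal(size=(n_in, n_hidden))          # random input weights (never retrained)
b = rng.normal(size=n_hidden)                  # random biases (never retrained)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid additive hidden nodes

def init_batch(X0, T0):
    H0 = hidden(X0)
    P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))   # small ridge for numerical stability
    beta = P @ H0.T @ T0
    return P, beta

def sequential_update(P, beta, Xk, Tk):
    Hk = hidden(Xk)
    K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - K @ Hk @ P                          # recursive least-squares covariance update
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)   # output-weight update using the new chunk only
    return P, beta

# Toy regression: y = sum of inputs, learned from an initial batch then one incoming chunk.
X0 = rng.normal(size=(50, n_in)); T0 = X0.sum(axis=1, keepdims=True)
X1 = rng.normal(size=(10, n_in)); T1 = X1.sum(axis=1, keepdims=True)
P, beta = init_batch(X0, T0)
P, beta = sequential_update(P, beta, X1, T1)
print(np.abs(hidden(X1) @ beta - T1).mean())    # residual on the new chunk (should be small here)
```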

  20. Effects of a Web-Based Tailored Multiple-Lifestyle Intervention for Adults: A Two-Year Randomized Controlled Trial Comparing Sequential and Simultaneous Delivery Modes

    PubMed Central

    Kremers, Stef PJ; Vandelanotte, Corneel; van Adrichem, Mathieu JG; Schneider, Francine; Candel, Math JJM; de Vries, Hein

    2014-01-01

    Background Web-based computer-tailored interventions for multiple health behaviors can have a significant public health impact. Yet, few randomized controlled trials have tested this assumption. Objective The objective of this paper was to test the effects of a sequential and simultaneous Web-based tailored intervention on multiple lifestyle behaviors. Methods A randomized controlled trial was conducted with 3 tailoring conditions (ie, sequential, simultaneous, and control conditions) in the Netherlands in 2009-2012. Follow-up measurements took place after 12 and 24 months. The intervention content was based on the I-Change model. In a health risk appraisal, all respondents (N=5055) received feedback on their lifestyle behaviors that indicated whether they complied with the Dutch guidelines for physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking. Participants in the sequential (n=1736) and simultaneous (n=1638) conditions received tailored motivational feedback to change unhealthy behaviors one at a time (sequential) or all at the same time (simultaneous). Mixed model analyses were performed as primary analyses; regression analyses were done as sensitivity analyses. An overall risk score was used as outcome measure, then effects on the 5 individual lifestyle behaviors were assessed and a process evaluation was performed regarding exposure to and appreciation of the intervention. Results Both tailoring strategies were associated with small self-reported behavioral changes. The sequential condition had the most significant effects compared to the control condition after 12 months (T1, effect size=0.28). After 24 months (T2), the simultaneous condition was most effective (effect size=0.18). All 5 individual lifestyle behaviors changed over time, but few effects differed significantly between the conditions. At both follow-ups, the sequential condition had significant changes in smoking abstinence compared to the simultaneous condition (T1 effect size=0.31; T2 effect size=0.41). The sequential condition was more effective in decreasing alcohol consumption than the control condition at 24 months (effect size=0.27). Change was predicted by the amount of exposure to the intervention (total visiting time: beta=–.06; P=.01; total number of visits: beta=–.11; P<.001). Both interventions were appreciated well by respondents without significant differences between conditions. Conclusions Although evidence was found for the effectiveness of both programs, no simple conclusive finding could be drawn about which intervention mode was more effective. The best kind of intervention may depend on the behavior that is targeted or on personal preferences and motivation. Further research is needed to identify moderators of intervention effectiveness. The results need to be interpreted in view of the high and selective dropout rates, multiple comparisons, and modest effect sizes. However, a large number of people were reached at low cost and behavioral change was achieved after 2 years. Trial Registration Nederlands Trial Register: NTR 2168; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2168 (Archived by WebCite at http://www.webcitation.org/6MbUqttYB). PMID:24472854

  1. Effects of musical training on sound pattern processing in high-school students.

    PubMed

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians and non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing sequential sound patterns to be recognized automatically over longer time periods than in non-musical counterparts.

  2. Perceptual Grouping Affects Pitch Judgments Across Time and Frequency

    PubMed Central

    Borchert, Elizabeth M. O.; Micheyl, Christophe; Oxenham, Andrew J.

    2010-01-01

    Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared listeners’ ability to detect differences in F0 between pairs of sequential or simultaneous tones that were filtered into separate, non-overlapping spectral regions. The timbre differences induced by filtering led to poor F0 discrimination in the sequential, but not the simultaneous, conditions. Temporal overlap of the two tones was not sufficient to produce good performance; instead performance appeared to depend on the two tones being integrated into the same perceptual object. The results confirm the difficulty of comparing the pitches of sequential sounds with different timbres and suggest that, for simultaneous sounds, pitch differences may be detected through a decrease in perceptual fusion rather than an explicit coding and comparison of the underlying F0s. PMID:21077719

  3. Microwave Ablation: Comparison of Simultaneous and Sequential Activation of Multiple Antennas in Liver Model Systems

    PubMed Central

    Harari, Colin M.; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T.; Lubner, Meghan G.; Hinshaw, J. Louis; Ziemlewicz, Timothy

    2016-01-01

    Purpose To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. Materials and Methods All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. Results On average, ablations created by using the simultaneous power delivery technique were larger than those with the sequential technique (P < .05). Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas (P < .05). Ablations were generally smaller in vivo compared with ex vivo. Conclusion The use of multiple antennas and simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015 PMID:26133361

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors will focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are the MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. They refer the reader to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (locations and strength history). It is anticipated that the largest portion of the CPU time will take place performing this computation. MCMC and SMC will require this computation to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they can compute plumes from unit impulse sources only. By using linear superposition, they can obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source that is located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field. The wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
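
    A minimal sketch of the superposition step described in this record: if g[k] is the concentration at a monitor k time steps after a unit-impulse release at a candidate source, then the predicted monitor time series for an arbitrary source strength history q is the discrete convolution of q with g; reciprocity would further allow g to be obtained from one reciprocal-plume run per monitor rather than one run per candidate source. The impulse-response and source-history values below are illustrative placeholders, not transport-model output.

```python
import numpy as np

g = np.array([0.0, 0.8, 0.5, 0.2, 0.05])      # unit-impulse response (Green's function) at the monitor
q = np.array([2.0, 0.0, 1.0, 3.0, 0.0, 0.0])  # proposed source strength history

# Linear superposition of shifted, scaled impulse responses = discrete convolution.
concentration = np.convolve(q, g)[: len(q)]   # predicted concentration time series at the monitor
print(concentration)
```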

  5. Adapting the Surgical Apgar Score for Perioperative Outcome Prediction in Liver Transplantation: A Retrospective Study

    PubMed Central

    Pearson, Amy C. S.; Subramanian, Arun; Schroeder, Darrell R.; Findlay, James Y.

    2017-01-01

    Background The surgical Apgar score (SAS) is a 10-point scale using the lowest heart rate, lowest mean arterial pressure, and estimated blood loss (EBL) during surgery to predict postoperative outcomes. The SAS has not yet been validated in liver transplantation patients, because typical blood loss usually exceeds the highest EBL category. Our primary aim was to develop a modified SAS for liver transplant (SAS-LT) by replacing the EBL parameter with volume of red cells transfused. We hypothesized that the SAS-LT would predict death or severe complication within 30 days of transplant with similar accuracy to current scoring systems. Methods A retrospective cohort of consecutive liver transplantations from July 2007 to November 2013 was used to develop the SAS-LT. The predictive ability of SAS-LT for early postoperative outcomes was compared with Model for End-stage Liver Disease, Sequential Organ Failure Assessment, and Acute Physiology and Chronic Health Evaluation III scores using multivariable logistic regression and receiver operating characteristic analysis. Results Of 628 transplants, death or serious perioperative morbidity occurred in 105 (16.7%). The SAS-LT (receiver operating characteristic area under the curve [AUC], 0.57) had similar predictive ability to Acute Physiology and Chronic Health Evaluation III, model for end-stage liver disease, and Sequential Organ Failure Assessment scores (0.57, 0.56, and 0.61, respectively). Seventy-nine (12.6%) patients were discharged from the ICU in 24 hours or less. These patients’ SAS-LT scores were significantly higher than those with a longer stay (7.0 vs 6.2, P < 0.01). The AUC on multivariable modeling remained predictive of early ICU discharge (AUC, 0.67). Conclusions The SAS-LT utilized simple intraoperative metrics to predict early morbidity and mortality after liver transplant with similar accuracy to other scoring systems at an earlier postoperative time point. PMID:29184910

  6. A versatile semi-permanent sequential bilayer/diblock polymer coating for capillary isoelectric focusing.

    PubMed

    Bahnasy, Mahmoud F; Lucy, Charles A

    2012-12-07

    A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we show the method development and optimization for capillary isoelectric focusing (cIEF) separations based on the developed sequential coating. Electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology using 2.5% (v/v) pH 3-10 carrier ampholyte. Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. The use of sequential extraction to evaluate the remediation potential of heavy metals from contaminated harbour sediment

    NASA Astrophysics Data System (ADS)

    Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.

    2003-05-01

    In this work, sequential extraction is performed on harbour sediment in order to evaluate the electrodialytic remediation potential of harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and after the sediment was treated with acid. The results from the sequential extraction show that 75% Zn and Pb and about 50% Cu are found in the most mobile phases in the original sediment, and more than 90% Zn and Pb and 75% Cu are found in the most mobile phases in the sediment treated with acid. Electrodialytic remediation experiments were performed. The method uses a low direct current as the cleaning agent, moving the heavy metals towards the anode or cathode according to their charge in the electric field. The electrodialytic experiments show that up to 50% Cu, 85% Zn and 60% Pb can be removed after 20 days. Thus, there is still potential for higher removal with some changes to the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential of harbour sediments.

  8. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by the use of samples of variable size, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To establish adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio', at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with each plot comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 samples for stands with low and high infestation, respectively.
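
    A minimal sketch of Wald-type sequential sampling boundaries for presence/absence counts, in the spirit of the plans described in this record, with alpha = beta = 0.10 as in the study. The p0/p1 infestation levels bracketing the economic threshold are illustrative assumptions, and the binomial likelihood below stands in for the paper's negative binomial SLRT formulation.

```python
import numpy as np

alpha = beta = 0.10
p0, p1 = 0.01, 0.04          # assumed "acceptable" and "action" proportions of infested plots

def sprt_decision(n_sampled, n_infested):
    """Wald SPRT after n_sampled plots: 'continue', 'no control needed', or 'apply control'."""
    llr = (n_infested * np.log(p1 / p0)
           + (n_sampled - n_infested) * np.log((1 - p1) / (1 - p0)))
    lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    if llr <= lower:
        return "no control needed"
    if llr >= upper:
        return "apply control"
    return "continue"

print(sprt_decision(80, 0))   # 'no control needed'
print(sprt_decision(30, 5))   # 'apply control'
```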

  9. Rapid-Testing Technology and Systems Improvement for the Elimination of Congenital Syphilis in Haiti: Overcoming the “Technology to Systems Gap”

    PubMed Central

    Benoit, Daphne; Zhou, Xi K.; Pape, Jean W.; Peeling, Rosanna W.; Fitzgerald, Daniel W.; Mate, Kedar S.

    2013-01-01

    Background. Despite the availability of rapid diagnostic tests and inexpensive treatment for pregnant women, maternal-child syphilis transmission remains a leading cause of perinatal morbidity and mortality in developing countries. In Haiti, more than 3000 babies are born with congenital syphilis annually. Methods and Findings. From 2007 to 2011, we used a sequential time series, multi-intervention study design in fourteen clinics throughout Haiti to improve syphilis testing and treatment in pregnancy. The two primary interventions were the introduction of a rapid point-of-care syphilis test and systems strengthening based on quality improvement (QI) methods. Syphilis testing increased from 91.5% before the introduction of the rapid diagnostic test to 95.9% after (P < 0.001), and further increased to 96.8% (P < 0.001) after the QI intervention. Despite high rates of testing across all time periods, syphilis treatment lagged behind and only increased from 70.3% to 74.7% after the introduction of rapid tests (P = 0.27), but it improved significantly from 70.2% to 84.3% (P < 0.001) after the systems strengthening QI intervention. Conclusion. Both point-of-care diagnostic testing and health systems-based quality improvement interventions can improve the delivery of specific evidence-based healthcare interventions to prevent congenital syphilis at scale in Haiti. Improved treatment rates for syphilis were seen only after the use of systems-based quality improvement approaches. PMID:26316955

  10. Rapid-Testing Technology and Systems Improvement for the Elimination of Congenital Syphilis in Haiti: Overcoming the "Technology to Systems Gap".

    PubMed

    Severe, Linda; Benoit, Daphne; Zhou, Xi K; Pape, Jean W; Peeling, Rosanna W; Fitzgerald, Daniel W; Mate, Kedar S

    2013-01-01

    Background. Despite the availability of rapid diagnostic tests and inexpensive treatment for pregnant women, maternal-child syphilis transmission remains a leading cause of perinatal morbidity and mortality in developing countries. In Haiti, more than 3000 babies are born with congenital syphilis annually. Methods and Findings. From 2007 to 2011, we used a sequential time series, multi-intervention study design in fourteen clinics throughout Haiti to improve syphilis testing and treatment in pregnancy. The two primary interventions were the introduction of a rapid point-of-care syphilis test and systems strengthening based on quality improvement (QI) methods. Syphilis testing increased from 91.5% before the introduction of the rapid diagnostic test to 95.9% after (P < 0.001), and further increased to 96.8% (P < 0.001) after the QI intervention. Despite high rates of testing across all time periods, syphilis treatment lagged behind and only increased from 70.3% to 74.7% after the introduction of rapid tests (P = 0.27), but it improved significantly from 70.2% to 84.3% (P < 0.001) after the systems strengthening QI intervention. Conclusion. Both point-of-care diagnostic testing and health systems-based quality improvement interventions can improve the delivery of specific evidence-based healthcare interventions to prevent congenital syphilis at scale in Haiti. Improved treatment rates for syphilis were seen only after the use of systems-based quality improvement approaches.

  11. Exploratory Spatial Analysis of in vitro Respiratory Syncytial Virus Co-infections

    PubMed Central

    Simeonov, Ivan; Gong, Xiaoyan; Kim, Oekyung; Poss, Mary; Chiaromonte, Francesca; Fricks, John

    2010-01-01

    The cell response to virus infection and virus perturbation of that response is dynamic and is reflected by changes in cell susceptibility to infection. In this study, we evaluated the response of human epithelial cells to sequential infections with human respiratory syncytial virus strains A2 and B to determine if a primary infection with one strain will impact the ability of cells to be infected with the second as a function of virus strain and time elapsed between the two exposures. Infected cells were visualized with fluorescent markers, and location of all cells in the tissue culture well were identified using imaging software. We employed tools from spatial statistics to investigate the likelihood of a cell being infected given its proximity to a cell infected with either the homologous or heterologous virus. We used point processes, K-functions, and simulation procedures designed to account for specific features of our data when assessing spatial associations. Our results suggest that intrinsic cell properties increase susceptibility of cells to infection, more so for RSV-B than for RSV-A. Further, we provide evidence that the primary infection can decrease susceptibility of cells to the heterologous challenge virus but only at the 16 h time point evaluated in this study. Our research effort highlights the merits of integrating empirical and statistical approaches to gain greater insight on in vitro dynamics of virus-host interactions. PMID:21994640
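
    A minimal sketch of a Ripley's K-type statistic for a point pattern of infected-cell coordinates, of the general kind used in this record to probe spatial association. The estimator below omits edge correction, and the coordinates and window size are illustrative placeholders, not the study's imaging data or its simulation-based inference.

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction) for a 2D point pattern."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    pair_counts = ((d < r) & (d > 0)).sum()          # ordered pairs within distance r
    intensity = n / area
    return pair_counts / (n * intensity)

rng = np.random.default_rng(1)
cells = rng.uniform(0, 100, size=(200, 2))           # 200 "infected cells" in a 100 x 100 window
for r in (5, 10, 20):
    print(r, ripley_k(cells, r, area=100 * 100), np.pi * r**2)   # compare to pi*r^2 under complete spatial randomness
```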

  12. Sequential lift and suture technique for post-LASIK corneal striae.

    PubMed

    Mackool, Richard J; Monsanto, Vivian R

    2003-04-01

    We describe a surgical technique to manage persistent corneal striae after laser in situ keratomileusis (LASIK). The sequential lift and suture technique reduces the time required for LASIK, eliminates the need to fixate the flap with forceps during suturing, and increases the accuracy of suture placement. The results in 10 eyes (9 patients) showed complete resolution of striae with improvement in subjective symptoms (glare and blurred vision) and best corrected visual acuity.

  13. Nonstandard convergence to jamming in random sequential adsorption: The case of patterned one-dimensional substrates

    NASA Astrophysics Data System (ADS)

    Verma, Arjun; Privman, Vladimir

    2018-02-01

    We study approach to the large-time jammed state of the deposited particles in the model of random sequential adsorption. The convergence laws are usually derived from the argument of Pomeau which includes the assumption of the dominance, at large enough times, of small landing regions into each of which only a single particle can be deposited without overlapping earlier deposited particles and which, after a certain time are no longer created by depositions in larger gaps. The second assumption has been that the size distribution of gaps open for particle-center landing in this large-time small-gaps regime is finite in the limit of zero gap size. We report numerical Monte Carlo studies of a recently introduced model of random sequential adsorption on patterned one-dimensional substrates that suggest that the second assumption must be generalized. We argue that a region exists in the parameter space of the studied model in which the gap-size distribution in the Pomeau large-time regime actually linearly vanishes at zero gap sizes. In another region, the distribution develops a threshold property, i.e., there are no small gaps below a certain gap size. We discuss the implications of these findings for new asymptotic power-law and exponential-modified-by-a-power-law convergences to jamming in irreversible one-dimensional deposition.
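
    A minimal Monte Carlo sketch of one-dimensional random sequential adsorption in its simplest "car parking" form: unit-length segments land at uniformly random positions and are accepted only if they do not overlap previously deposited segments. The patterned-substrate features analyzed in this record are omitted; substrate length and attempt count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 100.0                      # substrate length
attempts = 50_000
accepted = []                  # left edges of deposited unit-length segments

for _ in range(attempts):
    x = rng.uniform(0.0, L - 1.0)                     # left edge of the incoming segment
    if all(abs(x - y) >= 1.0 for y in accepted):      # reject if it overlaps any deposited segment
        accepted.append(x)

coverage = len(accepted) * 1.0 / L
print(len(accepted), coverage)     # coverage slowly approaches the Renyi jamming limit (~0.7476)
```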

  14. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123

  15. The timing of language learning shapes brain structure associated with articulation.

    PubMed

    Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Klein, Denise

    2016-09-01

    We compared the brain structure of highly proficient simultaneous (two languages from birth) and sequential (second language after age 5) bilinguals, who differed only in their degree of native-like accent, to determine how the brain develops when a skill is acquired from birth versus later in life. For the simultaneous bilinguals, gray matter density was increased in the left putamen, as well as in the left posterior insula, right dorsolateral prefrontal cortex, and left and right occipital cortex. For the sequential bilinguals, gray matter density was increased in the bilateral premotor cortex. Sequential bilinguals with better accents also showed greater gray matter density in the left putamen, and in several additional brain regions important for sensorimotor integration and speech-motor control. Our findings suggest that second language learning results in enhanced brain structure of specific brain areas, which depends on whether two languages are learned simultaneously or sequentially, and on the extent to which native-like proficiency is acquired.

  16. Sequential Anaerobic/Aerobic Digestion for Enhanced Carbon/Nitrogen Removal and Cake Odor Reduction.

    PubMed

    Ahmad, Muneer; Denee, Marco Abel; Jiang, Hao; Eskicioglu, Cigdem; Kadota, Paul; Gregonia, Theresa

    2016-12-01

    Anaerobic digestion (AD) has been proven to be an effective process for the treatment of wastewater sludge. However, it produces high levels of ammonia in the digester effluent, which may jeopardize meeting stringent nutrient discharge limits. In this study, the effects of sequential anaerobic/aerobic (AN/AERO) digestion and single-stage conventional AN digestion (as a control) were investigated on mixed (primary + secondary) sludge generated by the Annacis Island wastewater treatment plant (WWTP) (BC, Canada). An overall sludge retention time (SRT) of 22.5 days under three different scenarios was chosen based on the current operational SRT of the digesters at the Annacis Island WWTP. The steady-state results showed that sequential AN/AERO digestion configurations achieved up to 11% higher volatile solids (VS) removal and 72% lower ammonia generation than single-stage conventional AN digestion. Furthermore, the sequential AN/AERO system also showed enhanced dewaterability, improved fecal coliform destruction, and reduced digested cake odors compared with the control digesters.

  17. Three parameters optimizing closed-loop control in sequential segmental neuromuscular stimulation.

    PubMed

    Zonnevijlle, E D; Somia, N N; Perez Abadia, G; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H

    1999-05-01

    In conventional dynamic myoplasties, the force generation is poorly controlled. This causes unnecessary fatigue of the transposed/transplanted electrically stimulated muscles and causes damage to the involved tissues. We introduced sequential segmental neuromuscular stimulation (SSNS) to reduce muscle fatigue by allowing part of the muscle to rest periodically while the other parts work. Despite this improvement, we hypothesize that fatigue could be further reduced in some applications of dynamic myoplasty if the muscles were made to contract according to need. The first necessary step is to gain appropriate control over the contractile activity of the dynamic myoplasty. Therefore, closed-loop control was tested on a sequentially stimulated neosphincter to strive for the best possible control over the amount of generated pressure. A selection of parameters was validated for optimizing control. We concluded that the frequency of corrections, the threshold for corrections, and the transition time are meaningful parameters in the controlling algorithm of the closed-loop control in a sequentially stimulated myoplasty.

  18. Blocking for Sequential Political Experiments

    PubMed Central

    Moore, Sally A.

    2013-01-01

    In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
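
    A minimal sketch of covariate-adaptive sequential assignment in the spirit of the minimization and biased-coin procedures mentioned above (not the authors' sequential blocking method): each unit that "trickles in" is assigned, with high probability, to the arm that keeps the means of the continuous covariates most balanced. The imbalance measure, the biased-coin probability, and the treatment of empty arms are simplifications.

```python
import random

def assign_sequentially(covariates, n_arms=2, p_best=0.8, seed=0):
    """Assign units that arrive one at a time, choosing (with probability
    p_best) the arm that keeps the arm-wise means of the continuous
    covariates closest together; otherwise choose an arm at random."""
    rng = random.Random(seed)
    arms = [[] for _ in range(n_arms)]  # covariate vectors already assigned

    def imbalance(state):
        k = len(covariates[0])
        total = 0.0
        for j in range(k):
            means = [sum(v[j] for v in arm) / len(arm) if arm else 0.0
                     for arm in state]
            grand = sum(means) / len(means)
            total += sum((m - grand) ** 2 for m in means)
        return total

    assignments = []
    for x in covariates:
        scores = []
        for a in range(n_arms):
            trial = [list(arm) for arm in arms]
            trial[a].append(x)
            scores.append(imbalance(trial))
        best = min(range(n_arms), key=lambda a: scores[a])
        arm = best if rng.random() < p_best else rng.randrange(n_arms)
        arms[arm].append(x)
        assignments.append(arm)
    return assignments

# e.g. two continuous covariates (age, baseline score) per arriving subject
print(assign_sequentially([(34, 1.2), (29, 0.4), (41, 2.1), (38, 0.9), (25, 1.7)]))
```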

  19. Enzymatic saccharification of pretreated wheat straw: comparison of solids-recycling, sequential hydrolysis and batch hydrolysis.

    PubMed

    Pihlajaniemi, Ville; Sipponen, Satu; Sipponen, Mika H; Pastinen, Ossi; Laakso, Simo

    2014-02-01

    In the enzymatic hydrolysis of lignocellulose materials, the recycling of the solid residue has previously been considered within the context of enzyme recycling. In this study, a steady-state investigation of a solids-recycling process was made with pretreated wheat straw and compared to sequential and batch hydrolysis at constant reaction times, substrate feed, and liquid and enzyme consumption. Compared to batch hydrolysis, the recycling and sequential processes showed roughly equal hydrolysis yields, while the volumetric productivity was significantly increased. In the 72 h process, the improvement was 90%, owing to an increased reaction consistency, while the solids feed was 16% of the total process constituents. The improvement resulted primarily from product removal, which was equally efficient in the solids-recycling and sequential hydrolysis processes. No evidence of accumulation of enzymes beyond the accumulation of the substrate was found in recycling. A mathematical model of solids-recycling, based on a geometric series, was constructed.

  20. Resolution of concerted versus sequential mechanisms in photo-induced double-proton transfer reaction in 7-azaindole H-bonded dimer

    PubMed Central

    Catalán, Javier; del Valle, Juan Carlos; Kasha, Michael

    1999-01-01

    The experimental and theoretical bases for a synchronous or concerted double-proton transfer in centro-symmetric H-bonded electronically excited molecular dimers are presented. The prototype model is the 7-azaindole dimer. New research offers confirmation of a concerted mechanism for excited-state biprotonic transfer. Recent femtosecond photoionization and coulombic explosion techniques have given rise to time-of-flight MS observations suggesting sequential two-step biprotonic transfer for the same dimer. We interpret the overall species observed in the time-of-flight experiments as explicable without conflict with the concerted mechanism of proton transfer. PMID:10411876

  1. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.

  2. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    PubMed Central

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
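
    The sampling-density idea can be illustrated with a toy mask generator. The sketch below builds a 2D variable-density undersampling mask in which the k-space centre is always acquired and the sampling probability decays with radius; it is not the segmented-EPI sampling-table design used in the study, and the matrix size, acceleration factor, and decay power are illustrative.

```python
import numpy as np

def variable_density_mask(ny, nz, accel=4.0, center_radius=0.1, power=2.0, seed=0):
    """2D variable-density undersampling mask: the k-space centre is always
    sampled and the sampling probability decays with normalized radius."""
    rng = np.random.default_rng(seed)
    ky = np.linspace(-1, 1, ny)[:, None]
    kz = np.linspace(-1, 1, nz)[None, :]
    r = np.sqrt(ky ** 2 + kz ** 2)
    prob = np.clip(1.0 - r, 0.0, 1.0) ** power
    prob *= (ny * nz / accel) / prob.sum()      # aim for the requested acceleration
    mask = rng.random((ny, nz)) < np.clip(prob, 0.0, 1.0)
    mask[r <= center_radius] = True             # fully sample the centre
    return mask

mask = variable_density_mask(144, 135)
print("achieved acceleration ~", mask.size / mask.sum())
```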

  3. Sequential divergence and the multiplicative origin of community diversity

    PubMed Central

    Hood, Glen R.; Forbes, Andrew A.; Powell, Thomas H. Q.; Egan, Scott P.; Hamerlinck, Gabriela; Smith, James J.; Feder, Jeffrey L.

    2015-01-01

    Phenotypic and genetic variation in one species can influence the composition of interacting organisms within communities and across ecosystems. As a result, the divergence of one species may not be an isolated process, as the origin of one taxon could create new niche opportunities for other species to exploit, leading to the genesis of many new taxa in a process termed “sequential divergence.” Here, we test for such a multiplicative effect of sequential divergence in a community of host-specific parasitoid wasps, Diachasma alloeum, Utetes canaliculatus, and Diachasmimorpha mellea (Hymenoptera: Braconidae), that attack Rhagoletis pomonella fruit flies (Diptera: Tephritidae). Flies in the R. pomonella species complex radiated by sympatrically shifting and ecologically adapting to new host plants, the most recent example being the apple-infesting host race of R. pomonella formed via a host plant shift from hawthorn-infesting flies within the last 160 y. Using population genetics, field-based behavioral observations, host fruit odor discrimination assays, and analyses of life history timing, we show that the same host-related ecological selection pressures that differentially adapt and reproductively isolate Rhagoletis to their respective host plants (host-associated differences in the timing of adult eclosion, host fruit odor preference and avoidance behaviors, and mating site fidelity) cascade through the ecosystem and induce host-associated genetic divergence for each of the three members of the parasitoid community. Thus, divergent selection at lower trophic levels can potentially multiplicatively and rapidly amplify biodiversity at higher levels on an ecological time scale, which may sequentially contribute to the rich diversity of life. PMID:26499247

  4. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief calculation algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in their imaging quality have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA), and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation for the three methods are demonstrated. In order to suppress the speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also presented. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
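
    As a hedged illustration of the point-based idea (superposing the Fresnel zone pattern of each object point on the hologram plane and keeping only the phase), the sketch below computes a phase-only hologram from a small point cloud; the wavelength, pixel pitch, resolution, and point coordinates are placeholders, and the slice-based propagation and speckle-suppression steps of the paper are not reproduced.

```python
import numpy as np

def point_based_phase_hologram(points, wavelength=532e-9, pitch=8e-6, nx=512, ny=512):
    """Phase-only hologram obtained by superposing the spherical wave of each
    object point (x, y, z, amplitude) on the hologram plane and keeping the
    argument of the total field (the point-based zone-plate picture)."""
    k = 2 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    for x0, y0, z0, a0 in points:
        r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
        field += a0 * np.exp(1j * k * r) / r
    return np.angle(field)

# two illustrative object points 20-25 cm behind the hologram plane
holo = point_based_phase_hologram([(0.0, 0.0, 0.20, 1.0), (1e-3, 0.0, 0.25, 0.8)])
print(holo.shape, holo.min(), holo.max())
```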

  5. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.

  6. Predicting the Necessity for Extracorporeal Circulation During Lung Transplantation: A Feasibility Study.

    PubMed

    Hinske, Ludwig Christian; Hoechter, Dominik Johannes; Schröeer, Eva; Kneidinger, Nikolaus; Schramm, René; Preissler, Gerhard; Tomasi, Roland; Sisic, Alma; Frey, Lorenz; von Dossow, Vera; Scheiermann, Patrick

    2017-06-01

    The factors leading to the implementation of unplanned extracorporeal circulation during lung transplantation are poorly defined. Consequently, the authors aimed to identify patients at risk for unplanned extracorporeal circulation during lung transplantation. Retrospective data analysis. Single-center university hospital. A development data set of 170 consecutive patients and an independent validation cohort of 52 patients undergoing lung transplantation. The authors investigated a cohort of 170 consecutive patients undergoing single or sequential bilateral lung transplantation without a priori indication for extracorporeal circulation and evaluated the predictive capability of distinct preoperative and intraoperative variables by using automated model building techniques at three clinically relevant time points (preoperatively, after endotracheal intubation, and after establishing single-lung ventilation). Preoperative mean pulmonary arterial pressure was the strongest predictor for unplanned extracorporeal circulation. A logistic regression model based on preoperative mean pulmonary arterial pressure and lung allocation score achieved an area under the receiver operating characteristic curve of 0.85. Consequently, the authors developed a novel 3-point scoring system based on preoperative mean pulmonary arterial pressure and lung allocation score, which identified patients at risk for unplanned extracorporeal circulation and validated this score in an independent cohort of 52 patients undergoing lung transplantation. The authors showed that patients at risk for unplanned extracorporeal circulation during lung transplantation could be identified by their novel 3-point score.
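
    A hedged sketch of the modelling step described above, using entirely synthetic data: a logistic regression on two predictors standing in for preoperative mean pulmonary arterial pressure and lung allocation score, scored by the area under the ROC curve. The coefficients, sample size, and distributions are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 170
mpap = rng.normal(30, 10, n)   # stand-in for mean pulmonary arterial pressure (mmHg)
las = rng.normal(40, 8, n)     # stand-in for lung allocation score
logit = -10 + 0.18 * mpap + 0.08 * las          # invented coefficients
needed_ecc = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([mpap, las])
model = LogisticRegression().fit(X, needed_ecc)
auc = roc_auc_score(needed_ecc, model.predict_proba(X)[:, 1])
print(f"apparent AUC on the synthetic data: {auc:.2f}")
```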

  7. Effects of Guideline and Formulary Changes on Statin Prescribing in the Veterans Affairs.

    PubMed

    Markovitz, Adam A; Holleman, Rob G; Hofer, Timothy P; Kerr, Eve A; Klamerus, Mandi L; Sussman, Jeremy B

    2017-12-01

    To compare the effects of two sequential policy changes, namely the addition of a high-potency statin to the Department of Veterans Affairs (VA) formulary and the release of the American College of Cardiology/American Heart Association (ACC/AHA) cholesterol guidelines, on VA provider prescribing. Retrospective analysis of 1,100,682 VA patients, 2011-2016. Interrupted time-series analysis of changes in prescribing of moderate-to-high-intensity statins among high-risk patients and across high-risk subgroups. We also assessed changes in prescribing of atorvastatin and other statin drugs. We estimated marginal effects (ME) of formulary and guideline changes by comparing predicted and observed statin use. Data from VA Corporate Data Warehouse. The use of moderate-to-high-intensity statins increased by 2 percentage points following the formulary change (ME, 2.4, 95% confidence interval [CI], 2.2 to 2.6) and less than 1 percentage point following the guideline change (ME, 0.8, 95% CI, 0.6 to 0.9). The formulary change led to approximately a 12 percentage-point increase in the use of moderate-to-high-intensity atorvastatin (ME, 11.5, 95% CI, 11.3 to 11.6). The relatively greater provider response to the formulary change occurred across all patient subgroups. Addition of a high-potency statin to the formulary affected provider prescribing more than the ACC/AHA guidelines did.
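
    The interrupted time-series analysis mentioned above is commonly implemented as a segmented regression with level and slope terms at each interruption. The sketch below fits such a model to synthetic monthly prescribing rates with two interruptions standing in for the formulary and guideline changes; all numbers are made up and the model is not the study's specification.

```python
import numpy as np
import statsmodels.api as sm

months = np.arange(60)
t_formulary, t_guideline = 24, 42               # invented interruption times

level1 = (months >= t_formulary).astype(float)  # level change at the formulary change
trend1 = np.clip(months - t_formulary, 0, None) # slope change after it
level2 = (months >= t_guideline).astype(float)  # level change at the guideline release
trend2 = np.clip(months - t_guideline, 0, None)

rng = np.random.default_rng(0)
y = 40 + 0.1 * months + 2.4 * level1 + 0.05 * trend1 + 0.8 * level2 \
    + rng.normal(0, 0.5, months.size)           # synthetic % prescribing rate

X = sm.add_constant(np.column_stack([months, level1, trend1, level2, trend2]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # baseline level/trend plus the two level and slope shifts
```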

  8. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. In an SGI octane workstation (Silicon Graphics, Inc., Mountainview, CA), the regression training required a fraction of a CPU second, whereas neural network training was between 1 and 9 min, as given. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min. It was reduced to 2 sec with the regression method and to 4 min by the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.

  9. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOEpatents

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

    An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog-to-digital converter. This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counter. The transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at the instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  10. Trading efficiency for effectiveness in similarity-based indexing for image databases

    NASA Astrophysics Data System (ADS)

    Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.

    1995-11-01

    Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.

  11. World title boxing: from early beginnings to the first bell.

    PubMed

    Schinke, Robert J; Ramsay, Marc

    2009-11-01

    There is scant literature where applied sport scientists have considered first-hand experiences preparing professional boxers for world title bouts. The present submission reflects more than 10 years of applied experience working with professional boxers residing in Canada. What follows is a composite of sequential steps that the ownership and coaching staff of one Canadian management group have tried leading up to more than 20 world title bout experiences. The strategies proposed have been built progressively over time, and what follows is a general overview of a more detailed pre-bout structure, from shortly in advance of a world title bout offer to the moment when the athlete enters the ring to perform. We propose that an effective structure is founded upon detailed a priori preparation, tactical decisions throughout bout preparation, and a thorough understanding by the athlete of what he will encounter during the title bout. Key Points: world championship boxing; competition preparation; professional sport; athlete performance.

  12. Hippocampal replay in the awake state: a potential physiological substrate of memory consolidation and retrieval

    PubMed Central

    Carr, Margaret F.; Jadhav, Shantanu P.; Frank, Loren M.

    2011-01-01

    The hippocampus is required for the encoding, consolidation, and retrieval of event memories. While the neural mechanisms that underlie these processes are only partially understood, a series of recent papers point to awake memory replay as a potential contributor to both consolidation and retrieval. Replay is the sequential reactivation of hippocampal place cells that represent previously experienced behavioral trajectories and occurs frequently in the awake state, particularly during periods of relative immobility. Awake replay may reflect trajectories through either the current environment or previously visited environments that are spatially remote. The repetition of learned sequences on a compressed time scale is well suited to promote memory consolidation in distributed circuits beyond the hippocampus, suggesting that consolidation occurs in both the awake and sleeping animal. Moreover, sensory information can influence the content of awake replay, suggesting a role for awake replay in memory retrieval. PMID:21270783

  13. Program Completion of a Web-Based Tailored Lifestyle Intervention for Adults: Differences between a Sequential and a Simultaneous Approach

    PubMed Central

    Schneider, Francine; de Vries, Hein; van Osch, Liesbeth ADM; van Nierop, Peter WM; Kremers, Stef PJ

    2012-01-01

    Background: Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). Objectives: The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Methods: Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Results: Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Conclusion: Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Trial Registration: Dutch Trial Register NTR2168. PMID:22403770

  14. Program completion of a web-based tailored lifestyle intervention for adults: differences between a sequential and a simultaneous approach.

    PubMed

    Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J

    2012-03-08

    Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Dutch Trial Register NTR2168.

  15. Efficacy of premixed versus sequential administration of clonidine as an adjuvant to hyperbaric bupivacaine intrathecally in cesarean section

    PubMed Central

    Sachan, Prachee; Kumar, Nidhi; Sharma, Jagdish Prasad

    2014-01-01

    Background: Density of the drugs injected intrathecally is an important factor that influences spread in the cerebrospinal fluid. Mixing adjuvants with local anesthetics (LA) alters their density and hence their spread compared to when they are given sequentially in separate syringes. Aims: To evaluate the efficacy of intrathecal administration of hyperbaric bupivacaine (HB) and clonidine as a mixture and sequentially in terms of block characteristics, hemodynamics, neonatal outcome, and postoperative pain. Setting and Design: Prospective randomized single-blind study at a tertiary center from 2010 to 2012. Materials and Methods: Ninety full-term parturients scheduled for elective cesarean sections were divided into three groups on the basis of the technique of intrathecal drug administration. Group M received a mixture of 75 μg clonidine and 10 mg HB 0.5%. Group A received 75 μg clonidine after administration of 10 mg HB 0.5% through a separate syringe. Group B received 75 μg clonidine before HB 0.5% (10 mg) through a separate syringe. Statistical analysis used: Observational descriptive statistics, analysis of variance with Bonferroni multiple comparison post hoc test, and Chi-square test. Results: Time to achieve complete sensory and motor block was shorter in groups A and B, in which the drugs were given sequentially. Duration of analgesia was longer in group B (474.3 ± 20.79 min) and group A (472.50 ± 22.11 min) than in group M (337 ± 18.22 min), with clinically insignificant influence on hemodynamic parameters and sedation. Conclusion: The sequential technique reduces the time to achieve complete sensory and motor block, delays block regression, and significantly prolongs the duration of analgesia. However, it did not matter much whether clonidine was administered before or after HB. PMID:25886098

  16. Depth treatment of coal-chemical engineering wastewater by a cost-effective sequential heterogeneous Fenton and biodegradation process.

    PubMed

    Fang, Yili; Yin, Weizhao; Jiang, Yanbin; Ge, Hengjun; Li, Ping; Wu, Jinhua

    2018-05-01

    In this study, a sequential Fe⁰/H₂O₂ reaction and biological process was employed as a low-cost depth-treatment method to remove recalcitrant compounds from coal-chemical engineering wastewater after regular biological treatment. First, a chemical oxygen demand (COD) and color removal efficiency of 66 and 63% was achieved at an initial pH of 6.8, 25 mmol L⁻¹ of H₂O₂, and 2 g L⁻¹ of Fe⁰ in the Fe⁰/H₂O₂ reaction. According to gas chromatography-mass spectrometry (GC-MS) and gas chromatography-flame ionization detector (GC-FID) analyses, the recalcitrant compounds were effectively decomposed into short-chain organic acids such as acetic, propionic, and butyric acids. Although these acids were resistant to the Fe⁰/H₂O₂ reaction, they were effectively eliminated in the sequential air-lift reactor (ALR) at a hydraulic retention time (HRT) of 2 h, resulting in a further decrease of COD and color from 120 to 51 mg L⁻¹ and from 70 to 38 times, respectively. Because the sequential process achieved a total COD and color removal efficiency of 85 and 79% at the original pH of 6.8, with a ferric ion concentration below 0.8 mg L⁻¹ after the Fe⁰/H₂O₂ reaction, pH adjustment and iron-containing sludge disposal could be avoided, giving a low operational cost of $0.35 m⁻³. These results indicate that the sequential process is a promising and cost-effective method for the depth treatment of coal-chemical engineering wastewaters to satisfy discharge requirements.

  17. Simultaneous biodegradation of three mononitrophenol isomers by a tailor-made microbial consortium immobilized in sequential batch reactors.

    PubMed

    Fu, H; Zhang, J-J; Xu, Y; Chao, H-J; Zhou, N-Y

    2017-03-01

    The ortho-nitrophenol (ONP)-utilizing Alcaligenes sp. strain NyZ215, meta-nitrophenol (MNP)-utilizing Cupriavidus necator JMP134 and para-nitrophenol (PNP)-utilizing Pseudomonas sp. strain WBC-3 were assembled as a consortium to degrade three nitrophenol isomers in sequential batch reactors. A pilot test was conducted in flasks to demonstrate that a mixture of the three mononitrophenols at 0.5 mol l⁻¹ each could be mineralized by this microbial consortium within 84 h. Interestingly, neither ONP nor MNP was degraded until PNP was almost consumed by strain WBC-3. By immobilizing this consortium into polyurethane cubes, all three mononitrophenols were continuously degraded in lab-scale sequential reactors for six batch cycles over 18 days. Total concentrations of ONP, MNP and PNP degraded during this time course were 2.8, 1.5 and 2.3 mol l⁻¹, respectively. Quantitative real-time PCR analysis showed that each member of the microbial consortium was relatively stable during the entire degradation process. This study provides a novel approach to treat polluted water, particularly with a mixture of co-existing isomers. Nitroaromatic compounds are readily spread in the environment and pose great potential toxicity concerns. Here, we report the simultaneous degradation of three isomers of mononitrophenol in a single system by employing a consortium of three bacteria, both in flasks and in lab-scale sequential batch reactors. The results demonstrate that simultaneous biodegradation of three mononitrophenol isomers can be achieved by a tailor-made microbial consortium immobilized in sequential batch reactors, providing a pilot study for a novel approach for the bioremediation of mixed pollutants, especially isomers present in wastewater.

  18. SUBCOOLING DETECTOR

    DOEpatents

    McCann, J.A.

    1963-12-17

    A system for detecting and measuring directly the subcooling margin in a liquid bulk coolant is described. A thermocouple sensor is electrically heated, and a small amount of nearly stagnant bulk coolant is heated to the boiling point by this heated thermocouple. The sequential measurement of the original ambient temperature, zeroing out this ambient temperature, and then measuring the boiling temperature of the coolant permits direct determination of the subcooling margin of the ambient liquid. (AEC)

  19. Position Estimation Using Image Derivative

    NASA Technical Reports Server (NTRS)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
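
    A hedged sketch of the final estimation step described above (the paper fits circular sigmoid functions; plain geometric residuals are used here instead): given candidate limb points, the centre and radius are estimated by robust nonlinear least squares. The synthetic arc below stands in for detected edge points.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle(edge_points):
    """Estimate centre (cx, cy) and radius R from candidate limb points by
    robust nonlinear least squares on the radial residuals."""
    pts = np.asarray(edge_points, dtype=float)

    def residuals(params):
        cx, cy, R = params
        return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - R

    cx0, cy0 = pts.mean(axis=0)
    R0 = np.hypot(pts[:, 0] - cx0, pts[:, 1] - cy0).mean()
    sol = least_squares(residuals, [cx0, cy0, R0], loss="soft_l1")
    return sol.x

# synthetic half-limb: circle of radius 30 centred at (50, 60) plus noise
rng = np.random.default_rng(0)
theta = np.linspace(0, np.pi, 200)
arc = np.c_[50 + 30 * np.cos(theta), 60 + 30 * np.sin(theta)]
arc += rng.normal(0, 0.3, arc.shape)
print(fit_circle(arc))   # roughly [50, 60, 30]
```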

  20. Increasing methylation of the calcitonin gene during disease progression in sequential samples from CML patients.

    PubMed

    Mills, K I; Guinn, B A; Walsh, V A; Burnett, A K

    1996-09-01

    In chronic myeloid leukaemia (CML), disease progression from the initial chronic phase to the acute phase or blast crisis has previously been shown to be correlated with progressive increases in hyper-methylation of the calcitonin gene, located at chromosome 11p15. However, sequential studies of individual patients were not performed in these investigations. We have analysed 44 samples from nine patients with typical Philadelphia chromosome positive CML throughout their disease progression to determine the methylation state of the calcitonin gene at these time points. Densitometry was used to quantitate the ratio of the normal 2.0 kb Hpa II fragments, indicating normal methylation status of the gene, compared to the intensity of the abnormal, hyper-methylated, 2.6-3.1 kb Hpa II fragments. We found a gradual increase in the ratio of methylated:unmethylated calcitonin gene during chronic phase with a dramatic rise at blast crisis. Further, the ratio of the abnormal hypermethylated 3.1 kb fragments to the methylated 2.6 kb fragment resulted in the identification of a clonal expansion of abnormally methylated cells. This expansion of cells with hypermethylation of the calcitonin gene during chronic phase was shown to coincide with the presence of a mutation in the p53 gene. The data presented in this study would suggest that an increased methylation status of the calcitonin gene during disease progression may indicate the expansion of abnormal blast cell populations and subsequent progression to blast crisis.

  1. Beneficial effects of combining nilotinib and imatinib in preclinical models of BCR-ABL+ leukemias

    PubMed Central

    Weisberg, Ellen; Catley, Laurie; Wright, Renee D.; Moreno, Daisy; Banerji, Lolita; Ray, Arghya; Manley, Paul W.; Mestan, Juergen; Fabbro, Doriano; Jiang, Jingrui; Hall-Meyers, Elizabeth; Callahan, Linda; DellaGatta, Jamie L.; Kung, Andrew L.

    2007-01-01

    Drug resistance resulting from emergence of imatinib-resistant BCR-ABL point mutations is a significant problem in advanced-stage chronic myelogenous leukemia (CML). The BCR-ABL inhibitor, nilotinib (AMN107), is significantly more potent against BCR-ABL than imatinib, and is active against many imatinib-resistant BCR-ABL mutants. Phase 1/2 clinical trials show that nilotinib can induce remissions in patients who have previously failed imatinib, indicating that sequential therapy with these 2 agents has clinical value. However, simultaneous, rather than sequential, administration of 2 BCR-ABL kinase inhibitors is attractive for many reasons, including the theoretical possibility that this could reduce emergence of drug-resistant clones. Here, we show that exposure of a variety of BCR-ABL+ cell lines to imatinib and nilotinib results in additive or synergistic cytotoxicity, including testing of a large panel of cells expressing BCR-ABL point mutations causing resistance to imatinib in patients. Further, using a highly quantifiable bioluminescent in vivo model, drug combinations were at least additive in antileukemic activity, compared with each drug alone. These results suggest that despite binding to the same site in the same target kinase, the combination of imatinib and nilotinib is highly efficacious in these models, indicating that clinical testing of combinations of BCR-ABL kinase inhibitors is warranted. PMID:17068153

  2. Comparative evaluation of three shaft seals proposed for high performance turbomachinery

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.

    1982-01-01

    Experimental pressure profiles and leak rate characteristics for three shaft seal prototype model configurations proposed for the space shuttle turbopump were assessed in the concentric and fully eccentric (to the point of rub) positions, without the effects of rotation. The parallel-cylindrical configuration has moderate to good stiffness with a higher leak rate. It represents a simple concept, but for practical reasons and possible increases in stability, all such seals should be conical-convergent. The three-stepdown-sequential, parallel-cylindrical seal is converging and represents good to possibly high stiffness when fluid separation occurs, with a significant decrease in leak rate. Such seals can be very effective. The three-stepdown-sequential labyrinth seal of 33 teeth (i.e., 12-11-10 teeth from inlet to exit) provides excellent leak control but usually has very poor stiffness, depending on cavity design. The seal is complex and not recommended for dynamic control.

  3. Cadmium partition in river sediments from an area affected by mining activities.

    PubMed

    Vasile, Georgiana D; Vlădescu, Luminiţa

    2010-08-01

    In this paper, the cadmium distribution in Certej River sediments in an area seriously affected by intense mining activities has been studied. The main objective of this study was the evaluation of the partitioning of this metal into different operationally defined fractions by sequential extractions. The Community Bureau of Reference (BCR) sequential extraction was used to isolate the different fractions. The sediment quality was assessed both upstream and downstream of the pollution input points along the Certej River, in order to reveal a possible accumulation of cadmium in sediments and the seasonal changes in cadmium concentrations in the BCR sediment phases. Our results reveal that most of the cadmium content is divided between the soluble fraction and the iron and manganese hydrated oxide fraction. Based on total cadmium concentrations in sediments, the enrichment factors were estimated using aluminum as the normalizing element and the regression curve Cd/Al corresponding to the geochemical background of the studied area.

  4. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
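
    A hedged sketch of the kind of simulation described above: intracluster correlation in GAM status is mimicked with a simple beta-binomial stand-in, clusters are sampled, and the area is classified against a decision rule. The decision rule, ICC value, and prevalence levels are illustrative, not those of the study.

```python
import numpy as np

def prob_classified_high(p, d, n_clusters=67, m=3, icc=0.1, reps=20000, seed=0):
    """Probability that a cluster-LQAS sample of n_clusters x m children with
    true GAM prevalence p and intracluster correlation icc (beta-binomial
    model) yields more than d cases and is classified as high prevalence."""
    rng = np.random.default_rng(seed)
    theta = (1 - icc) / icc                  # a + b for the beta distribution
    a, b = p * theta, (1 - p) * theta
    cluster_p = rng.beta(a, b, size=(reps, n_clusters))
    cases = rng.binomial(m, cluster_p).sum(axis=1)
    return (cases > d).mean()

# operating characteristic of an illustrative rule d = 25 for a 67x3 design
for p in (0.05, 0.10, 0.15, 0.20):
    print(p, round(prob_classified_high(p, d=25), 3))
```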

  5. Sequential Learning and Recognition of Comprehensive Behavioral Patterns Based on Flow of People

    NASA Astrophysics Data System (ADS)

    Gibo, Tatsuya; Aoki, Shigeki; Miyamoto, Takao; Iwata, Motoi; Shiozaki, Akira

    Recently, surveillance cameras have been set up everywhere, for example, in streets and public places, in order to detect irregular situations. In the existing surveillance systems, as only a handful of surveillance agents watch a large number of images acquired from surveillance cameras, there is a possibility that they may miss important scenes such as accidents or abnormal incidents. Therefore, we propose a method for sequential learning and the recognition of comprehensive behavioral patterns in crowded places. First, we comprehensively extract a flow of people from input images by using optical flow. Second, we extract behavioral patterns on the basis of change-point detection of the flow of people. Finally, in order to recognize an observed behavioral pattern, we draw a comparison between the behavioral pattern and previous behavioral patterns in the database. We verify the effectiveness of our approach by placing a surveillance camera on a campus.
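
    A hedged sketch of the first two steps described above, using OpenCV: a dense optical flow field is computed frame by frame and a crude change score on the mean flow magnitude flags candidate change points. The video file name is a placeholder, and the pattern database and recognition stage of the proposed method are not reproduced.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")      # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

magnitudes = []                                 # mean flow magnitude per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitudes.append(np.linalg.norm(flow, axis=2).mean())
    prev_gray = gray
cap.release()

m = np.array(magnitudes)
jumps = np.where(np.abs(np.diff(m)) > 3 * m.std())[0]   # crude change score
print("candidate change points at frames:", jumps)
```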

  6. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
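
    A hedged toy illustration of the last step described above: once the probabilistic constraints have been replaced by ordinary ones via the posterior approximation (represented here only by a made-up surrogate function), the transformed deterministic problem can be handed to a sequential quadratic programming solver. The objective, surrogate, and bounds are invented.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):              # e.g. a stand-in for structural weight
    return x[0] + x[1]

def beta_hat(x):               # made-up surrogate for the reliability index
    return 0.8 * x[0] + 1.2 * x[1] - 3.0

beta_target = 0.0              # the transformed ("ordinary") constraint level
res = minimize(objective, x0=np.array([2.0, 2.0]), method="SLSQP",
               bounds=[(0.1, 10.0), (0.1, 10.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: beta_hat(x) - beta_target}])
print(res.x, res.fun)
```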

  7. Operation of a single-channel, sequential Navstar GPS receiver in a helicopter mission environment

    NASA Technical Reports Server (NTRS)

    Edwards, F. G.; Hamlin, J. R.

    1984-01-01

    It is pointed out that the future utilization of the Navstar Global Positioning System (GPS) by civil helicopters will provide enhanced performance not obtainable with current navigation systems. GPS will supply properly equipped users with extremely accurate three-dimensional position and velocity information anywhere in the world. Preliminary studies have been conducted to investigate differential GPS concept mechanizations and cost, and to theoretically predict navigation performance and the impact of degradation of the GPS C/A code for national security considerations. The results obtained are encouraging, but certain improvements are needed. As a second step in the program, a single-channel sequential GPS navigator was installed and operated in the NASA SH-3G helicopter, and a series of flight tests was conducted. It is found that the performance of the Navstar GPS Z-set is quite acceptable for supporting area navigation and nonprecision approach operations.

  8. An exact computational method for performance analysis of sequential test algorithms for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Lacy, Fred; Carriere, Patrick

    2015-05-01

    Sequential test algorithms are playing increasingly important roles in the quick detection of network intrusions such as port scans. Because such algorithms are usually analyzed using intuitive approximations or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time to arbitrary pre-specified accuracy.
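
    The best-known sequential test in this setting is Wald's sequential probability ratio test. The sketch below is a minimal SPRT of the kind used for portscan detection (hypotheses on the probability that a connection attempt fails); the probabilities and error rates are illustrative, and the exact performance-analysis method of the paper is not reproduced.

```python
import math

def sprt(failure_flags, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on a stream of 0/1 indicators
    (e.g. failed connection attempts): H0 benign host with failure
    probability p0 versus H1 scanner with failure probability p1."""
    upper = math.log((1 - beta) / alpha)   # cross it -> accept H1 (scanner)
    lower = math.log(beta / (1 - alpha))   # cross it -> accept H0 (benign)
    llr = 0.0
    for n, failed in enumerate(failure_flags, start=1):
        llr += math.log(p1 / p0) if failed else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "scanner", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(failure_flags)

print(sprt([1, 1, 0, 1, 1, 1]))   # flagged as a scanner after a few events
```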

  9. Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics

    DTIC Science & Technology

    2013-12-01

    SST-VMST." The structural mechanics computations are based on the Kirchhoff -Love shell model. We use a sequential coupling technique, which is...mechanics computations are based on the Kirchhoff -Love shell model. We use a sequential coupling technique, which is ap- plicable to some classes of FSI...we use the ST-VMS method in combination with the ST-SUPS method. The structural mechanics computations are mostly based on the Kirchhoff –Love shell

  10. Effectiveness of a myocardial infarction protocol in reducing door-to-balloon time.

    PubMed

    Correia, Luis Cláudio Lemos; Brito, Mariana; Kalil, Felipe; Sabino, Michael; Garcia, Guilherme; Ferreira, Felipe; Matos, Iracy; Jacobs, Peter; Ronzoni, Liliana; Noya-Rabelo, Márcia

    2013-07-01

    An adequate door-to-balloon time (<120 minutes) is a necessary condition for the efficacy of primary angioplasty in infarction to translate into effectiveness. The aim was to describe the effectiveness of a quality-of-care protocol in reducing the door-to-balloon time. Between May 2010 and August 2012, all individuals undergoing primary angioplasty in our hospital were analyzed. The door time was electronically recorded at the moment the patient took a number to be evaluated in the emergency room, which occurred prior to filling in the check-in forms and to triage. The balloon time was defined as the beginning of artery opening (introduction of the first device). The first 5 months of monitoring corresponded to the period before protocol implementation. The protocol comprised the definition of a flowchart of actions from patient arrival at the hospital, awareness-raising among the team regarding the prioritization of time, and the provision of periodic feedback on the results and possible inadequacies. A total of 50 individuals were assessed. They were divided into five groups of 10 sequential patients (one group pre- and four groups post-protocol). The door-to-balloon time for the 10 cases recorded before protocol implementation was 200 ± 77 minutes. After protocol implementation, there was a progressive reduction of the door-to-balloon time to 142 ± 78 minutes in the first 10 patients, then to 150 ± 50 minutes, 131 ± 37 minutes and, finally, 116 ± 29 minutes in the three subsequent groups of 10 patients, respectively. Linear regression between patient sequence and door-to-balloon time (r = -0.41) showed a regression coefficient of -1.74 minutes. The protocol implementation proved effective in reducing the door-to-balloon time.

  11. Avalanche of entanglement and correlations at quantum phase transitions.

    PubMed

    Krutitsky, Konstantin V; Osterloh, Andreas; Schützhold, Ralf

    2017-06-16

    We study the ground-state entanglement in the quantum Ising model with nearest neighbor ferromagnetic coupling J and find a sequential increase of entanglement depth d with growing J. This entanglement avalanche starts with two-point entanglement, as measured by the concurrence, and continues via the three-tangle and four-tangle, until finally, deep in the ferromagnetic phase for J = ∞, arriving at a pure L-partite (GHZ type) entanglement of all L spins. Comparison with the two, three, and four-point correlations reveals a similar sequence and shows strong ties to the above entanglement measures for small J. However, we also find a partial inversion of the hierarchy, where the four-point correlation exceeds the three- and two-point correlations, well before the critical point is reached. Qualitatively similar behavior is also found for the Bose-Hubbard model, suggesting that this is a general feature of a quantum phase transition. This should be taken into account in the approximations starting from a mean-field limit.

  12. a Study on Automatic Uav Image Mosaic Method for Paroxysmal Disaster

    NASA Astrophysics Data System (ADS)

    Li, M.; Li, D.; Fan, D.

    2012-07-01

    Paroxysmal disasters such as floods can cause great damage in a short time. Timely, accurate, and fast acquisition of sufficient disaster information is a prerequisite for disaster emergency response. Owing to the advantages of UAVs in acquiring disaster data, UAV imagery, a rising source of remotely sensed data, has gradually become the first choice for disaster prevention and mitigation departments to collect disaster information at first hand. In this paper, a novel and fast strategy is proposed for registering and mosaicking UAV data. First, the original images are not upsampled to twice their size at the initial stage of the SIFT operator, and the total number of pyramid octaves in scale space is reduced to speed up the matching process; subsequently, RANSAC (Random Sample Consensus) is used to eliminate mismatched tie points. Then, bundle adjustment is introduced to solve all of the camera geometric calibration parameters jointly. Finally, a best-seamline searching strategy based on dynamic scheduling is applied to solve the dodging problem arising from the aircraft's side-looking imaging. In addition, a weighted fusion estimation algorithm is employed to eliminate the "fusion ghost" phenomenon.
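
    A hedged sketch of the matching-and-outlier-rejection stage described above, using OpenCV (which does not expose the octave-reduction tweak, so standard SIFT is used): tie points are matched with a ratio test and mismatches are rejected by RANSAC while estimating a homography. The file names are placeholders, and the bundle adjustment, seamline, and fusion steps are not shown.

```python
import cv2
import numpy as np

img1 = cv2.imread("uav_frame_1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder files
img2 = cv2.imread("uav_frame_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)    # RANSAC rejects mismatches
print(f"{int(inlier_mask.sum())} inlier tie points of {len(good)} candidate matches")
```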

  13. Selective serotonin reuptake inhibitors versus placebo in patients with major depressive disorder. A systematic review with meta-analysis and Trial Sequential Analysis.

    PubMed

    Jakobsen, Janus Christian; Katakam, Kiran Kumar; Schou, Anne; Hellmuth, Signe Gade; Stallknecht, Sandra Elkjær; Leth-Møller, Katja; Iversen, Maria; Banke, Marianne Bjørnø; Petersen, Iggiannguaq Juhl; Klingenberg, Sarah Louise; Krogh, Jesper; Ebert, Sebastian Elgaard; Timm, Anne; Lindschou, Jane; Gluud, Christian

    2017-02-08

    The evidence on selective serotonin reuptake inhibitors (SSRIs) for major depressive disorder is unclear. Our objective was to conduct a systematic review assessing the effects of SSRIs versus placebo, 'active' placebo, or no intervention in adult participants with major depressive disorder. We searched for eligible randomised clinical trials in The Cochrane Library's CENTRAL, PubMed, EMBASE, PsycLIT, PsycINFO, Science Citation Index Expanded, clinical trial registers of Europe and the USA, websites of pharmaceutical companies, the U.S. Food and Drug Administration (FDA), and the European Medicines Agency until January 2016. All data were extracted by at least two independent investigators. We used Cochrane systematic review methodology, Trial Sequential Analysis, and calculation of Bayes factors. An eight-step procedure was followed to assess if thresholds for statistical and clinical significance were crossed. Primary outcomes were reduction of depressive symptoms, remission, and adverse events. Secondary outcomes were suicides, suicide attempts, suicide ideation, and quality of life. A total of 131 randomised placebo-controlled trials enrolling a total of 27,422 participants were included. None of the trials used 'active' placebo or no intervention as the control intervention. All trials had high risk of bias. SSRIs significantly reduced the Hamilton Depression Rating Scale (HDRS) at end of treatment (mean difference -1.94 HDRS points; 95% CI -2.50 to -1.37; P < 0.00001; 49 trials; Trial Sequential Analysis-adjusted CI -2.70 to -1.18); the Bayes factor was below the predefined threshold (2.01 × 10⁻²³). The effect estimate, however, was below our predefined threshold for clinical significance of 3 HDRS points. SSRIs significantly decreased the risk of no remission (RR 0.88; 95% CI 0.84 to 0.91; P < 0.00001; 34 trials; Trial Sequential Analysis-adjusted CI 0.83 to 0.92); the Bayes factor (1426.81) did not confirm the effect. SSRIs significantly increased the risks of serious adverse events (OR 1.37; 95% CI 1.08 to 1.75; P = 0.009; 44 trials; Trial Sequential Analysis-adjusted CI 1.03 to 1.89). This corresponds to 31/1000 SSRI participants experiencing a serious adverse event, compared with 22/1000 control participants. SSRIs also significantly increased the number of non-serious adverse events. There were almost no data on suicidal behaviour, quality of life, and long-term effects. SSRIs might have statistically significant effects on depressive symptoms, but all trials were at high risk of bias and the clinical significance seems questionable. SSRIs significantly increase the risk of both serious and non-serious adverse events. The potential small beneficial effects seem to be outweighed by harmful effects. PROSPERO CRD42013004420.

  14. Tracking Adolescents With Global Positioning System-Enabled Cell Phones to Study Contextual Exposures and Alcohol and Marijuana Use: A Pilot Study.

    PubMed

    Byrnes, Hilary F; Miller, Brenda A; Wiebe, Douglas J; Morrison, Christopher N; Remer, Lillian G; Wiehe, Sarah E

    2015-08-01

    Measuring activity spaces, the places where adolescents spend time, provides information about relations between contextual exposures and risk behaviors. We studied whether contextual exposures in adolescents' activity spaces differ from contextual risks present in residential contexts and examined relationships between contextual exposures in activity spaces and alcohol/marijuana use. Adolescents (N = 18) aged 16-17 years carried global positioning system (GPS)-enabled smartphones for 1 week, with locations tracked. Activity spaces were created by connecting GPS points sequentially and adding buffers. Contextual exposure data (e.g., alcohol outlets) were connected to the routes. Adolescents reported their behaviors via text messages. Adolescent activity spaces intersected 24.3 census tracts on average and contained nine times more alcohol outlets than residential census tracts. Outlet exposure in activity spaces was related to drinking, and low-socioeconomic-status exposure was related to marijuana use. Findings suggest substantial differences between activity spaces and residential contexts and indicate that activity spaces are relevant for adolescent risk behaviors. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
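
    A minimal sketch of the activity-space construction described above (connect sequential GPS fixes into a route, buffer it, count outlets inside), assuming the Shapely library and coordinates already projected to meters; the track and outlet locations are hypothetical.

      from shapely.geometry import LineString, Point

      def activity_space(track_xy, buffer_m=200):
          # track_xy: time-ordered (x, y) fixes in meters (already projected).
          return LineString(track_xy).buffer(buffer_m)

      def outlets_inside(space, outlet_xy):
          # Count outlet locations falling within the buffered activity space.
          return sum(space.contains(Point(xy)) for xy in outlet_xy)

      track = [(0, 0), (150, 40), (400, 90), (650, 60)]      # hypothetical GPS fixes
      outlets = [(120, 10), (500, 400), (420, 80)]           # hypothetical outlets
      print(outlets_inside(activity_space(track), outlets))  # -> 2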

  15. The role of self-injury in the organization of behaviour

    PubMed Central

    Sandman, Curt A.; Kemp, Aaron S.; Mabini, Christopher; Pincus, David; Magnusson, Magnus

    2012-01-01

    Background: Self-injuring acts are among the most dramatic behaviours exhibited by human beings. There is no known single cause and no universally agreed-upon treatment. Sophisticated sequential and temporal analysis of behaviour has provided alternative descriptions of self-injury that offer new insights into its initiation and maintenance. Method: Forty hours of observations for each of 32 participants were collected over a contiguous two-week period. Twenty categories of behavioural and environmental events were recorded electronically, capturing the precise time each observation occurred. Temporal behavioural/environmental patterns associated with self-injurious events were revealed with a method (t-patterns; THEME) for detecting non-linear, real-time patterns. Results: Acts of self-injury contributed both to more patterns and to more complex patterns. Moreover, self-injury left its imprint on the organization of behaviour even when counts of self-injury were removed from the continuous record. Conclusions: The behaviour of participants was organized into a more diverse array of patterns when self-injurious behaviour (SIB) was present. Self-injuring acts may function as singular points, increasing coherence within self-organizing patterns of behaviour. PMID:22452417
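
    The THEME t-pattern algorithm itself is considerably more involved; the toy sketch below only illustrates its core idea of asking whether one event follows another within a fixed critical interval more often than independence would predict. All timestamps are hypothetical.

      import math

      def follows_within(a_times, b_times, d1=0.0, d2=5.0):
          # Number of A events followed by at least one B in the window (t+d1, t+d2].
          return sum(any(t + d1 < b <= t + d2 for b in b_times) for t in a_times)

      a_times = [2, 14, 30, 47, 63]      # hypothetical event-A timestamps (seconds)
      b_times = [5, 17, 33, 50, 80]      # hypothetical event-B timestamps (seconds)
      observed = follows_within(a_times, b_times)

      # Under independence, a 5 s window contains at least one B with probability
      # roughly 1 - exp(-rate_B * 5) for a Poisson-like event stream.
      rate_b = len(b_times) / 90.0
      expected = len(a_times) * (1 - math.exp(-rate_b * 5.0))
      print(observed, round(expected, 2))    # 4 observed vs ~1.21 expected by chance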

  16. Fast-responding liquid crystal light-valve technology for color-sequential display applications

    NASA Astrophysics Data System (ADS)

    Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.

    1996-04-01

    A color-sequential projection system has some distinct advantages over conventional systems that make it uniquely suitable for consumer TV as well as high-performance professional applications such as computer monitors and electronic cinema. A fast-responding light valve is clearly essential for a well-performing system. The response speed of transmissive LC light valves has thus far been marginal for good color rendition. Recently, the Sevchenko Institute has made some very fast reflective LC cells, which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector-emulation testbed. In our presentation we describe our highly efficient color-sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.

  17. Three-dimensional paper-based electrochemiluminescence immunodevice for multiplexed measurement of biomarkers and point-of-care testing.

    PubMed

    Ge, Lei; Yan, Jixian; Song, Xianrang; Yan, Mei; Ge, Shenguang; Yu, Jinghua

    2012-02-01

    In this work, electrochemiluminescence (ECL) immunoassay was introduced for the first time into the recently proposed microfluidic paper-based analytical device (μPAD), based on electrodes screen-printed directly on paper. Compared with commercial electrodes, screen-printed paper electrodes are more important for the further development of this paper-based ECL device toward simple, low-cost, and disposable applications. To further perform high-performance, high-throughput, simple, and inexpensive ECL immunoassay on a μPAD for point-of-care testing, a wax-patterned three-dimensional (3D) paper-based ECL device was demonstrated, also for the first time. In this 3D paper-based ECL device, eight carbon working electrodes, including their conductive pads, were screen-printed on one piece of square paper and, after stacking, shared the same Ag/AgCl reference and carbon counter electrodes on another piece of square paper. Using the typical tris(bipyridine)ruthenium(II)/tri-n-propylamine ECL system, the device was tested through the diagnosis of four tumor markers in real clinical serum samples. With the aid of a simple device holder and a section switch assembled on the analyzer, the eight working electrodes were sequentially switched into the circuit to trigger the ECL reaction over a potential sweep from 0.5 to 1.1 V at room temperature. In addition, this 3D paper-based ECL device can be easily integrated and combined with recently emerging paper electronics to further develop simple, sensitive, low-cost, disposable, and portable μPADs for point-of-care testing, public health, and environmental monitoring in remote regions and in developing or developed countries. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Pulsatile release of biomolecules from polydimethylsiloxane (PDMS) chips with hydrolytically degradable seals.

    PubMed

    Intra, Janjira; Glasgow, Justin M; Mai, Hoang Q; Salem, Aliasger K

    2008-05-08

    We demonstrate, for the first time, a robust, novel polydimethylsiloxane (PDMS) chip that can provide controlled pulsatile release of DNA-based molecules, proteins, and oligonucleotides without external stimuli or triggers. The PDMS chip with arrays of wells was constructed by replica molding. Poly(lactic acid-co-glycolic acid) (PLGA) polymer films of varying composition and thickness were used as seals for the wells. The composition, molecular weight, and thickness of the PLGA films were all parameters used to control the degradation rate of the seals and therefore the release profiles. Degradation of the films followed the PLGA composition order 50:50 PLGA > 75:25 PLGA > 85:15 PLGA at all time points beyond week 1. Scanning electron microscopy images showed that the films were initially smooth, became porous, and ruptured as the osmotic pressure pushed the degrading PLGA film outwards. Pulsatile release of DNA was controlled by the composition and thickness of the PLGA used to seal the well. Transfection experiments in a model Human Embryonic Kidney 293 (HEK293) cell line showed that plasmid DNA loaded in the wells was functional after pulsatile release in comparison to control plasmid DNA at all time points. Thicker films degraded faster than thinner films and could be used to fine-tune the release of DNA over day-length periods. Finally, the PDMS chip was shown to provide repeated sequential release of CpG oligonucleotides and a model antigen, ovalbumin (OVA), indicating significant potential for this device for vaccinations or applications that require defined, complex release patterns of a variety of chemicals, drugs, and biomolecules.

  19. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

    The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods that treat intervals as fixed points, as generally practiced in the pharmaceutical industry, sometimes yield inferior or, in extreme cases, even flawed analysis results for interval-censored data. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involving interval-censored data are another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency conflicts with many fundamental assumptions and principles of conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the tools available to work around their impact. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
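
    A minimal simulation of the point made above about assessment frequency: scheduled visits turn exact progression times into intervals, and sparser schedules widen those intervals (and distort any fixed-point imputation such as the midpoint). NumPy only; the visit gaps and event times are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      true_pfs = rng.exponential(scale=12.0, size=5)   # true progression times (months)

      def interval_censor(t, visit_gap):
          # Progression is only detected at the first scheduled visit after it occurs,
          # so the observation is the interval (last negative visit, detecting visit].
          right = np.ceil(t / visit_gap) * visit_gap
          return right - visit_gap, right

      for gap in (2.0, 6.0):                           # assessments every 2 vs 6 months
          intervals = [interval_censor(t, gap) for t in true_pfs]
          midpoints = [(lo + hi) / 2 for lo, hi in intervals]
          print(gap,
                [f"({lo:.0f}, {hi:.0f}]" for lo, hi in intervals],
                np.round(midpoints, 1))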

  20. Early mortality in multiple myeloma: the time-dependent impact of comorbidity: A population-based study in 621 real-life patients.

    PubMed

    Ríos-Tamayo, Rafael; Sáinz, Juan; Martínez-López, Joaquín; Puerta, José Manuel; Chang, Daysi-Yoe-Ling; Rodríguez, Teresa; Garrido, Pilar; de Veas, José Luís García; Romero, Antonio; Moratalla, Lucía; López-Fernández, Elisa; González, Pedro Antonio; Sánchez, María José; Jiménez-Moleón, José Juan; Jurado, Manuel; Lahuerta, Juan José

    2016-07-01

    Multiple myeloma is a heterogeneous disease with variable survival; this variability cannot be fully explained by current systems of risk stratification. Early mortality remains a serious obstacle to further improving the trend toward increased survival demonstrated in recent years. However, the definition of early mortality is not yet standardized. Importantly, no study to date has focused on the impact of comorbidity on early mortality in multiple myeloma. Therefore, we analyzed the role of baseline comorbidity in a large population-based cohort of 621 real-life myeloma patients over a 31-year period. To evaluate early mortality, a sequential multivariate regression model at 2, 6, and 12 months from diagnosis was fitted. Comorbidity was shown to have an independent impact on early mortality that was differential and time-dependent. Besides renal failure, respiratory disease at 2 months, liver disease at 6 months, and hepatitis C virus infection at 12 months were each associated with early mortality, after adjusting for other well-established prognostic factors. On the other hand, the long-term monitoring in our study points to a modest downward trend in early mortality over time. This is the first single-institution, population-based study to assess the impact of comorbidity on early mortality in multiple myeloma. We suggest that early mortality be analyzed at three key time points (2, 6, and 12 months) to allow comparisons between studies. Comorbidity plays a critical role in the outcome of myeloma patients in terms of early mortality. Am. J. Hematol. 91:700-704, 2016. © 2016 Wiley Periodicals, Inc.
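
    A minimal sketch of the "sequential multivariate regression at 2, 6, and 12 months" idea: separate logistic models for death by each landmark, adjusted for the same baseline covariates. This is not the authors' code; the column names are hypothetical and complete follow-up to 12 months is assumed.

      import statsmodels.api as sm

      def landmark_models(df, covariates, horizons=(2, 6, 12)):
          # df: pandas DataFrame with one row per patient; column names are hypothetical.
          # One logistic model per horizon: death within m months of diagnosis,
          # adjusted for the same baseline covariates each time.
          fits = {}
          for m in horizons:
              y = (df["months_to_death"] <= m).astype(int)
              X = sm.add_constant(df[covariates])
              fits[m] = sm.Logit(y, X).fit(disp=0)
          return fits

      # Hypothetical usage:
      # fits = landmark_models(df, ["age", "renal_failure", "respiratory_disease"])
      # print(fits[2].summary())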
