Sample records for "multiple time point"

  1. Common pitfalls in statistical analysis: The perils of multiple testing

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
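    The arithmetic behind this warning is easy to reproduce. The short Python sketch below (illustrative only; the p-values are made up and do not come from the article) shows how the family-wise error rate grows with the number of independent tests and applies a Holm step-down adjustment, one of the standard remedies.

    ```python
    # Illustrative: family-wise error rate (FWER) under repeated testing, and a
    # Holm step-down adjustment of a hypothetical set of p-values.
    import numpy as np

    alpha = 0.05
    for m in (1, 5, 10, 20):
        fwer = 1 - (1 - alpha) ** m          # P(at least one false positive)
        print(f"{m:2d} independent tests at alpha={alpha}: FWER ~ {fwer:.2f}")

    pvals = np.array([0.001, 0.012, 0.021, 0.040, 0.300])   # hypothetical p-values
    order = np.argsort(pvals)
    m = len(pvals)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[idx])   # Holm: multiply i-th smallest by (m - i + 1)
        running_max = max(running_max, adj)       # enforce monotonicity of adjusted p-values
        adjusted[idx] = running_max
    print("Holm-adjusted p-values:", adjusted.round(3))
    ```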

  2. Ruminal bacteria and protozoa composition, digestibility, and amino acid profile determined by multiple hydrolysis times.

    PubMed

    Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E

    2017-09-01

    Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
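    The "multiple time point hydrolysis and least squares nonlinear regression" step can be sketched as follows. Both the release/degradation model and the data points below are hypothetical placeholders (the paper's actual model is not given in this abstract); the sketch only illustrates fitting hydrolysis-time curves to recover an amino acid content estimate.

    ```python
    # Hedged sketch: estimate the "true" amino-acid content from hydrolysis time
    # points by nonlinear least squares, assuming a simple release/degradation model
    #   AA(t) = A0 * (1 - exp(-k_rel * t)) * exp(-k_deg * t)
    # The model form and data are illustrative, not taken from the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    def hydrolysis_model(t, A0, k_rel, k_deg):
        return A0 * (1.0 - np.exp(-k_rel * t)) * np.exp(-k_deg * t)

    t_h = np.array([2, 6, 12, 24, 48, 72, 120, 168], dtype=float)   # hydrolysis times (h)
    aa_obs = np.array([1.8, 3.9, 4.8, 5.1, 4.9, 4.7, 4.3, 4.0])     # synthetic measured AA (g/100 g)

    popt, pcov = curve_fit(hydrolysis_model, t_h, aa_obs, p0=(5.0, 0.2, 0.002))
    A0, k_rel, k_deg = popt
    print(f"estimated content A0 = {A0:.2f}, release k = {k_rel:.3f}/h, loss k = {k_deg:.4f}/h")
    ```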

  3. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications

    PubMed Central

    Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature. PMID:28459831
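    For orientation, the group operations being combined (point doubling and point addition) and the scalar point multiplication built on them look like the following software sketch. This is a plain double-and-add over a toy prime-field curve, not the paper's binary-field, Jacobian-coordinate PDPA hardware design; the curve parameters are illustrative.

    ```python
    # Minimal software sketch of elliptic-curve point multiplication (double-and-add)
    # over a tiny prime-field curve y^2 = x^3 + a*x + b (mod p). This only illustrates
    # the point-doubling / point-addition group operations; it is NOT the paper's
    # binary-field Jacobian-coordinate PDPA hardware design.
    p, a, b = 97, 2, 3                      # toy curve parameters (illustrative)

    def point_add(P, Q):
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                     # P + (-P) = point at infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p      # doubling slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p             # addition slope
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    def point_mul(k, P):
        R = None                            # accumulator starts at the point at infinity
        while k:
            if k & 1:
                R = point_add(R, P)
            P = point_add(P, P)             # doubling
            k >>= 1
        return R

    G = (3, 6)                              # on the toy curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
    print(point_mul(5, G))
    ```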

  4. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Optimization of a therapeutic protocol for intravenous injection of human mesenchymal stem cells after cerebral ischemia in adult rats.

    PubMed

    Omori, Yoshinori; Honmou, Osamu; Harada, Kuniaki; Suzuki, Junpei; Houkin, Kiyohiro; Kocsis, Jeffery D

    2008-10-21

    The systemic injection of human mesenchymal stem cells (hMSCs) prepared from adult bone marrow has therapeutic benefits after cerebral artery occlusion in rats, and may have multiple therapeutic effects at various sites and times within the lesion as the cells respond to a particular pathological microenvironment. However, the comparative therapeutic benefits of multiple injections of hMSCs at different time points after cerebral artery occlusion in rats remain unclear. In this study, we induced middle cerebral artery occlusion (MCAO) in rats using intra-luminal vascular occlusion, and infused hMSCs intravenously at a single 6 h time point (low and high cell doses) and various multiple time points after MCAO. From MRI analyses lesion volume was reduced in all hMSC cell injection groups as compared to serum alone injections. However, the greatest therapeutic benefit was achieved following a single high cell dose injection at 6 h post-MCAO, rather than multiple lower cell infusions over multiple time points. Three-dimensional analysis of capillary vessels in the lesion indicated that the capillary volume was equally increased in all of the cell-injected groups. Thus, differences in functional outcome in the hMSC transplantation subgroups are not likely the result of differences in angiogenesis, but rather from differences in neuroprotective effects.

  6. New fast DCT algorithms based on Loeffler's factorization

    NASA Astrophysics Data System (ADS)

    Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon

    2012-10-01

    This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3~4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.

  7. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
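    A minimal sketch of the selection rule described above: map the tracked salient points through each candidate height's homography, measure the inter-point distances over the image sequence, and keep the height whose homography makes those distances most nearly constant. The homographies and point tracks below are random placeholders, not real calibration data.

    ```python
    # Hedged sketch of the height-selection rule in the patent abstract: for each
    # candidate height, map the tracked salient points through that height's
    # homography and sum the variance of all pairwise distances over time; choose
    # the height with the least variation. Inputs are placeholders.
    import numpy as np
    from itertools import combinations

    def apply_h(H, pts):
        """Apply a 3x3 homography to an (N, 2) array of image points."""
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        mapped = homog @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    def distance_variation(H, tracks):
        """tracks: (T, N, 2) salient points over T frames."""
        var_sum = 0.0
        for i, j in combinations(range(tracks.shape[1]), 2):
            d = [np.linalg.norm(apply_h(H, f)[i] - apply_h(H, f)[j]) for f in tracks]
            var_sum += np.var(d)
        return var_sum

    rng = np.random.default_rng(0)
    tracks = rng.uniform(0, 100, size=(10, 4, 2))            # 10 frames, 4 salient points
    candidate_H = {h: np.eye(3) + 0.01 * h * rng.standard_normal((3, 3))
                   for h in (1.0, 1.5, 2.0, 2.5)}            # placeholder homographies per height (m)
    best_height = min(candidate_H, key=lambda h: distance_variation(candidate_H[h], tracks))
    print("estimated height (m):", best_height)
    ```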

  8. Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease

    PubMed Central

    Jie, Biao; Liu, Mingxia; Liu, Jun

    2016-01-01

    Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
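    The type of objective described above can be written down compactly. The sketch below evaluates a least-squares fit per time point plus a group (L2,1) penalty across time points, a fused-smoothness penalty on successive weight vectors, and an output-smoothness penalty on successive models; the regularization weights and data are illustrative, the output term is evaluated on a shared data matrix as a simplification, and the optimizer itself is omitted.

    ```python
    # Hedged sketch of the kind of objective described in the abstract; values and
    # data are illustrative, and only the objective evaluation (no solver) is shown.
    import numpy as np

    def tgsl_objective(W, X_list, y_list, lam_group, lam_fused, lam_out):
        # W: (d, T) weight matrix, column t is the model for time point t
        # X_list[t]: (n, d) imaging features, y_list[t]: (n,) clinical scores
        T = W.shape[1]
        fit = sum(np.sum((X_list[t] @ W[:, t] - y_list[t]) ** 2) for t in range(T))
        group = np.sum(np.linalg.norm(W, axis=1))                  # L2,1: each region grouped across time
        fused = sum(np.sum((W[:, t + 1] - W[:, t]) ** 2) for t in range(T - 1))
        out = sum(np.sum((X_list[t + 1] @ W[:, t + 1] - X_list[t + 1] @ W[:, t]) ** 2)
                  for t in range(T - 1))                           # output smoothness (simplified)
        return fit + lam_group * group + lam_fused * fused + lam_out * out

    rng = np.random.default_rng(1)
    d, T, n = 20, 4, 30
    X_list = [rng.standard_normal((n, d)) for _ in range(T)]
    y_list = [rng.standard_normal(n) for _ in range(T)]
    W = rng.standard_normal((d, T))
    print("objective value:", tgsl_objective(W, X_list, y_list, 0.1, 0.5, 0.5))
    ```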

  9. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    PubMed

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.

  10. Acoustic field in unsteady moving media

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Maestrello, L.; Ting, L.

    1995-01-01

    In the interaction of an acoustic field with a moving airframe, the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t, x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at point P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D whose union equals D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.

  11. A model for incomplete longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C

    2008-12-30

    In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at a given time point. Copyright 2008 John Wiley & Sons, Ltd.
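    The Gauss-Hermite quadrature step mentioned above can be illustrated in one dimension: marginalize a normal random subject effect out of a probit item response by summing over quadrature nodes. The item parameters and covariates below are hypothetical, and the real model integrates over multiple random effects.

    ```python
    # Hedged one-dimensional sketch of Gauss-Hermite quadrature for a probit item
    # response with a random subject effect b ~ N(0, sigma^2). Parameters are made up.
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite nodes/weights
    from scipy.stats import norm

    def marginal_prob(y, x, beta, sigma, n_quad=21):
        """P(y | x) = ∫ P(y | x, b) φ(b; 0, sigma^2) db via Gauss-Hermite quadrature."""
        nodes, weights = hermegauss(n_quad)             # ∫ f(z) e^{-z^2/2} dz ≈ Σ w_k f(z_k)
        weights = weights / np.sqrt(2 * np.pi)          # normalize to the N(0,1) density
        b = sigma * nodes                               # change of variables b = sigma * z
        p = norm.cdf(x @ beta + b)                      # probit response probability
        like = p if y == 1 else 1 - p
        return np.sum(weights * like)

    beta = np.array([0.5, -1.0])
    print(marginal_prob(y=1, x=np.array([1.0, 0.3]), beta=beta, sigma=0.8))
    ```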

  12. How to Assess the Existence of Competing Strategies in Cognitive Tasks: A Primer on the Fixed-Point Property

    PubMed Central

    van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik

    2014-01-01

    When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property. That is, there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed-point, and how to statistically assess the probability that indeed the observed response times are generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy to use tests of strategic behavior. PMID:25170893
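    The fixed-point property itself is easy to demonstrate numerically. The sketch below (in Python, not the accompanying R package) builds binary mixtures of two fixed base RT densities and shows that every mixture passes through the crossing point of the two base densities, whatever the mixture proportion; the base distributions are arbitrary choices.

    ```python
    # Illustration of the fixed-point property: mixtures p*f1 + (1-p)*f2 of two fixed
    # base RT densities all share the value at the crossing point of f1 and f2.
    import numpy as np
    from scipy.stats import gamma

    t = np.linspace(0.05, 3.0, 2000)
    f1 = gamma(a=3.0, scale=0.15).pdf(t)          # "fast strategy" RT density (illustrative)
    f2 = gamma(a=5.0, scale=0.20).pdf(t)          # "slow strategy" RT density (illustrative)

    mix = {p: p * f1 + (1 - p) * f2 for p in (0.2, 0.5, 0.8)}

    cross = np.where(np.diff(np.sign(f1 - f2)) != 0)[0][0]   # index where f1 and f2 cross
    print("crossing RT ≈", round(t[cross], 3))
    print({p: round(d[cross], 3) for p, d in mix.items()})   # same density for every proportion
    ```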

  13. Estimating Statistical Power When Making Adjustments for Multiple Tests

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2016-01-01

    In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…

  14. Statistical Power in Evaluations That Investigate Effects on Multiple Outcomes: A Guide for Researchers

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2018-01-01

    Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…

  15. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)

    1987-01-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.

  16. Identification of driving network of cellular differentiation from single sample time course gene expression data

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: a time course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements of our methods.
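    As a toy illustration of one signal mentioned above (not taken from this study), the sketch below computes a sliding-window lag-1 autocorrelation along a synthetic single-sample time course whose autoregressive coefficient drifts toward 1, the kind of rise expected near a bifurcation.

    ```python
    # Toy sketch: sliding-window lag-1 autocorrelation along a single-sample time
    # course. The AR(1) coefficient drifts toward 1, so later windows show the rising
    # autocorrelation expected near a critical transition. Data are synthetic.
    import numpy as np

    def lag1_autocorr(x):
        x = x - x.mean()
        return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

    rng = np.random.default_rng(3)
    phi = np.linspace(0.1, 0.95, 60)                 # AR(1) coefficient drifting toward 1
    expr = np.zeros(60)
    for i in range(1, 60):
        expr[i] = phi[i] * expr[i - 1] + rng.standard_normal()

    window = 15
    ac = [lag1_autocorr(expr[i:i + window]) for i in range(len(expr) - window + 1)]
    print("lag-1 autocorrelation by window:", np.round(ac[::10], 2))
    ```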

  17. Statistical Power in Evaluations That Investigate Effects on Multiple Outcomes: A Guide for Researchers

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2016-01-01

    In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…

  18. Multiple μ-stability of neural networks with unbounded time-varying delays.

    PubMed

    Wang, Lili; Chen, Tianping

    2014-05-01

    In this paper, we are concerned with a class of recurrent neural networks with unbounded time-varying delays. Based on the geometrical configuration of the activation functions, the phase space R^n can be divided into several Φη-type subsets. Accordingly, a new set of regions Ωη is proposed, and rigorous mathematical analysis is provided to derive the existence of an equilibrium point and its local μ-stability in each Ωη. It is concluded that the n-dimensional neural networks can exhibit at least 3^n equilibrium points, 2^n of which are μ-stable. Furthermore, due to the compatibility property, a set of new conditions is presented to address the dynamics in the remaining 3^n − 2^n subset regions. As direct applications of these results, we can obtain some criteria on the multiple exponential stability, multiple power stability, multiple log-stability, multiple log-log-stability and so on. In addition, the approach and results can also be extended to neural networks with K-level nonlinear activation functions and unbounded time-varying delays, which can store (2K+1)^n equilibrium points, (K+1)^n of which are locally μ-stable. Numerical examples are given to illustrate the effectiveness of our results. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    PubMed Central

    Churkin, Alexander; Barash, Danny

    2008-01-01

    Background: RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects, based on stability considerations, only those mutations that are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results: Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion: A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
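    A small worked count (illustrative, not from the paper) shows where the combinatorial cost comes from: each of the C(n, m) position choices admits 3^m base substitutions.

    ```python
    # Worked count behind the blow-up for exhaustive m-point mutation analysis: the
    # number of distinct m-point mutants of a length-n RNA over a 4-letter alphabet
    # is C(n, m) * 3^m, since each chosen position can change to 3 other bases.
    from math import comb

    def num_mutants(n, m):
        return comb(n, m) * 3 ** m

    for m in (1, 2, 3):
        print(f"n=100, m={m}: {num_mutants(100, m):,} mutants to fold exhaustively")
    # n=100, m=3 gives 4,365,900 structures -- hence the value of pre-selecting
    # candidate positions from a single RNAsubopt run on the wild type.
    ```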

  20. The Dynamics of Smoking-Related Disturbed Methylation: A Two Time-Point Study of Methylation Change in Smokers, Non-Smokers and Former Smokers

    EPA Science Inventory

    BACKGROUND: The evidence for epigenome-wide associations between smoking and DNA methylation continues to grow through cross-sectional studies. However, few large-scale investigations have explored the associations using observations for individuals at multiple time-points. ...

  1. Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials

    DTIC Science & Technology

    2017-04-06

    The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced ...

  2. [Development of the automatic dental X-ray film processor].

    PubMed

    Bai, J; Chen, H

    1999-07-01

    This paper introduces a multiple-point technique for detecting the density of dental X-ray films. Using this infrared multiple-point detection technique, a single-chip microcomputer control system analyzes the effectiveness of the film developing in real time in order to achieve a good image. Based on this new technology, we designed an intelligent automatic dental X-ray film processor.

  3. A Multivariate Model for the Meta-Analysis of Study Level Survival Data at Multiple Times

    ERIC Educational Resources Information Center

    Jackson, Dan; Rollins, Katie; Coughlin, Patrick

    2014-01-01

    Motivated by our meta-analytic dataset involving survival rates after treatment for critical leg ischemia, we develop and apply a new multivariate model for the meta-analysis of study level survival data at multiple times. Our data set involves 50 studies that provide mortality rates at up to seven time points, which we model simultaneously, and…

  4. Optical control of multi-stage thin film solar cell production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jian; Levi, Dean H.; Contreras, Miguel A.

    2016-05-17

    Embodiments include methods of depositing and controlling the deposition of a film in multiple stages. The disclosed deposition and deposition control methods include the optical monitoring of a deposition matrix to determine a time when at least one transition point occurs. In certain embodiments, the transition point or transition points are a stoichiometry point. Methods may also include controlling the length of time in which material is deposited during a deposition stage or controlling the amount of the first, second or subsequent materials deposited during any deposition stage in response to a determination of the time when a selected transition point occurs.

  5. Composite analysis for Escherichia coli at coastal beaches

    USGS Publications Warehouse

    Bertke, E.E.

    2007-01-01

    At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
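    The comparison described above can be sketched with synthetic data: average the concentrations from the individual points, form an idealized composite of equal aliquots, and test the paired difference of log10-transformed values. The counts and noise levels below are made up; the study itself used field samples analyzed by EPA Method 1603.

    ```python
    # Hedged sketch of the averaged-vs-composite comparison with synthetic counts.
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(42)
    n_days, n_points = 116, 3
    point_samples = rng.lognormal(mean=4.0, sigma=1.0, size=(n_days, n_points))  # CFU/100 mL, synthetic

    averaged = point_samples.mean(axis=1)
    # An ideal composite of equal aliquots has the same mean concentration, plus
    # analytical noise from the single plating of the mixed sample.
    composite = averaged * rng.lognormal(mean=0.0, sigma=0.1, size=n_days)

    t, p = ttest_rel(np.log10(averaged), np.log10(composite))
    print(f"paired t = {t:.2f}, p = {p:.3f}  (no significant difference expected)")
    ```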

  6. Reciprocal Associations between Negative Affect, Binge Eating, and Purging in the Natural Environment in Women with Bulimia Nervosa

    PubMed Central

    Lavender, Jason M.; Utzinger, Linsey M.; Cao, Li; Wonderlich, Stephen A.; Engel, Scott G.; Mitchell, James E.; Crosby, Ross D.

    2016-01-01

    Although negative affect (NA) has been identified as a common trigger for bulimic behaviors, findings regarding NA following such behaviors have been mixed. This study examined reciprocal associations between NA and bulimic behaviors using real-time, naturalistic data. Participants were 133 women with DSM-IV bulimia nervosa (BN) who completed a two-week ecological momentary assessment (EMA) protocol in which they recorded bulimic behaviors and provided multiple daily ratings of NA. A multilevel autoregressive cross-lagged analysis was conducted to examine concurrent, first-order autoregressive, and prospective associations between NA, binge eating, and purging across the day. Results revealed positive concurrent associations between all variables across all time points, as well as numerous autoregressive associations. For prospective associations, higher NA predicted subsequent bulimic symptoms at multiple time points; conversely, binge eating predicted lower NA at multiple time points, and purging predicted higher NA at one time point. Several autoregressive and prospective associations were also found between binge eating and purging. This study used a novel approach to examine NA in relation to bulimic symptoms, contributing to the existing literature by directly examining the magnitude of the associations, examining differences in the associations across the day, and controlling for other associations in testing each effect in the model. These findings may have relevance for understanding the etiology and/or maintenance of bulimic symptoms, as well as potentially informing psychological interventions for BN. PMID:26692122

  7. A method to approximate a closest loadability limit using multiple load flow solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong

    A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle node bifurcation point, using a pair of multiple load flow solutions. More strictly, the obtainable points by the method are the stationary points including not only CLL but also farthest and saddle points. An operating solution and a low voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. They can be used in monitoring loadability margin, in identification of weak spots in a power system and in the examination of an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair. The remaining computation time is less than that of an ordinary load flow.

  8. Multiple Role Occupancy in Midlife: Balancing Work and Family Life in Britain.

    ERIC Educational Resources Information Center

    Evandrou, Maria; Glaser, Karen; Henz, Ursula

    2002-01-01

    Investigates the extent of multiple-role occupancy among midlife individuals in Britain, focusing on work and family commitments. The proportion of individuals in midlife who have multiple roles, in terms of paid work and family care, at any one point in time is low, but a much higher proportion of individuals have occupied multiple roles over…

  9. A wavefront orientation method for precise numerical determination of tsunami travel time

    NASA Astrophysics Data System (ADS)

    Fine, I. V.; Thomson, R. E.

    2013-04-01

    We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wave front to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.

  10. Noise and time delay induce critical point in a bistable system

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqiang; Nie, Linru; Yu, Lilong; Zhang, Xinyu

    2014-07-01

    We study the relaxation time Tc of a time-delayed bistable system driven by two cross-correlated Gaussian white noises, one multiplicative and the other additive. By means of numerical calculations, the results indicate that: (i) the combination of noise and time delay can induce two critical points in the relaxation time at a certain noise cross-correlation strength λ, under the condition that the multiplicative noise intensity D equals the additive noise intensity α; (ii) for each fixed D or α, there are two symmetrical critical points located in the regions of positive and negative correlation, respectively. Namely, when λ equals the critical value λc, Tc is independent of the delay time and the curve of Tc versus τ is a horizontal line, but when |λ|>|λc| (or |λ|<|λc|), the relaxation time Tc monotonically increases (or decreases) with increasing delay time; (iii) when D = α, the change of λc with D consists of two curves symmetrical about the axis λc = 0, and the critical value λc is close to zero for smaller D and approaches +1 or -1 for greater D.

  11. Reciprocal associations between negative affect, binge eating, and purging in the natural environment in women with bulimia nervosa.

    PubMed

    Lavender, Jason M; Utzinger, Linsey M; Cao, Li; Wonderlich, Stephen A; Engel, Scott G; Mitchell, James E; Crosby, Ross D

    2016-04-01

    Although negative affect (NA) has been identified as a common trigger for bulimic behaviors, findings regarding NA following such behaviors have been mixed. This study examined reciprocal associations between NA and bulimic behaviors using real-time, naturalistic data. Participants were 133 women with bulimia nervosa (BN) according to the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders who completed a 2-week ecological momentary assessment protocol in which they recorded bulimic behaviors and provided multiple daily ratings of NA. A multilevel autoregressive cross-lagged analysis was conducted to examine concurrent, first-order autoregressive, and prospective associations between NA, binge eating, and purging across the day. Results revealed positive concurrent associations between all variables across all time points, as well as numerous autoregressive associations. For prospective associations, higher NA predicted subsequent bulimic symptoms at multiple time points; conversely, binge eating predicted lower NA at multiple time points, and purging predicted higher NA at 1 time point. Several autoregressive and prospective associations were also found between binge eating and purging. This study used a novel approach to examine NA in relation to bulimic symptoms, contributing to the existing literature by directly examining the magnitude of the associations, examining differences in the associations across the day, and controlling for other associations in testing each effect in the model. These findings may have relevance for understanding the etiology and/or maintenance of bulimic symptoms, as well as potentially informing psychological interventions for BN. (c) 2016 APA, all rights reserved.

  12. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    PubMed

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means of controlling non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulation of total nitrogen (TN) loss intensity for all 38 subbasins, spatial distribution characteristics of nitrogen loss and critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, whereas the effect of precipitation and runoff on nitrogen loss appeared only in months without fertilization and in several storm flood processes occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  13. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input output (I/O) numerical processor capable of performing floating-point, multiple precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16 bit microprocessor, a numerical coprocessor with eight 80 bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.

  14. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing has low measurement performance and a large error, and has been unable to meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a given threshold forward by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted using algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the signal received by the lidar is reduced and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and by fitting the parameters, and a timing accuracy of 4.63 ps is achieved.

  15. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
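    The core de-duplication idea of the voxel-based flag map can be sketched in a few lines: quantize each incoming point to a voxel index and keep it only if that voxel has not been flagged yet. The voxel size and point clouds below are arbitrary; the mesh and texture databases built on the GPU are not shown.

    ```python
    # Hedged sketch of the voxel-based "flag map": incoming 3D points are quantized
    # to voxel indices, and a point is kept only if its voxel is not yet flagged, so
    # redundant points from overlapping scans are dropped incrementally.
    import numpy as np

    def register_points(points, flagged, voxel_size=0.1):
        """points: (N, 3) float array; flagged: set of voxel index tuples (mutated in place)."""
        kept = []
        for p in points:
            key = tuple(np.floor(p / voxel_size).astype(int))
            if key not in flagged:
                flagged.add(key)
                kept.append(p)
        return np.array(kept)

    flagged = set()
    scan1 = np.random.default_rng(0).uniform(0, 1, size=(1000, 3))
    scan2 = scan1 + np.random.default_rng(1).normal(0, 0.005, size=(1000, 3))  # mostly redundant rescan
    print(len(register_points(scan1, flagged)), "points kept from scan 1")
    print(len(register_points(scan2, flagged)), "points kept from scan 2 (duplicates removed)")
    ```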

  16. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  17. Real-time electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator and the InfiniBand network

    NASA Astrophysics Data System (ADS)

    Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-09-01

    Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ~40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.

  18. DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs

    ERIC Educational Resources Information Center

    Shaik, Naj; Lowe, Sue; Pinegar, Kem

    2006-01-01

    Education is a service with a multiplicity of student interactions over time and across multiple touch points. Quality teaching needs to be supplemented by consistent quality supporting services for programs to succeed in the competitive distance learning landscape. ServQual and e-SQ scales have been proposed for measuring quality of traditional…

  19. Development of a Prototype System for Accessing Linked NCES Data. Working Paper Series.

    ERIC Educational Resources Information Center

    Salvucci, Sameena; Wenck, Stephen; Tyson, James

    A project has been developed to advance the capabilities of the National Center for Education Statistics (NCES) to support the dissemination of linked data from multiple surveys, multiple components within a survey, and multiple time points. An essential element of this study is the development of a software prototype system to facilitate NCES…

  20. Enhancing Ground Based Telescope Performance with Image Processing

    DTIC Science & Technology

    2013-11-13

    driven by the need to detect small faint objects with relatively short integration times to avoid streaking of the satellite image across multiple... the time right before the eclipse. The orbital elements of the satellite were entered into the SST's tracking system, so that the SST could be... short integration times, thereby avoiding streaking of the satellite image across multiple CCD pixels so that the objects are suitably modeled as point

  1. Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H.

    2015-12-01

    This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: 240Pu effective mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
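    One of the steps named above, obtaining singles, doubles, and triples from a measured multiplicity distribution, amounts to taking reduced factorial moments of the multiplicity histogram. The sketch below uses a made-up histogram and omits the rate, gate-fraction, and deadtime normalizations.

    ```python
    # Hedged sketch: reduced factorial moments of a measured multiplicity histogram
    # (the counts are invented; normalization factors are not shown).
    import numpy as np

    counts = np.array([5200, 2100, 640, 150, 30, 5])     # events with n = 0..5 neutrons in the gate
    P = counts / counts.sum()
    n = np.arange(len(P))

    singles = np.sum(n * P)                              # first reduced factorial moment
    doubles = np.sum(n * (n - 1) * P) / 2                # second
    triples = np.sum(n * (n - 1) * (n - 2) * P) / 6      # third
    print(f"singles={singles:.3f}, doubles={doubles:.3f}, triples={triples:.3f}")
    ```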

  2. Techniques for improving transients in learning control systems

    NASA Technical Reports Server (NTRS)

    Chang, C.-K.; Longman, Richard W.; Phan, Minh

    1992-01-01

    A discrete modern control formulation is used to study the nature of the transient behavior of the learning process during repetitions. Several alternative learning control schemes are developed to improve the transient performance. These include a new method using an alternating sign on the learning gain, which is very effective in limiting peak transients and also very useful in multiple-input, multiple-output systems. Other methods include learning at an increasing number of points progressing with time, or an increasing number of points of increasing density.

  3. Robustness analysis of uncertain dynamical neural networks with multiple time delays.

    PubMed

    Senan, Sibel

    2015-10-01

    This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. A noniterative greedy algorithm for multiframe point correspondence.

    PubMed

    Shafique, Khurram; Shah, Mubarak

    2005-01-01

    This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
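    For flavor, a minimal greedy matcher between two frames is sketched below; it is not the paper's multiframe algorithm, but it shows the greedy idea of taking candidate pairs in order of increasing distance, using each point at most once, with a gating threshold so unmatched detections can model entries and exits. The coordinates are made up.

    ```python
    # Minimal greedy frame-to-frame matcher (illustration of greedy point
    # correspondence, not the paper's multiframe algorithm).
    import numpy as np

    def greedy_match(prev_pts, curr_pts, gate=5.0):
        d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
        pairs, used_prev, used_curr = [], set(), set()
        for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
            if d[i, j] > gate:
                break                                    # remaining pairs exceed the gate
            if i not in used_prev and j not in used_curr:
                pairs.append((i, j))
                used_prev.add(i); used_curr.add(j)
        return pairs

    prev_pts = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])
    curr_pts = np.array([[0.5, 0.2], [19.0, 5.5], [40.0, 40.0]])   # last point: new entry
    print(greedy_match(prev_pts, curr_pts))   # e.g. [(0, 0), (2, 1)]; point 1 exits, detection 2 enters
    ```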

  5. Adaptive marker-free registration using a multiple point strategy for real-time and robust endoscope electromagnetic navigation.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku

    2015-02-01

    Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached at the endoscope's distal tip, to track endoscope movements in real time in a pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple point selection strategy. This method seeks to address the assumption that the endoscope is operated along the centerline of an intraluminal organ, which is easily violated during interventions. We introduce an adaptive strategy that generates multiple points from sensor measurements and endoscope tip center calibration. From these generated points, we adaptively choose the optimal point, the one closest to its assigned centerline of the hollow organ, to perform registration. The experimental results demonstrate that our proposed adaptive strategy significantly reduced the target registration error from 5.32 to 2.59 mm in static phantom validation, as well as from at least 7.58 mm to 4.71 mm in dynamic phantom validation, compared to currently available methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Effect of processing conditions on oil point pressure of moringa oleifera seed.

    PubMed

    Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A

    2015-07-01

    Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques for carrying out this operation are not only energy-sapping and time-consuming but also wasteful. In order to reduce the tedium involved in the expression of oil from moringa oleifera seed and develop efficient equipment for carrying out the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00 % wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increase in seed moisture content, but decreased with increase in heating temperature and heating time within the above ranges. The highest oil point pressure, 1.1239 MPa, was obtained at the processing conditions of 10.00 % moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure obtained was 0.3164 MPa and it occurred at the moisture content of 4.78 %, heating temperature of 100 °C and heating time of 30 min. Analysis of Variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of moringa oleifera seed at the 1 % level of significance. This was further demonstrated using Response Surface Methodology (RSM). Tukey's test and Duncan's Multiple Range Analysis successfully separated the means, and a multiple regression equation was used to express the relationship existing between the oil point pressure of moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.

  7. Note to Budget Cutters: The Arts Are Good Business--Multiple Studies Point to Arts Education as an Important Economic Engine

    ERIC Educational Resources Information Center

    Olson, Catherine Applefeld

    2009-01-01

    They say desperate times call for desperate measures. But in this time of economic uncertainty, the desperate cutting of budgets for arts funding and, by extension, all types of arts education, including music, is not prudent. That is the consensus of several national and local studies, which converge on a single point--that the arts actually can…

  8. Local pulse wave velocity estimated from small vibrations measured ultrasonically at multiple points on the arterial wall

    NASA Astrophysics Data System (ADS)

    Ito, Mika; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    Pulse wave velocity (PWV) is used as a diagnostic criterion for arteriosclerosis, a major cause of heart disease and cerebrovascular disease. However, there are several problems with conventional PWV measurement techniques. One is that a pulse wave is assumed to have only an incident component propagating at a constant speed from the heart to the femoral artery, and another is that PWV is only determined from a characteristic time such as the rise time of the blood pressure waveform. In this study, we noninvasively measured the velocity waveform of small vibrations at multiple points on the carotid arterial wall using ultrasound. Local PWV was determined by analyzing the phase component of the velocity waveform by the least squares method. This method allowed measurement of the time change of the PWV at approximately the arrival time of the pulse wave, which makes it possible to identify the period that is not contaminated by the reflected component.
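    The estimation idea can be sketched with synthetic waveforms: the phase of the fundamental component of the wall-velocity waveform shifts linearly with position along the artery, and a least-squares fit of phase versus position gives the local PWV. The waveform shape, spacing, and true PWV below are assumptions for illustration.

    ```python
    # Hedged sketch: local PWV from a least-squares fit of phase vs. position.
    # Waveforms are synthetic sinusoids delayed by x / PWV; real data contain
    # harmonics, noise, and reflections not modeled here.
    import numpy as np

    fs, f0, pwv_true = 10_000.0, 5.0, 6.0          # sample rate (Hz), fundamental (Hz), PWV (m/s)
    t = np.arange(0, 1.0, 1 / fs)
    positions = np.arange(8) * 0.002               # 8 measurement points, 2 mm apart (m)

    waves = [np.sin(2 * np.pi * f0 * (t - x / pwv_true)) for x in positions]

    k = int(round(f0 * len(t) / fs))               # DFT bin of the fundamental
    phases = np.unwrap([np.angle(np.fft.rfft(w)[k]) for w in waves])

    slope, _ = np.polyfit(positions, phases, 1)    # least-squares phase-vs-position slope (rad/m)
    pwv_est = -2 * np.pi * f0 / slope              # phase delay phi(x) = -2*pi*f0*x/PWV
    print(f"estimated local PWV ≈ {pwv_est:.2f} m/s")
    ```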

  9. Fast underdetermined BSS architecture design methodology for real time applications.

    PubMed

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute the M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The proposed architecture implementation and experimental comparison results show that the DHT design is two times faster than the state-of-the-art architecture.

  10. Ku-band multiple beam antenna

    NASA Technical Reports Server (NTRS)

    Chen, C. C.; Franklin, C. F.

    1980-01-01

    The frequency reuse capability is demonstrated for a Ku-band multiple beam antenna which provides contiguous low sidelobe spot beams for point-to-point communications between any two points within the continental United States (CONUS), or regional coverage beams for direct broadcast systems. A spot beam antenna in the 14/21 GHz band which provides contiguous overlapping beams covering CONUS and two discrete beams covering Hawaii and Alaska was designed, developed, and tested. Two reflector antennas are required for providing contiguous coverage of CONUS. Each comprises one offset parabolic reflector, one flat polarization diplexer, and two separate planar array feeds. This antenna system provides contiguous spot beam coverage of CONUS, utilizing 15 beams. Also designed, developed and demonstrated was a shaped contoured beam antenna system which provides contiguous four time zone coverage of CONUS from a single offset parabolic reflector incorporating one flat polarization diplexer and two separate planar array feeds. The beams which illuminate the Eastern time zone and the Mountain time zone are horizontally polarized, while the beams which illuminate the Central time zone and the Pacific time zone are vertically polarized. Frequency reuse is achieved by amplitude and polarization isolation.

  11. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The selected coherent motion regions are constrained to contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, where each factor is a measure of the maximum distance between a pair of feature point tracks.
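
    A small sketch of the trajectory similarity factor mentioned above, taken as the maximum pointwise distance between two feature point tracks over their common frames; the function and variable names are ours, not the patent's.

```python
import numpy as np

# Illustrative sketch of a trajectory similarity factor: the maximum distance
# between a pair of feature point tracks over the frames where both exist.
def trajectory_similarity(track_a: np.ndarray, track_b: np.ndarray) -> float:
    """track_a, track_b: arrays of shape (n_frames, 2) holding (x, y) positions
    over the same frame indices. Returns the maximum pointwise distance."""
    n = min(len(track_a), len(track_b))          # overlap in time
    d = np.linalg.norm(track_a[:n] - track_b[:n], axis=1)
    return float(d.max())

if __name__ == "__main__":
    t = np.arange(30)[:, None]
    a = np.hstack([t, 0.5 * t])                  # track moving with slope 0.5
    b = np.hstack([t + 2.0, 0.5 * t + 1.0])      # nearby, coherently moving track
    c = np.hstack([t, -0.5 * t])                 # diverging track
    print(trajectory_similarity(a, b))           # small -> likely the same object
    print(trajectory_similarity(a, c))           # large -> different objects
```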

  12. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun

    2017-04-01

    In recent years, updating the inventory of road infrastructures based on field work has been labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point or object based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework by combining multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside-trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single level (point, segment, or object) based features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.

  13. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability distribution of the largest knee point and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
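
    As a simplified illustration of the two ingredients named above (partial top-k selection and knee detection on the sorted prefix), the sketch below uses np.partition for the top-k step and a chord-distance heuristic for the knee; it is not the paper's cascading, probability-driven algorithm.

```python
import numpy as np

# Simplified sketch: (1) a partial top-k selection (via np.partition rather
# than the paper's quicksort variant) and (2) a simple knee detector applied
# to the sorted prefix.
def top_k_sorted(values: np.ndarray, k: int) -> np.ndarray:
    """Return the k largest values in descending order without a full sort."""
    part = np.partition(values, -k)[-k:]     # unordered top-k in O(n)
    return np.sort(part)[::-1]               # sort only the k selected values

def knee_index(sorted_desc: np.ndarray) -> int:
    """Index of the point farthest from the chord joining the first and last
    points of the sorted curve (a common simple knee heuristic)."""
    y = sorted_desc
    x = np.arange(y.size)
    chord = y[0] + (y[-1] - y[0]) * x / (y.size - 1)
    return int(np.argmax(np.abs(y - chord)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.pareto(2.0, 100_000)          # heavy-tailed scores, knee near the top
    prefix = top_k_sorted(data, 200)
    ki = knee_index(prefix)
    print(f"knee at rank {ki}, value {prefix[ki]:.3f}")
```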

  14. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ~ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are contained in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
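
    A rough numerical sketch of the idea is given below, assuming a simple Brownian-like multiplicative recursion for the interevent time with reflecting bounds; the exact recursion and parameter values are illustrative simplifications rather than the paper's model.

```python
import numpy as np

# Illustrative sketch only: a stochastic multiplicative recursion for the
# interevent time (a Brownian-like fluctuation of the mean interevent time),
# followed by a spectral estimate of the resulting counting signal.
rng = np.random.default_rng(0)

n_events = 100_000
sigma = 0.02                               # multiplicative noise strength
tau_min, tau_max = 1e-3, 1.0               # reflecting bounds on the interevent time

tau = np.empty(n_events)
tau[0] = 0.1
for k in range(n_events - 1):
    step = sigma * np.sqrt(tau[k]) * rng.standard_normal()
    tau[k + 1] = np.clip(tau[k] + step, tau_min, tau_max)

# Event times and the counting signal: number of events per unit-time window
events = np.cumsum(tau)
counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], 1.0))

# Periodogram of the counting signal and a rough power-law fit S(f) ~ 1/f^beta
spectrum = np.abs(np.fft.rfft(counts - counts.mean())) ** 2 / counts.size
freqs = np.fft.rfftfreq(counts.size, d=1.0)
sel = (freqs > 1e-3) & (freqs < 1e-1)
beta = -np.polyfit(np.log(freqs[sel]), np.log(spectrum[sel] + 1e-12), 1)[0]
print(f"rough spectral exponent estimate: beta ~ {beta:.2f}")
```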

  15. Mirrored pyramidal wells for simultaneous multiple vantage point microscopy.

    PubMed

    Seale, K T; Reiserer, R S; Markov, D A; Ges, I A; Wright, C; Janetopoulos, C; Wikswo, J P

    2008-10-01

    We report a novel method for obtaining simultaneous images from multiple vantage points of a microscopic specimen using size-matched microscopic mirrors created from anisotropically etched silicon. The resulting pyramidal wells enable bright-field and fluorescent side-view images, and when combined with z-sectioning, provide additional information for 3D reconstructions of the specimen. We have demonstrated the 3D localization and tracking over time of the centrosome of a live Dictyostelium discoideum. The simultaneous acquisition of images from multiple perspectives also provides a five-fold increase in the theoretical collection efficiency of emitted photons, a property which may be useful for low-light imaging modalities such as bioluminescence, or low abundance surface-marker labelling.

  16. Unified dead-time compensation structure for SISO processes with multiple dead times.

    PubMed

    Normey-Rico, Julio E; Flesch, Rodolfo C C; Santos, Tito L M

    2014-11-01

    This paper proposes a dead-time compensation structure for processes with multiple dead times. The controller is based on the filtered Smith predictor (FSP) dead-time compensator structure and is able to control stable, integrating, and unstable processes with multiple input/output dead times. An equivalent model of the process is first computed in order to define the predictor structure. Using this equivalent model, the primary controller and the predictor filter are tuned to obtain an internally stable closed-loop system that also meets specified closed-loop requirements in terms of set-point tracking, disturbance rejection, and robustness. Some simulation case studies are used to illustrate the good properties of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  17. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1988-01-01

    The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.

  18. A novel method for vaginal cylinder treatment planning: a seamless transition to 3D brachytherapy

    PubMed Central

    Wu, Vincent; Wang, Zhou; Patil, Sachin

    2012-01-01

    Purpose Standard treatment plan libraries are often used to ensure a quick turn-around time for vaginal cylinder treatments. Recently, there has been increasing interest in transitioning from conventional 2D radiograph based brachytherapy to 3D image based brachytherapy, which has resulted in a substantial increase in treatment planning time and a decrease in patient through-put. We describe a novel technique that significantly reduces the treatment planning time for CT-based vaginal cylinder brachytherapy. Material and methods The Oncentra MasterPlan TPS allows multiple sets of data points to be classified as applicator points, a capability that is harnessed in this method. The method relies on two hard anchor points (the first dwell position in a catheter and an applicator-configuration-specific dwell position used as the plan origin) and a soft anchor point beyond the last active dwell position that defines the axis of the catheter. The spatial locations of various data points on the applicator's surface and at 5 mm depth are stored in an Excel file that can easily be transferred into a patient CT data set using window operations and then used for treatment planning. The remainder of the treatment planning process remains unaffected. Results The treatment plans generated on the Oncentra MasterPlan TPS using this novel method yielded results comparable to those generated on the Plato TPS using a standard treatment plan library in terms of treatment times, dwell weights and dwell times for a given optimization method and normalization points. Less than 2% difference was observed between the treatment times generated by the two systems. Using the above method, the entire planning process, including CT importing, catheter reconstruction, multiple data point definition, optimization and dose prescription, can be completed in ~5–10 minutes. Conclusion The proposed method allows a smooth and efficient transition to 3D CT based vaginal cylinder brachytherapy planning. PMID:23349650
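
    The geometry step (tabulating surface and 5 mm depth points along the cylinder relative to anchor positions on the applicator axis) can be sketched as follows; the function name, spacing and dimensions are hypothetical and only illustrate the kind of point table that could be exported to a spreadsheet and re-imported as applicator points.

```python
import numpy as np

# Hypothetical geometry sketch (not the authors' TPS workflow): generate dose
# reference points on a vaginal cylinder surface and at 5 mm depth, expressed
# relative to an origin placed at a chosen dwell position on the applicator axis.
def cylinder_points(diameter_mm: float, length_mm: float,
                    step_mm: float = 5.0, n_angles: int = 4,
                    depth_mm: float = 5.0) -> np.ndarray:
    """Return an array of (x, y, z, label) rows; label 0 = surface, 1 = depth."""
    radius = diameter_mm / 2.0
    z_levels = np.arange(0.0, length_mm + 1e-9, step_mm)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    rows = []
    for z in z_levels:
        for a in angles:
            for label, r in ((0, radius), (1, radius + depth_mm)):
                rows.append((r * np.cos(a), r * np.sin(a), z, label))
    return np.array(rows)

if __name__ == "__main__":
    pts = cylinder_points(diameter_mm=30.0, length_mm=40.0)
    # Such a table could be exported (e.g. to a spreadsheet) and re-imported
    # into a planning system as applicator points, as the abstract describes.
    print(pts.shape)          # (number of points, 4)
    print(pts[:4])
```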

  19. An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control

    DTIC Science & Technology

    1989-12-01

    Zicker. MMAE-Based Control with Space-Time Point Process Observations. IEEE Transactions on Aerospace and Electronic Systems, AES-21(3):292-300, 1985. ... Transactions of the Conference of Army Mathematicians, Bethesda MD, 1982. (AD-P001 033). 65. William L. Zicker. Pointing and Tracking of Particle

  20. Photonic crystals possessing multiple Weyl points and the experimental observation of robust surface states

    PubMed Central

    Chen, Wen-Jie; Xiao, Meng; Chan, C. T.

    2016-01-01

    Weyl points, as monopoles of Berry curvature in momentum space, have captured much attention recently in various branches of physics. Realizing topological materials that exhibit such nodal points is challenging and indeed, Weyl points have been found experimentally in transition metal arsenide and phosphide and gyroid photonic crystal whose structure is complex. If realizing even the simplest type of single Weyl nodes with a topological charge of 1 is difficult, then making a real crystal carrying higher topological charges may seem more challenging. Here we design, and fabricate using planar fabrication technology, a photonic crystal possessing single Weyl points (including type-II nodes) and multiple Weyl points with topological charges of 2 and 3. We characterize this photonic crystal and find nontrivial 2D bulk band gaps for a fixed kz and the associated surface modes. The robustness of these surface states against kz-preserving scattering is experimentally observed for the first time. PMID:27703140

  1. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately by adaptively penalizing the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of three orders for body weight (BWE) and body length (BL) and of two orders for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5 for either single or pairwise time-points. Moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of the comprehensive selection for BWE and the main morphological traits.
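
    The Legendre covariate construction underlying such random regression models can be sketched as follows: ages are mapped to [-1, 1] and evaluated with Legendre polynomials up to a chosen order. The ages and orders here are illustrative, and the actual model fitting (variance components, REML, etc.) is not shown.

```python
import numpy as np
from numpy.polynomial import legendre

# Minimal sketch of the Legendre covariate construction used in random
# regression models: standardize age to [-1, 1], then evaluate Legendre
# polynomials P_0..P_order at each standardized age.
def legendre_covariates(ages_days: np.ndarray, order: int) -> np.ndarray:
    """Return the (n_records, order + 1) matrix of Legendre covariates."""
    a_min, a_max = ages_days.min(), ages_days.max()
    std_age = -1.0 + 2.0 * (ages_days - a_min) / (a_max - a_min)   # map to [-1, 1]
    return legendre.legvander(std_age, order)

if __name__ == "__main__":
    ages = np.array([60, 80, 100, 120, 140], dtype=float)    # days of age
    phi_bwe = legendre_covariates(ages, order=3)   # e.g. a higher order for body weight
    phi_bd = legendre_covariates(ages, order=2)    # e.g. a lower order for body depth
    print(phi_bwe.round(3))
    print(phi_bd.round(3))
```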

  2. Field programmable gate array-assigned complex-valued computation and its limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com; Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien; Zwick, Wolfgang

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure that simulation accuracy is maintained.
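
    As a toy counterpart to the fixed-point versus floating-point comparison described above, the sketch below performs a Q15 fixed-point complex multiplication in software and reports its relative error against a 64-bit floating-point reference; it is not the paper's FPGA implementation.

```python
# Toy illustration (not the paper's FPGA design): a Q15 fixed-point complex
# multiplication compared against 64-bit floating point, showing the kind of
# relative error examined when choosing a fixed-point word length.
Q = 15                       # fractional bits (Q15 format)
SCALE = 1 << Q

def to_q15(x: float) -> int:
    return int(round(x * SCALE))

def q15_complex_mul(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi) with Q15 operands; products are rescaled to Q15."""
    re = (ar * br - ai * bi) >> Q
    im = (ar * bi + ai * br) >> Q
    return re, im

if __name__ == "__main__":
    a, b = 0.431 - 0.218j, -0.674 + 0.512j
    exact = a * b                                         # 64-bit float reference
    re, im = q15_complex_mul(to_q15(a.real), to_q15(a.imag),
                             to_q15(b.real), to_q15(b.imag))
    approx = complex(re / SCALE, im / SCALE)
    rel_err = abs(approx - exact) / abs(exact)
    print(f"float: {exact:.6f}  fixed: {approx:.6f}  relative error: {rel_err:.2e}")
```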

  3. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    ERIC Educational Resources Information Center

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  4. Predictors of outcomes of psychological treatments for disordered gambling: A systematic review.

    PubMed

    Merkouris, S S; Thomas, S A; Browning, C J; Dowling, N A

    2016-08-01

    This systematic review aimed to synthesise the evidence relating to pre-treatment predictors of gambling outcomes following psychological treatment for disordered gambling across multiple time-points (i.e., post-treatment, short-term, medium-term, and long-term). A systematic search from 1990 to 2016 identified 50 articles, from which 11 socio-demographic, 16 gambling-related, 21 psychological/psychosocial, 12 treatment, and no therapist-related variables, were identified. Male gender and low depression levels were the most consistent predictors of successful treatment outcomes across multiple time-points. Likely predictors of successful treatment outcomes also included older age, lower gambling symptom severity, lower levels of gambling behaviours and alcohol use, and higher treatment session attendance. Significant associations, at a minimum of one time-point, were identified between successful treatment outcomes and being employed, ethnicity, no gambling debt, personality traits and being in the action stage of change. Mixed results were identified for treatment goal, while education, income, preferred gambling activity, problem gambling duration, anxiety, any psychiatric comorbidity, psychological distress, substance use, prior gambling treatment and medication use were not significantly associated with treatment outcomes at any time-point. Further research involving consistent treatment outcome frameworks, examination of treatment and therapist predictor variables, and evaluation of predictors across long-term follow-ups is warranted to advance this developing field of research. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. Mirrored pyramidal wells for simultaneous multiple vantage point microscopy

    PubMed Central

    Seale, K.T.; Reiserer, R.S.; Markov, D.A.; Ges, I.A.; Wright, C.; Janetopoulos, C.; Wikswo, J.P.

    2013-01-01

    Summary We report a novel method for obtaining simultaneous images from multiple vantage points of a microscopic specimen using size-matched microscopic mirrors created from anisotropically etched silicon. The resulting pyramidal wells enable bright-field and fluorescent side-view images, and when combined with z-sectioning, provide additional information for 3D reconstructions of the specimen. We have demonstrated the 3D localization and tracking over time of the centrosome of a live Dictyostelium discoideum. The simultaneous acquisition of images from multiple perspectives also provides a five-fold increase in the theoretical collection efficiency of emitted photons, a property which may be useful for low-light imaging modalities such as bioluminescence, or low abundance surface-marker labelling. PMID:19017196

  6. Injection System for Multi-Well Injection Using a Single Pump

    PubMed Central

    Wovkulich, Karen; Stute, Martin; Protus, Thomas J.; Mailloux, Brian J.; Chillrud, Steven N.

    2015-01-01

    Many hydrological and geochemical studies rely on data resulting from injection of tracers and chemicals into groundwater wells. The even distribution of liquids to multiple injection points can be challenging or expensive, especially when using multiple pumps. An injection system was designed using one chemical metering pump to evenly distribute the desired influent simultaneously to 15 individual injection points through an injection manifold. The system was constructed with only one metal part contacting the fluid due to the low pH of the injection solutions. The injection manifold system was used during a three-month pilot scale injection experiment at the Vineland Chemical Company Superfund site. During the two injection phases of the experiment (Phase I = 0.27 L/min total flow, Phase II = 0.56 L/min total flow), flow measurements were made 20 times over three months; an even distribution of flow to each injection well was maintained (RSD <4%). This durable system is expandable to at least 16 injection points and should be adaptable to other injection experiments that require distribution of air-stable liquids to multiple injection points with a single pump. PMID:26140014

  7. A method for analyzing temporal patterns of variability of a time series from Poincare plots.

    PubMed

    Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E

    2012-07-01

    The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
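
    For readers unfamiliar with the underlying construction, the sketch below builds lagged Poincaré pairs (x_n, x_{n+m}) from a synthetic interval series and computes the conventional SD1/SD2 descriptors for several lags; the temporal analysis (TPV) proposed in the paper goes beyond these cumulative measures.

```python
import numpy as np

# Simple sketch of lagged Poincaré pairs (x_n, x_{n+m}) from an interval series
# and the conventional SD1/SD2 descriptors; data are synthetic.
def poincare_sd(x: np.ndarray, lag: int = 1):
    """Return (SD1, SD2) of the Poincaré plot of x against x shifted by `lag`."""
    a, b = x[:-lag], x[lag:]
    sd1 = np.std((b - a) / np.sqrt(2), ddof=1)   # dispersion across the identity line
    sd2 = np.std((b + a) / np.sqrt(2), ddof=1)   # dispersion along the identity line
    return sd1, sd2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic R-R-like series: slow drift plus beat-to-beat noise (seconds)
    n = 2000
    rr = 0.8 + 0.05 * np.sin(2 * np.pi * np.arange(n) / 300) + 0.02 * rng.standard_normal(n)
    for m in (1, 2, 5, 10):
        sd1, sd2 = poincare_sd(rr, lag=m)
        print(f"lag {m:2d}: SD1 = {sd1:.4f} s, SD2 = {sd2:.4f} s")
```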

  8. A multiple imputation strategy for sequential multiple assignment randomized trials

    PubMed Central

    Shortreed, Susan M.; Laber, Eric; Stroup, T. Scott; Pineau, Joelle; Murphy, Susan A.

    2014-01-01

    Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient’s health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges, and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trial of Intervention and Effectiveness (CATIE), one of the most well-known SMARTs to date. PMID:24919867

  9. HIV-1 infections with multiple founders are associated with higher viral loads than infections with single founders.

    PubMed

    Janes, Holly; Herbeck, Joshua T; Tovanabutra, Sodsai; Thomas, Rasmi; Frahm, Nicole; Duerr, Ann; Hural, John; Corey, Lawrence; Self, Steve G; Buchbinder, Susan P; McElrath, M Juliana; O'Connell, Robert J; Paris, Robert M; Rerks-Ngarm, Supachai; Nitayaphan, Sorachai; Pitisuttihum, Punnee; Kaewkungwal, Jaranit; Robb, Merlin L; Michael, Nelson L; Mullins, James I; Kim, Jerome H; Gilbert, Peter B; Rolland, Morgane

    2015-10-01

    Given the variation in the HIV-1 viral load (VL) set point across subjects, as opposed to a fairly stable VL over time within an infected individual, it is important to identify the characteristics of the host and virus that affect VL set point. Although recently infected individuals with multiple phylogenetically linked HIV-1 founder variants represent a minority of HIV-1 infections, we found, in two different cohorts, that more diverse HIV-1 populations in early infection were associated with significantly higher VL 1 year after HIV-1 diagnosis.

  10. Fast REDOR with CPMG multiple-echo acquisition

    NASA Astrophysics Data System (ADS)

    Hung, Ivan; Gan, Zhehong

    2014-01-01

    Rotational-Echo Double Resonance (REDOR) is a widely used experiment for distance measurements in solids. The conventional REDOR experiment measures the signal dephasing from hetero-nuclear recoupling under magic-angle spinning (MAS) in a point-by-point manner. A modified Carr-Purcell-Meiboom-Gill (CPMG) multiple-echo scheme is introduced for fast REDOR measurement. REDOR curves are measured from the CPMG echo amplitude modulation under dipolar recoupling. The real-time CPMG-REDOR experiment can speed up the measurement by an order of magnitude. The effects of hetero-nuclear recoupling, the Bloch-Siegert shift and echo truncation on the signal acquisition are discussed and demonstrated.

  11. Initial Correction versus Negative Marking in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Van Hecke, Tanja

    2015-01-01

    Optimal assessment tools should measure in a limited time the knowledge of students in a correct and unbiased way. A method for automating the scoring is multiple choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: the number right scoring, the initial correction (IC) and…
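
    The probabilistic comparison can be illustrated with the standard guessing model below: number-right scoring awards 1 or 0, while negative marking subtracts 1/(k-1) for a wrong answer so that blind guessing has an expected score of zero. The knowledge probabilities are arbitrary, and this is not the article's full model of the probability to pass.

```python
# Small numeric illustration of two multiple-choice scoring rules under a
# simple guessing model on a k-option item. Probabilities are illustrative only.
def expected_score(p_know: float, k: int, negative_marking: bool) -> float:
    """Expected item score when a student knows the answer with probability
    p_know and otherwise guesses uniformly among the k options."""
    p_correct = p_know + (1.0 - p_know) / k
    penalty = 1.0 / (k - 1) if negative_marking else 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

if __name__ == "__main__":
    for p in (0.0, 0.4, 0.8):
        nr = expected_score(p, k=4, negative_marking=False)
        nm = expected_score(p, k=4, negative_marking=True)
        print(f"p_know = {p:.1f}:  number-right = {nr:.3f},  negative marking = {nm:.3f}")
```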

  12. [Application of damage control concept in severe limbs fractures combining with multiple trauma].

    PubMed

    Bayin, Er-gu-le; Jin, Hong-bing; Li, Ming

    2015-09-01

    To discuss the application and clinical effect of the damage control concept in the treatment of severe limb fractures combined with multiple trauma. From July 2009 to July 2012, 30 patients with severe limb fractures combined with multiple trauma were treated according to the damage control concept, including 20 males and 10 females with an average age of (34.03 ± 12.81) years (range, 20 to 60 years); the ISS averaged (35.00 ± 12.81) points (range, 26 to 54 points). The control group contained 30 patients with severe limb fractures combined with multiple trauma treated by traditional operation from June 2006 to June 2009; there were 23 males and 7 females with an average age of (34.23 ± 11.04) years (range, 18 to 65 years). The ISS averaged (35.56 ± 11.04) points (range, 26 to 51 points). Age, gender, ISS, Gustilo classification, operation time, intraoperative blood loss, blood transfusion, postoperative complications and mortality rate were observed and compared. In the damage control group, 28 patients survived and 2 (6.7%) died; the 6 postoperative complications included 2 cases of adult respiratory distress syndrome, 1 case of multiple organ failure, 1 case of disseminated intravascular coagulation and 2 cases of wound infection. In the control group, 22 patients survived and 8 (26.7%) died; the 13 postoperative complications included 4 cases of adult respiratory distress syndrome, 2 cases of multiple organ failure, 2 cases of disseminated intravascular coagulation and 3 cases of wound infection. There were no statistically significant differences between the two groups in age, gender, ISS, Gustilo classification or complications (P > 0.05); however, there were statistically significant differences in mortality rate, operation time, blood loss and blood transfusion between the two groups (P < 0.05). The damage control concept provides rapid and effective therapy for severe limb fractures combined with multiple trauma, and can improve the survival rate and reduce complications.

  13. [Modern concepts of trauma care and multiple trauma management in oral and maxillofacial region].

    PubMed

    Tan, Yinghui

    2015-06-01

    Multiple trauma management requires the application of modern trauma care theories. Optimal treatment results can be achieved by reinforcing cooperation and formulating a treatment plan together with other disciplines. Based on modern theories in trauma care and our understanding of the theoretical points, this paper analyzes the injury assessment strategies and methods in oral and maxillofacial multiple trauma management. Moreover, this paper discusses operating time and other influencing factors, as well as the proposed timing of and indications for definitive surgery, in the comprehensive management of oral and maxillofacial multiple trauma patients with associated injuries in other body parts. We hope that this paper can help stomatological physicians deepen their understanding of modern trauma care theories and improve their capacity and results in the treatment of oral and maxillofacial multiple trauma.

  14. Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao

    2017-12-01

    In this paper, a fast channel modeling method is studied to solve the problem of computing the reflection channel gain for multiple-input multiple-output visible light communication (MIMO-VLC) systems. To reduce the computational complexity associated with the number of reflections, no more than 3 reflections are taken into consideration in VLC. We treat a higher-order reflection link as a composition of the corresponding number of line-of-sight links, and we first present a reflection residual component to characterize higher-order reflections (more than 2 reflections). We present computer simulation results for the point-to-point channel impulse response, received optical power and received signal-to-noise ratio. Based on theoretical analysis and simulation results, the proposed method can effectively reduce the computational complexity of higher-order reflections in channel modeling.
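
    The line-of-sight building block referred to above can be sketched with the standard Lambertian LOS gain model; the geometry, semi-angle and detector parameters below are arbitrary, and the reflection residual component itself is not implemented here.

```python
import numpy as np

# Sketch of a single line-of-sight (LOS) VLC link gain using the standard
# Lambertian model; the paper composes higher-order reflection links from
# such LOS components. Geometry and photodiode parameters are arbitrary.
def los_gain(tx, rx, rx_normal, semi_angle_deg=60.0, area_m2=1e-4,
             fov_deg=70.0) -> float:
    """DC channel gain H(0) between an LED at `tx` (pointing straight down)
    and a photodiode at `rx` with unit normal `rx_normal`."""
    m = -np.log(2) / np.log(np.cos(np.radians(semi_angle_deg)))  # Lambertian order
    v = np.asarray(rx, float) - np.asarray(tx, float)
    d = np.linalg.norm(v)
    cos_phi = -v[2] / d                            # irradiance angle (LED faces -z)
    cos_psi = np.dot(-v / d, rx_normal)            # incidence angle at the detector
    if cos_psi < np.cos(np.radians(fov_deg)):
        return 0.0                                 # outside the receiver field of view
    return (m + 1) * area_m2 / (2 * np.pi * d**2) * cos_phi**m * cos_psi

if __name__ == "__main__":
    led = (0.0, 0.0, 3.0)                          # LED on the ceiling
    pd = (0.5, 0.5, 0.85)                          # photodiode on a desk, facing up
    print(f"H_LOS = {los_gain(led, pd, rx_normal=np.array([0.0, 0.0, 1.0])):.3e}")
```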

  15. Brain MRI volumetry in a single patient with mild traumatic brain injury.

    PubMed

    Ross, David E; Castelvecchi, Cody; Ochs, Alfred L

    2013-01-01

    This letter to the editor describes the case of a 42 year old man with mild traumatic brain injury and multiple neuropsychiatric symptoms which persisted for a few years after the injury. Initial CT scans and MRI scans of the brain showed no signs of atrophy. Brain volume was measured using NeuroQuant®, an FDA-approved, commercially available software method. Volumetric cross-sectional (one point in time) analysis also showed no atrophy. However, volumetric longitudinal (two points in time) analysis showed progressive atrophy in several brain regions. This case illustrated in a single patient the principle discovered in multiple previous group studies, namely that the longitudinal design is more powerful than the cross-sectional design for finding atrophy in patients with traumatic brain injury.

  16. Accessing the exceptional points of parity-time symmetric acoustics

    PubMed Central

    Shi, Chengzhi; Dubois, Marc; Chen, Yun; Cheng, Lei; Ramezani, Hamidreza; Wang, Yuan; Zhang, Xiang

    2016-01-01

    Parity-time (PT) symmetric systems experience phase transition between PT exact and broken phases at exceptional point. These PT phase transitions contribute significantly to the design of single mode lasers, coherent perfect absorbers, isolators, and diodes. However, such exceptional points are extremely difficult to access in practice because of the dispersive behaviour of most loss and gain materials required in PT symmetric systems. Here we introduce a method to systematically tame these exceptional points and control PT phases. Our experimental demonstration hinges on an active acoustic element that realizes a complex-valued potential and simultaneously controls the multiple interference in the structure. The manipulation of exceptional points offers new routes to broaden applications for PT symmetric physics in acoustics, optics, microwaves and electronics, which are essential for sensing, communication and imaging. PMID:27025443

  17. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using the eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on the space-time interest points, and (3) estimating the crowd density based on multiple regression. Experimental results on the PETS 2009 dataset demonstrate the efficiency and robustness of the proposed method.
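
    Step (1) can be illustrated as follows: at a candidate pixel of a small video volume, form the local spatio-temporal gradient matrix from [Ix, Iy, It] over a neighbourhood and inspect its eigenvalues, which are large in all three directions where there is spatio-temporal motion. The synthetic video, window size and test locations are invented for the sketch.

```python
import numpy as np

# Rough sketch of the cue behind space-time interest points: the eigenvalues of
# the local spatio-temporal gradient matrix G = sum of g g^T with g = [Ix, Iy, It].
def gradient_matrix_eigenvalues(video: np.ndarray, y: int, x: int, t: int,
                                win: int = 2) -> np.ndarray:
    """video: array (T, H, W). Returns the 3 eigenvalues (descending) of the
    local spatio-temporal gradient matrix around (t, y, x)."""
    It, Iy, Ix = np.gradient(video.astype(float))          # gradients along t, y, x
    sl = (slice(t - win, t + win + 1),
          slice(y - win, y + win + 1),
          slice(x - win, x + win + 1))
    g = np.stack([Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()])   # 3 x N
    G = g @ g.T
    return np.sort(np.linalg.eigvalsh(G))[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, H, W = 20, 40, 40
    video = 0.05 * rng.standard_normal((T, H, W))
    for t in range(T):                       # a bright square moving to the right
        video[t, 15:25, 5 + t:15 + t] += 1.0
    # Large eigenvalues in all three directions indicate spatio-temporal motion
    print("moving edge :", gradient_matrix_eigenvalues(video, y=20, x=24, t=10).round(2))
    print("static area :", gradient_matrix_eigenvalues(video, y=5, x=35, t=10).round(2))
```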

  18. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  19. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  20. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    Arnold, Jeffrey

    2018-05-14

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
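
    One of the summation-accuracy techniques mentioned in the talk outline is compensated (Kahan) summation; a minimal sketch is given below for illustration and is not taken from the talk's material.

```python
# Kahan (compensated) summation: recovers most of the rounding error lost by
# naive accumulation of many floating-point terms.
def kahan_sum(values):
    total = 0.0
    c = 0.0                      # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y            # low-order bits of y are lost here...
        c = (t - total) - y      # ...and recovered into c for the next term
        total = t
    return total

if __name__ == "__main__":
    n = 1_000_000
    data = [0.1] * n
    naive = sum(data)
    compensated = kahan_sum(data)
    reference = 0.1 * n          # correctly rounded reference for this simple case
    print(f"naive error:       {abs(naive - reference):.3e}")
    print(f"compensated error: {abs(compensated - reference):.3e}")
```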

  1. Analyzing latent state-trait and multiple-indicator latent growth curve models as multilevel structural equation models

    PubMed Central

    Geiser, Christian; Bishop, Jacob; Lockhart, Ginger; Shiffman, Saul; Grenard, Jerry L.

    2013-01-01

    Latent state-trait (LST) and latent growth curve (LGC) models are frequently used in the analysis of longitudinal data. Although it is well-known that standard single-indicator LGC models can be analyzed within either the structural equation modeling (SEM) or multilevel (ML; hierarchical linear modeling) frameworks, few researchers realize that LST and multivariate LGC models, which use multiple indicators at each time point, can also be specified as ML models. In the present paper, we demonstrate that using the ML-SEM rather than the SL-SEM framework to estimate the parameters of these models can be practical when the study involves (1) a large number of time points, (2) individually-varying times of observation, (3) unequally spaced time intervals, and/or (4) incomplete data. Despite the practical advantages of the ML-SEM approach under these circumstances, there are also some limitations that researchers should consider. We present an application to an ecological momentary assessment study (N = 158 youths with an average of 23.49 observations of positive mood per person) using the software Mplus (Muthén and Muthén, 1998–2012) and discuss advantages and disadvantages of using the ML-SEM approach to estimate the parameters of LST and multiple-indicator LGC models. PMID:24416023

  2. The relevance of time series in molecular ecology and conservation biology.

    PubMed

    Habel, Jan C; Husemann, Martin; Finger, Aline; Danley, Patrick D; Zachos, Frank E

    2014-05-01

    The genetic structure of a species is shaped by the interaction of contemporary and historical factors. Analyses of individuals from the same population sampled at different points in time can help to disentangle the effects of current and historical forces and facilitate the understanding of the forces driving the differentiation of populations. The use of such time series allows for the exploration of changes at the population and intraspecific levels over time. Material from museum collections plays a key role in understanding and evaluating observed population structures, especially if large numbers of individuals have been sampled from the same locations at multiple time points. In these cases, changes in population structure can be assessed empirically. The development of new molecular markers relying on short DNA fragments (such as microsatellites or single nucleotide polymorphisms) allows for the analysis of long-preserved and partially degraded samples. Recently developed techniques to construct genome libraries with a reduced complexity and next generation sequencing and their associated analysis pipelines have the potential to facilitate marker development and genotyping in non-model species. In this review, we discuss the problems with sampling and available marker systems for historical specimens and demonstrate that temporal comparative studies are crucial for the estimation of important population genetic parameters and to measure empirically the effects of recent habitat alteration. While many of these analyses can be performed with samples taken at a single point in time, the measurements are more robust if multiple points in time are studied. Furthermore, examining the effects of habitat alteration, population declines, and population bottlenecks is only possible if samples before and after the respective events are included. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.

  3. Longitudinal Indicators of the Social Context of Families: Beyond the Snapshot

    ERIC Educational Resources Information Center

    Moore, Kristin Anderson; Vandivere, Sharon

    2007-01-01

    Longitudinal indicators are measures of an individual or family behavior, interaction, attitude, or value that are assessed consistently or comparably across multiple points in time and cumulated over time. Examples include the percentage of time a family lived in poverty or the proportion of childhood a person lived in a single-parent family.…

  4. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and by viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) across all time points and per time point across all tests. Interquartile range, range transformation, and z-transformation preserved the correlations, as calculated by the Spearman correlation test, when normalization was performed per assay (test) across all time points. When normalizing per time point across all tests, no correlation could be extracted from the charts, as was also the case when all data were normalized as a single dataset. PMID:27428217
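
    The two normalization directions (per assay across time points versus per time point across assays) can be sketched on a small synthetic matrix as follows; the data, and the choice of z- and range transformations, are for illustration only.

```python
import numpy as np

# Sketch of the two normalization directions discussed above, applied to a
# synthetic matrix of platelet-function results (rows = time points,
# columns = assays). The values are made up for illustration.
rng = np.random.default_rng(0)
data = rng.normal(loc=[50.0, 30.0, 1500.0], scale=[10.0, 5.0, 300.0], size=(8, 3))

def z_transform(x, axis):
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

def range_transform(x, axis):
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo)

per_assay_z = z_transform(data, axis=0)        # per assay, across all time points
per_timepoint_z = z_transform(data, axis=1)    # per time point, across all assays
per_assay_range = range_transform(data, axis=0)

print("per-assay z-scores:\n", per_assay_z.round(2))
print("per-time-point z-scores:\n", per_timepoint_z.round(2))
```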

  5. Multiple contacts with diversion at the point of arrest.

    PubMed

    Riordan, Sharon; Wix, Stuart; Haque, M Sayeed; Humphreys, Martin

    2003-04-01

    A diversion at the point of arrest (DAPA) scheme was set up in five police stations in South Birmingham in 1992. In a study of all referrals made over a four-year period, a subgroup of multiple-contact individuals was identified. During that time, four hundred and ninety-two contacts were recorded in total, of which 130 were made by 58 individuals. The latter group was generally no different from the single-contact group but did tend to be younger. This research highlights the need for a re-evaluation of service provision and associated education of police officers and relevant mental health care professionals.

  6. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264 - 276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
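
    A minimal one-dimensional sketch of the Direct Sampling idea is given below: each new value is simulated by scanning the training series for a location whose preceding neighbourhood is close enough to the already-simulated neighbourhood, then copying the next value. Neighbourhood size, threshold and scan limit are illustrative; see [1] for the full method.

```python
import numpy as np

# Minimal 1-D Direct Sampling sketch: simulate values by copying from training
# locations whose preceding neighbourhood matches the simulated one closely enough.
def direct_sampling_1d(train: np.ndarray, n_sim: int, n_neigh: int = 5,
                       threshold: float = 0.2, max_scan: int = 500,
                       seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    sim = list(train[:n_neigh])                    # seed with a training snippet
    for _ in range(n_sim - n_neigh):
        target = np.array(sim[-n_neigh:])
        best_val, best_dist = None, np.inf
        for _ in range(max_scan):
            i = rng.integers(n_neigh, train.size - 1)
            dist = np.mean(np.abs(train[i - n_neigh:i] - target))
            if dist < best_dist:
                best_val, best_dist = train[i], dist
            if dist <= threshold:                  # accept the first good match
                break
        sim.append(best_val)
    return np.array(sim)

if __name__ == "__main__":
    t = np.arange(3000)
    train = np.sin(2 * np.pi * t / 50) + 0.2 * np.random.default_rng(1).standard_normal(t.size)
    sim = direct_sampling_1d(train, n_sim=500)
    print(sim[:10].round(2))
```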

  7. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring the hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel main core of our algorithm enables it to use the GPU efficiently to reduce the CPU time. A new validation method for MPS is also proposed in this paper.

  8. Comparing Estimates of Multiple and Concurrent Partnerships Across Population Based Surveys: Implications for Combination HIV Prevention

    PubMed Central

    Morris, Martina; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna

    2014-01-01

    We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs. PMID:24077973

  9. Comparing Estimates of Multiple and Concurrent Partnerships Across Population Based Surveys: Implications for Combination HIV Prevention.

    PubMed

    Morris, Martina; Vu, Lung; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna

    2014-04-01

    We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs.

  10. Time-dependent probability density functions and information geometry in stochastic logistic and Gompertz models

    NASA Astrophysics Data System (ADS)

    Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin

    2017-12-01

    A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
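
    An illustrative Euler-Maruyama sketch in the spirit of the setup above is given below: a logistic drift with both multiplicative and additive white noise, with the time-dependent PDF estimated by histogramming many realizations at a few times. The SDE form and parameter values are generic choices and not necessarily those of the paper.

```python
import numpy as np

# Euler-Maruyama simulation of a stochastic logistic model with multiplicative
# and additive white noise; the time-dependent PDF is estimated by histogram.
rng = np.random.default_rng(0)

gamma, eps = 1.0, 1.0          # growth rate and carrying-capacity scale
D_mult, D_add = 0.2, 0.01      # multiplicative and additive noise amplitudes
dt, n_steps, n_paths = 1e-3, 5000, 20_000

x = np.full(n_paths, 0.05)     # small initial population in every realization
snapshots = {}
for step in range(1, n_steps + 1):
    drift = gamma * x - eps * x**2                      # logistic drift
    noise = rng.standard_normal(n_paths)
    x = x + drift * dt + np.sqrt(dt) * (D_mult * x + D_add) * noise
    x = np.clip(x, 0.0, None)                           # keep populations non-negative
    if step in (500, 1500, 5000):
        snapshots[step * dt] = x.copy()

for t_snap, xs in snapshots.items():
    hist, edges = np.histogram(xs, bins=50, range=(0.0, 2.0), density=True)
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    print(f"t = {t_snap:4.1f}: mean = {xs.mean():.3f}, PDF mode near x = {mode:.2f}")
```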

  11. Prediction of Autism at 3 Years from Behavioural and Developmental Measures in High-Risk Infants: A Longitudinal Cross-Domain Classifier Analysis

    ERIC Educational Resources Information Center

    Bussu, G.; Jones, E. J. H.; Charman, T.; Johnson, M. H.; Buitelaar, J. K.; Baron-Cohen, S.; Bedford, R.; Bolton, P.; Blasi, A.; Chandler, S.; Cheung, C.; Davies, K.; Elsabbagh, M.; Fernandes, J.; Gammer, I.; Garwood, H.; Gliga, T.; Guiraud, J.; Hudry, K.; Liew, M.; Lloyd-Fox, S.; Maris, H.; O'Hara, L.; Pasco, G.; Pickles, A.; Ribeiro, H.; Salomone, E.; Tucker, L.; Volein, A.

    2018-01-01

    We integrated multiple behavioural and developmental measures from multiple time-points using machine learning to improve early prediction of individual Autism Spectrum Disorder (ASD) outcome. We examined Mullen Scales of Early Learning, Vineland Adaptive Behavior Scales, and early ASD symptoms between 8 and 36 months in high-risk siblings (HR; n…

  12. A time-series study of sick building syndrome: chronic, biotoxin-associated illness from exposure to water-damaged buildings.

    PubMed

    Shoemaker, Ritchie C; House, Dennis E

    2005-01-01

    The human health risk for chronic illnesses involving multiple body systems following inhalation exposure to the indoor environments of water-damaged buildings (WDBs) has remained poorly characterized and the subject of intense controversy. The current study assessed the hypothesis that exposure to the indoor environments of WDBs with visible microbial colonization was associated with illness. The study used a cross-sectional design with assessments at five time points, and the interventions of cholestyramine (CSM) therapy, exposure avoidance following therapy, and reexposure to the buildings after illness resolution. The methodological approach included oral administration of questionnaires, medical examinations, laboratory analyses, pulmonary function testing, and measurements of visual function. Of the 21 study volunteers, 19 completed assessment at each of the five time points. Data at Time Point 1 indicated multiple symptoms involving at least four organ systems in all study participants, a restrictive respiratory condition in four participants, and abnormally low visual contrast sensitivity (VCS) in 18 participants. Serum leptin levels were abnormally high and alpha melanocyte stimulating hormone (MSH) levels were abnormally low. Assessments at Time Point 2, following 2 weeks of CSM therapy, indicated a highly significant improvement in health status. Improvement was maintained at Time Point 3, which followed exposure avoidance without therapy. Reexposure to the WDBs resulted in illness reacquisition in all participants within 1 to 7 days. Following another round of CSM therapy, assessments at Time Point 5 indicated a highly significant improvement in health status. The group-mean number of symptoms decreased from 14.9+/-0.8 S.E.M. at Time Point 1 to 1.2+/-0.3 S.E.M., and the VCS deficit of approximately 50% at Time Point 1 was fully resolved. Leptin and MSH levels showed statistically significant improvement. The results indicated that CSM was an effective therapeutic agent, that VCS was a sensitive and specific indicator of neurologic function, and that illness involved systemic and hypothalamic processes. Although the results supported the general hypothesis that illness was associated with exposure to the WDBs, this conclusion was tempered by several study limitations. Exposure to specific agents was not demonstrated, study participants were not randomly selected, and double-blinding procedures were not used. Additional human and animal studies are needed to confirm this conclusion, investigate the role of complex mixtures of bacteria, fungi, mycotoxins, endotoxins, and antigens in illness causation, and characterize modes of action. Such data will improve the assessment of human health risk from chronic exposure to WDBs.

  13. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules, tailored at each point where a clinical decision is made, that are based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116

  14. Apparent diffusion coefficient histogram analysis can evaluate radiation-induced parotid damage and predict late xerostomia degree in nasopharyngeal carcinoma

    PubMed Central

    Zhou, Nan; Guo, Tingting; Zheng, Huanhuan; Pan, Xia; Chu, Chen; Dou, Xin; Li, Ming; Liu, Song; Zhu, Lijing; Liu, Baorui; Chen, Weibo; He, Jian; Yan, Jing; Zhou, Zhengyang; Yang, Xiaofeng

    2017-01-01

    We investigated apparent diffusion coefficient (ADC) histogram analysis to evaluate radiation-induced parotid damage and predict xerostomia degrees in nasopharyngeal carcinoma (NPC) patients receiving radiotherapy. The imaging of bilateral parotid glands in NPC patients was conducted 2 weeks before radiotherapy (time point 1), one month after radiotherapy (time point 2), and four months after radiotherapy (time point 3). From time point 1 to 2, parotid volume, skewness, and kurtosis decreased (P < 0.001, = 0.001, and < 0.001, respectively), but all other ADC histogram parameters increased (all P < 0.001, except P = 0.006 for standard deviation [SD]). From time point 2 to 3, parotid volume continued to decrease (P = 0.022), and SD, 75th and 90th percentiles continued to increase (P = 0.024, 0.010, and 0.006, respectively). Early change rates of parotid ADCmean, ADCmin, kurtosis, and 25th, 50th, 75th, 90th percentiles (from time point 1 to 2) correlated with late parotid atrophy rate (from time point 1 to 3) (all P < 0.05). Multiple linear regression analysis revealed correlations among parotid volume, time point, and ADC histogram parameters. Early mean change rates for bilateral parotid SD and ADCmax could predict late xerostomia degrees at seven months after radiotherapy (three months after time point 3) with AUC of 0.781 and 0.818 (P = 0.014, 0.005, respectively). ADC histogram parameters were reproducible (intraclass correlation coefficient, 0.830 - 0.999). ADC histogram analysis could be used to evaluate radiation-induced parotid damage noninvasively, and predict late xerostomia degrees of NPC patients treated with radiotherapy. PMID:29050274

  15. Apparent diffusion coefficient histogram analysis can evaluate radiation-induced parotid damage and predict late xerostomia degree in nasopharyngeal carcinoma.

    PubMed

    Zhou, Nan; Guo, Tingting; Zheng, Huanhuan; Pan, Xia; Chu, Chen; Dou, Xin; Li, Ming; Liu, Song; Zhu, Lijing; Liu, Baorui; Chen, Weibo; He, Jian; Yan, Jing; Zhou, Zhengyang; Yang, Xiaofeng

    2017-09-19

    We investigated apparent diffusion coefficient (ADC) histogram analysis to evaluate radiation-induced parotid damage and predict xerostomia degrees in nasopharyngeal carcinoma (NPC) patients receiving radiotherapy. The imaging of bilateral parotid glands in NPC patients was conducted 2 weeks before radiotherapy (time point 1), one month after radiotherapy (time point 2), and four months after radiotherapy (time point 3). From time point 1 to 2, parotid volume, skewness, and kurtosis decreased (P < 0.001, = 0.001, and < 0.001, respectively), but all other ADC histogram parameters increased (all P < 0.001, except P = 0.006 for standard deviation [SD]). From time point 2 to 3, parotid volume continued to decrease (P = 0.022), and SD, 75th and 90th percentiles continued to increase (P = 0.024, 0.010, and 0.006, respectively). Early change rates of parotid ADCmean, ADCmin, kurtosis, and 25th, 50th, 75th, 90th percentiles (from time point 1 to 2) correlated with late parotid atrophy rate (from time point 1 to 3) (all P < 0.05). Multiple linear regression analysis revealed correlations among parotid volume, time point, and ADC histogram parameters. Early mean change rates for bilateral parotid SD and ADCmax could predict late xerostomia degrees at seven months after radiotherapy (three months after time point 3) with AUC of 0.781 and 0.818 (P = 0.014, 0.005, respectively). ADC histogram parameters were reproducible (intraclass correlation coefficient, 0.830 - 0.999). ADC histogram analysis could be used to evaluate radiation-induced parotid damage noninvasively, and predict late xerostomia degrees of NPC patients treated with radiotherapy.

  16. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up tables) as well as the cycle time and total latency. The routing delay, which was not optimized, is the major component of the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  17. Comparison of breast DCE-MRI contrast time points for predicting response to neoadjuvant chemotherapy using deep convolutional neural network features with transfer learning

    NASA Astrophysics Data System (ADS)

    Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.

    2017-03-01

    DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
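
    A minimal sketch of the analysis pattern described above (not the authors' code): linear discriminant analysis classifiers are trained on pre-extracted CNN features and evaluated with leave-one-subject-out cross-validation, scored by AUC. The arrays features, labels, and subject_ids are hypothetical stand-ins for the 561 ROI feature vectors, responder labels, and subject grouping.

```python
# Sketch only: LDA on pre-extracted CNN features with leave-one-subject-out CV.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = rng.normal(size=(561, 512))       # one row per ROI (hypothetical CNN features)
labels = rng.integers(0, 2, size=561)        # 0 = non-responder, 1 = responder
subject_ids = rng.integers(0, 64, size=561)  # ROI-to-subject assignment

probs = np.zeros(len(labels))
for train, test in LeaveOneGroupOut().split(features, labels, groups=subject_ids):
    clf = LinearDiscriminantAnalysis()
    clf.fit(features[train], labels[train])
    probs[test] = clf.predict_proba(features[test])[:, 1]  # probability of response

print("AUC:", roc_auc_score(labels, probs))
```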

  18. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.

  19. Compartmentalization and Transmission of Multiple Epstein-Barr Virus Strains in Asymptomatic Carriers

    PubMed Central

    Sitki-Green, Diane; Covington, Mary; Raab-Traub, Nancy

    2003-01-01

    Infection with the Epstein-Barr virus (EBV) is often subclinical in the presence of a healthy immune response; thus, asymptomatic infection is largely uncharacterized. This study analyzed the nature of EBV infection in 20 asymptomatic immunocompetent hosts over time through the identification of EBV strain variants in the peripheral blood and oral cavity. A heteroduplex tracking assay specific for the EBV gene LMP1 precisely identified the presence of multiple EBV strains in each subject. The strains present in the peripheral blood and oral cavity were often completely discordant, indicating the existence of distinct infections, and the strains present and their relative abundance changed considerably between time points. The possible transmission of strains between the oral cavity and peripheral blood compartments could be tracked within subjects, suggesting that reactivation in the oral cavity and subsequent reinfection of B lymphocytes that reenter the periphery contribute to the maintenance of persistence. In addition, distinct virus strains persisted in the oral cavity over many time points, suggesting an important role for epithelial cells in the maintenance of persistence. Asymptomatic individuals without tonsillar tissue, which is believed to be an important source of virus for the oral cavity, also exhibited multiple strains and a cyclic pattern of transmission between compartments. This study revealed that the majority of patients with infectious mononucleosis were infected with multiple strains of EBV that were also compartmentalized, suggesting that primary infection involves the transmission of multiple strains. Both the primary and carrier states of infection with EBV are more complex than previously thought. PMID:12525618

  20. Compartmentalization and transmission of multiple epstein-barr virus strains in asymptomatic carriers.

    PubMed

    Sitki-Green, Diane; Covington, Mary; Raab-Traub, Nancy

    2003-02-01

    Infection with the Epstein-Barr virus (EBV) is often subclinical in the presence of a healthy immune response; thus, asymptomatic infection is largely uncharacterized. This study analyzed the nature of EBV infection in 20 asymptomatic immunocompetent hosts over time through the identification of EBV strain variants in the peripheral blood and oral cavity. A heteroduplex tracking assay specific for the EBV gene LMP1 precisely identified the presence of multiple EBV strains in each subject. The strains present in the peripheral blood and oral cavity were often completely discordant, indicating the existence of distinct infections, and the strains present and their relative abundance changed considerably between time points. The possible transmission of strains between the oral cavity and peripheral blood compartments could be tracked within subjects, suggesting that reactivation in the oral cavity and subsequent reinfection of B lymphocytes that reenter the periphery contribute to the maintenance of persistence. In addition, distinct virus strains persisted in the oral cavity over many time points, suggesting an important role for epithelial cells in the maintenance of persistence. Asymptomatic individuals without tonsillar tissue, which is believed to be an important source of virus for the oral cavity, also exhibited multiple strains and a cyclic pattern of transmission between compartments. This study revealed that the majority of patients with infectious mononucleosis were infected with multiple strains of EBV that were also compartmentalized, suggesting that primary infection involves the transmission of multiple strains. Both the primary and carrier states of infection with EBV are more complex than previously thought.

  1. Investigation of Multiple Frequency Ranges Using Discrete Wavelet Decomposition of Resting-State Functional Connectivity in Mild Traumatic Brain Injury Patients

    PubMed Central

    Chen, Haoxing; Roys, Steven; Zhuo, Jiachen; Varshney, Amitabh; Gullapalli, Rao P.

    2015-01-01

    The aim of this study was to investigate if discrete wavelet decomposition provides additional insight into resting-state processes through the analysis of functional connectivity within specific frequency ranges within the default mode network (DMN) that may be affected by mild traumatic brain injury (mTBI). Participants included 32 mTBI patients (15 with postconcussive syndrome [PCS+] and 17 without [PCS−]). mTBI patients received resting-state functional magnetic resonance imaging (rs-fMRI) at acute (within 10 days of injury) and chronic (6 months postinjury) time points and were compared with 31 controls (healthy control [HC]). The wavelet decomposition divides the time series into multiple frequency ranges based on four scaling factors (SF1: 0.125–0.250 Hz, SF2: 0.060–0.125 Hz, SF3: 0.030–0.060 Hz, SF4: 0.015–0.030 Hz). Within each SF, wavelet connectivity matrices for nodes of the DMN were created for each group (HC, PCS+, PCS−), and bivariate measures of strength and diversity were calculated. The results demonstrate reduced strength of connectivity in PCS+ patients compared with PCS− patients within SF1 during both the acute and chronic stages of injury, as well as recovery of connectivity within SF1 across the two time points. Furthermore, the PCS− group demonstrated greater network strength compared with controls at both time points, suggesting a potential compensatory or protective mechanism in these patients. These findings stress the importance of investigating resting-state connectivity within multiple frequency ranges; however, many of our findings are within SF1, which may overlap with frequencies associated with cardiac and respiratory activities. PMID:25808612
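
    For readers unfamiliar with this band decomposition, the sketch below (assumptions only, not the study's pipeline) applies a level-4 discrete wavelet decomposition to synthetic BOLD time series sampled at an assumed TR of 2 s, so that the detail bands roughly match SF1-SF4, and computes a per-band correlation between two hypothetical DMN nodes.

```python
# Sketch only: level-4 DWT of two synthetic node time series and a per-band correlation.
import numpy as np
import pywt

fs = 0.5                                   # assumed sampling rate (TR = 2 s)
rng = np.random.default_rng(1)
node_a = rng.normal(size=256)              # hypothetical DMN node time series
node_b = rng.normal(size=256)

def band_coeffs(signal, wavelet="db4", level=4):
    """Return detail coefficients D1..D4 (finest to coarsest), dropping the approximation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return coeffs[1:][::-1]

for k, (ca, cb) in enumerate(zip(band_coeffs(node_a), band_coeffs(node_b)), start=1):
    lo, hi = fs / 2 ** (k + 1), fs / 2 ** k        # approximate band edges for this scale
    r = np.corrcoef(ca, cb)[0, 1]                  # wavelet correlation within the band
    print(f"SF{k}: {lo:.3f}-{hi:.3f} Hz, r = {r:+.2f}")
```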

  2. Investigation of Multiple Frequency Ranges Using Discrete Wavelet Decomposition of Resting-State Functional Connectivity in Mild Traumatic Brain Injury Patients.

    PubMed

    Sours, Chandler; Chen, Haoxing; Roys, Steven; Zhuo, Jiachen; Varshney, Amitabh; Gullapalli, Rao P

    2015-09-01

    The aim of this study was to investigate if discrete wavelet decomposition provides additional insight into resting-state processes through the analysis of functional connectivity within specific frequency ranges within the default mode network (DMN) that may be affected by mild traumatic brain injury (mTBI). Participants included 32 mTBI patients (15 with postconcussive syndrome [PCS+] and 17 without [PCS-]). mTBI patients received resting-state functional magnetic resonance imaging (rs-fMRI) at acute (within 10 days of injury) and chronic (6 months postinjury) time points and were compared with 31 controls (healthy control [HC]). The wavelet decomposition divides the time series into multiple frequency ranges based on four scaling factors (SF1: 0.125-0.250 Hz, SF2: 0.060-0.125 Hz, SF3: 0.030-0.060 Hz, SF4: 0.015-0.030 Hz). Within each SF, wavelet connectivity matrices for nodes of the DMN were created for each group (HC, PCS+, PCS-), and bivariate measures of strength and diversity were calculated. The results demonstrate reduced strength of connectivity in PCS+ patients compared with PCS- patients within SF1 during both the acute and chronic stages of injury, as well as recovery of connectivity within SF1 across the two time points. Furthermore, the PCS- group demonstrated greater network strength compared with controls at both time points, suggesting a potential compensatory or protective mechanism in these patients. These findings stress the importance of investigating resting-state connectivity within multiple frequency ranges; however, many of our findings are within SF1, which may overlap with frequencies associated with cardiac and respiratory activities.

  3. How Much Is Too Little to Detect Impacts? A Case Study of a Nuclear Power Plant

    PubMed Central

    Széchy, Maria T. M.; Viana, Mariana S.; Curbelo-Fernandez, Maria P.; Lavrado, Helena P.; Junqueira, Andrea O. R.; Vilanova, Eduardo; Silva, Sérgio H. G.

    2012-01-01

    Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, and its extent, of a nuclear power plant effluent on sessile epibiota assemblages using a suitable and feasible sampling design with no 'before' data and with budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of the discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of the discharge). However, depending on the time of the year, the impact reached up to 600 m. We found a significantly lower richness of taxa at the Effluent site when compared to other sites. Furthermore, at all times, the variability of assemblages near the discharge was also smaller than at other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact, depending on the time of sampling. Our findings greatly contribute to increasing knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology in general. PMID:23110117

  4. How much is too little to detect impacts? A case study of a nuclear power plant.

    PubMed

    Mayer-Pinto, Mariana; Ignacio, Barbara L; Széchy, Maria T M; Viana, Mariana S; Curbelo-Fernandez, Maria P; Lavrado, Helena P; Junqueira, Andrea O R; Vilanova, Eduardo; Silva, Sérgio H G

    2012-01-01

    Several approaches have been proposed to assess impacts on natural assemblages. Ideally, the potentially impacted site and multiple reference sites are sampled through time, before and after the impact. Often, however, the lack of information regarding the potential overall impact, the lack of knowledge about the environment in many regions worldwide, budget constraints and the increasing dimensions of human activities compromise the reliability of the impact assessment. We evaluated the impact, if any, and its extent, of a nuclear power plant effluent on sessile epibiota assemblages using a suitable and feasible sampling design with no 'before' data and with budget and logistic constraints. Assemblages were sampled at multiple times and at increasing distances from the point of the discharge of the effluent. There was a clear and localized effect of the power plant effluent (up to 100 m from the point of the discharge). However, depending on the time of the year, the impact reached up to 600 m. We found a significantly lower richness of taxa at the Effluent site when compared to other sites. Furthermore, at all times, the variability of assemblages near the discharge was also smaller than at other sites. Although the sampling design used here (in particular the number of replicates) did not allow an unambiguous evaluation of the full extent of the impact in relation to its intensity and temporal variability, the multiple temporal and spatial scales used allowed the detection of some differences in the intensity of the impact, depending on the time of sampling. Our findings greatly contribute to increasing knowledge of the effects of multiple stressors caused by the effluent of a power plant and also have important implications for management strategies and conservation ecology in general.

  5. Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface

    NASA Astrophysics Data System (ADS)

    Brocks, S.; Bareth, G.

    2016-06-01

    Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CSMs generated from data acquired at multiple time steps enables crop surface monitoring. This makes it possible to monitor crop growth over time and to track in-field crop growth variability, which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images at field and plot scale. A summer barley field experiment located at the Campus Klein-Altendorf of the University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the growing period of 2014. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out at four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2, and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.

  6. The Multiple Sclerosis Self-Management Scale

    PubMed Central

    Ghahari, Setareh; Khoshbin, Lana S.

    2014-01-01

    Background: The Multiple Sclerosis Self-Management Scale (MSSM) is currently the only measure that was developed specifically to address self-management among individuals with multiple sclerosis (MS). While good internal consistency (α = 0.85) and construct validity have been demonstrated, other psychometric properties have not been established. This study was undertaken to evaluate the criterion validity, test-retest reliability, and face validity of the MSSM. Methods: Thirty-one individuals with MS who met the inclusion criteria were recruited to complete a series of questionnaires at two time points. At Time 1, participants completed the MSSM and two generic self-management tools—the Partners in Health (PIH-12) and the Health Education Impact Questionnaire (heiQ)—as well as a short questionnaire to capture participants' opinions about the MSSM. At Time 2, approximately 2 weeks after Time 1, participants completed the MSSM again. Results: The available MSSM factors showed moderate to high correlations with both PIH-12 and heiQ and were deemed to have satisfactory test-retest reliability. Face validity pointed to areas of the MSSM that need to be revised in future work. As indicated by the participants, some dimensions of MS self-management are missing in the MSSM and some items such as medication are redundant. Conclusions: This study provides evidence for the reliability and validity of the MSSM; however, further changes are required for both researchers and clinicians to use the tool meaningfully in practice. PMID:25061429

  7. Prevalence of mental health symptoms in Dutch military personnel returning from deployment to Afghanistan: a 2-year longitudinal analysis.

    PubMed

    Reijnen, A; Rademaker, A R; Vermetten, E; Geuze, E

    2015-02-01

    Recent studies in troops deployed to Iraq and Afghanistan have shown that combat exposure and exposure to deployment-related stressors increase the risk for the development of mental health symptoms. The aim of this study is to assess the prevalence of mental health symptoms in a cohort of Dutch military personnel prior to and at multiple time-points after deployment. Military personnel (n=994) completed various questionnaires at 5 time-points, starting prior to deployment, with the same cohort followed at 1 and 6 months and at 1 and 2 years after their return from Afghanistan. The prevalence of symptoms of fatigue, PTSD, hostility, depression and anxiety was found to significantly increase after deployment compared with pre-deployment rates. As opposed to depressive symptoms and fatigue, the prevalence of PTSD was found to decrease after the 6-month assessment. The prevalence of sleeping problems and hostility remained relatively stable. The prevalence of mental health symptoms in military personnel increases after deployment; however, symptom progression over time appears to be specific to the various mental health symptoms. Comprehensive screening and monitoring for a wide range of mental health symptoms at multiple time-points after deployment is essential for early detection and to provide opportunities for intervention. This project was funded by the Dutch Ministry of Defence. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  8. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty from FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized in order to be considered in the model, concluding that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration yielded results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
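
    A minimal numerical sketch of the two calibration strategies compared above, with illustrative values only: a weighted linear regression of peak area against concentration, with weights 1/u(A)^2 from a hypothetical area-uncertainty model, versus a double-point calibration using the baseline and one standard.

```python
# Sketch only: weighted multiple-point calibration vs. double-point calibration.
import numpy as np

conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])              # standards (assumed units)
area = np.array([20.0, 118.0, 530.0, 1040.0, 5110.0, 10150.0])   # illustrative peak areas
u_area = 0.03 * area + 5.0                                       # hypothetical uncertainty model

# np.polyfit weights multiply the (unsquared) residuals, so w = 1/u gives 1/u^2 weighting.
a_w, b_w = np.polyfit(conc, area, deg=1, w=1.0 / u_area)

# Double-point calibration: baseline plus the highest standard only.
a_2 = (area[-1] - area[0]) / (conc[-1] - conc[0])
b_2 = area[0]

sample_area = 760.0
print("weighted multiple-point estimate:", (sample_area - b_w) / a_w)
print("double-point estimate:           ", (sample_area - b_2) / a_2)
```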

  9. SAM-CE: A Three-Dimensional Monte Carlo Code for the Solution of the Forward Neutron and Forward and Adjoint Gamma Ray Transport Equations. Revision C

    DTIC Science & Technology

    1974-07-31

    Multiple scoring regions are permitted, and these may be either finite-volume regions or point detectors or both. Other scores of interest (e.g., collisions, heating, count rates) are calculated as functions of energy, time, and position.

  10. Method for measuring multiple scattering corrections between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.

    2016-04-11

    In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  11. Local and global bifurcations in an economic growth model with endogenous labour supply and multiplicative external habits

    NASA Astrophysics Data System (ADS)

    Gori, Luca; Sodini, Mauro

    2014-03-01

    This paper analyses the mathematical properties of an economic growth model with overlapping generations, endogenous labour supply, and multiplicative external habits. The dynamics of the economy is characterised by a two-dimensional map describing the time evolution of capital and labour supply. We show that if the relative importance of external habits in the utility function is sufficiently high, multiple (determinate or indeterminate) fixed points and poverty traps can exist. In addition, periodic or quasiperiodic behaviour and/or coexistence of attractors may occur.
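
    The paper's capital/labour map is not reproduced here, but the sketch below illustrates, with a purely hypothetical two-dimensional map, how multiple coexisting fixed points x* = F(x*) can be located numerically by root-finding on F(x) - x from several initial guesses.

```python
# Sketch only: locating fixed points of a hypothetical 2D map by root-finding.
import numpy as np
from scipy.optimize import fsolve

def F(state, a=4.0, b=0.6):
    """Hypothetical 2D map (capital k, labour l) -> next period's (k, l)."""
    k, l = state
    return np.array([a * k * l / (1.0 + k**2), b * l + 0.3 * np.tanh(k)])

fixed_points = set()
for guess in [(0.1, 0.1), (0.4, 0.3), (1.3, 0.6), (2.0, 1.0)]:
    sol, _, ier, _ = fsolve(lambda s: F(s) - s, guess, full_output=True)
    if ier == 1:                               # root finder converged
        fixed_points.add(tuple(np.round(sol, 6)))

print("fixed points found:", sorted(fixed_points))
```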

  12. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, they have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds, which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
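
    A heavily simplified sketch of the weight-aware hierarchical (Haar-like) merge underlying such a transform, applied to synthetic voxel colours along a one-dimensional key; Morton coding, the full 3D pairing, and the arithmetic coder are omitted, and this is not the paper's implementation.

```python
# Sketch only: weight-aware Haar-like merge of point-cloud colours along a 1D key.
import numpy as np

rng = np.random.default_rng(5)
keys = np.sort(rng.choice(64, size=20, replace=False))   # occupied voxel indices along one axis
colors = rng.uniform(0, 255, size=len(keys))             # one luma value per occupied voxel

nodes = [(int(k), 1.0, float(c)) for k, c in zip(keys, colors)]   # (key, weight, low-pass value)
highpass = []

for level in range(6):                                   # 64 leaf cells = 2**6 levels
    merged, i = [], 0
    while i < len(nodes):
        k1, w1, c1 = nodes[i]
        if i + 1 < len(nodes) and nodes[i + 1][0] >> 1 == k1 >> 1:   # siblings share a parent cell
            k2, w2, c2 = nodes[i + 1]
            s = np.sqrt(w1 + w2)
            lo = (np.sqrt(w1) * c1 + np.sqrt(w2) * c2) / s   # weight-aware average
            hi = (np.sqrt(w1) * c2 - np.sqrt(w2) * c1) / s   # weight-aware difference
            highpass.append(hi)
            merged.append((k1 >> 1, w1 + w2, lo))
            i += 2
        else:
            merged.append((k1 >> 1, w1, c1))                 # isolated node passes through unchanged
            i += 1
    nodes = merged

print("DC coefficient:", round(nodes[0][2], 2), "| high-pass coefficients:", len(highpass))
```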

  13. Association between education and future leisure-time physical inactivity: a study of Finnish twins over a 35-year follow-up.

    PubMed

    Piirtola, Maarit; Kaprio, Jaakko; Kujala, Urho M; Heikkilä, Kauko; Koskenvuo, Markku; Svedberg, Pia; Silventoinen, Karri; Ropponen, Annina

    2016-08-04

    Education is associated with health-related lifestyle choices including leisure-time physical inactivity. However, the longitudinal associations between education and inactivity merit further study. We investigated the association between education and leisure-time physical inactivity over a 35-year follow-up with four time points, controlling for multiple covariates including familial confounding. This study of the population-based Finnish Twin Cohort consisted of 5254 twin individuals born in 1945-1957 (59 % women), of whom 1604 formed complete same-sexed twin pairs. Data on leisure-time physical activity and multiple covariates were available from four surveys conducted in 1975, 1981, 1990 and 2011 (response rates 72 to 89 %). The association between years of education and leisure-time physical inactivity (<1.5 metabolic equivalent hours/day) was first analysed for each survey. Then, the role of education was investigated for 15-year and 35-year inactivity periods in the longitudinal analyses. The co-twin control design was used to analyse the potential familial confounding of the effects. All analyses were conducted with and without multiple covariates. Odds Ratios (OR) with 95 % Confidence Intervals (CI) were calculated using logistic and conditional (fixed-effects) regression models. Each additional year of education was associated with less inactivity (OR 0.94 to 0.95, 95 % CI 0.92, 0.99) in the cross-sectional age- and sex-adjusted analyses. The associations of education with inactivity in the 15- and 35-year follow-ups showed a similar trend: OR 0.97 (95 % CI 0.93, 1.00) and OR 0.94 (95 % CI 0.91, 0.98), respectively. In all co-twin control analyses, each additional year of education was associated with a reduced likelihood of inactivity, suggesting a direct effect of education on inactivity (i.e., independent of familial confounding). However, the point estimates were lower than in the individual-level analyses. Adjustment for multiple covariates did not change these associations. Higher education is associated with lower odds of leisure-time physical inactivity during the three-decade follow-up. The association was found after adjusting for several confounders, including familial factors. Hence, the results point to the conclusion that education has an independent role in the development of long-term physical inactivity, and that tailored efforts to promote physical activity among less educated people would be needed throughout adulthood.
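
    The individual-level analysis described above can be illustrated with a short sketch (synthetic data, hypothetical variable names): a logistic regression of inactivity on years of education, adjusted for age and sex, reporting the odds ratio per additional year of education with its 95% CI. The co-twin (conditional fixed-effects) analysis is not reproduced here.

```python
# Sketch only: odds ratio per year of education from a logistic regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
education = rng.integers(6, 21, size=n)                 # years of education
age = rng.uniform(18, 65, size=n)
sex = rng.integers(0, 2, size=n)
logit = -0.5 - 0.06 * (education - 12) + 0.01 * (age - 40)
inactive = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(pd.DataFrame({"education": education, "age": age, "sex": sex}))
fit = sm.Logit(inactive, X).fit(disp=0)

or_edu = np.exp(fit.params["education"])
ci_lo, ci_hi = np.exp(fit.conf_int().loc["education"])
print(f"OR per year of education: {or_edu:.2f} (95% CI {ci_lo:.2f}, {ci_hi:.2f})")
```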

  14. Influence of phase inversion on the formation and stability of one-step multiple emulsions.

    PubMed

    Morais, Jacqueline M; Rocha-Filho, Pedro A; Burgess, Diane J

    2009-07-21

    A novel method of preparation of water-in-oil-in-micelle-containing water (W/O/W(m)) multiple emulsions using the one-step emulsification method is reported. These multiple emulsions were normal (not temporary) and stable over a 60-day test period. Previously reported multiple emulsions made by the one-step method were abnormal systems that formed at the inversion point of simple emulsions (where there is an incompatibility between the Ostwald and Bancroft theories; typically these are O/W/O systems). Pseudoternary phase diagrams and bidimensional process-composition (phase inversion) maps were constructed to assist in process and composition optimization. The surfactants used were PEG40 hydrogenated castor oil and sorbitan oleate, and mineral and vegetable oils were investigated. Physicochemical characterization studies showed experimentally, for the first time, the significance of the ultralow surface tension point for multiple emulsion formation by the one-step method via phase inversion processes. Although the significance of ultralow surface tension has been speculated previously, to the best of our knowledge, this is the first experimental confirmation. The multiple emulsion system reported here was dependent not only upon the emulsification temperature, but also upon the component ratios; therefore, both the emulsion phase inversion and the phase inversion temperature were considered to fully explain their formation. Accordingly, it is hypothesized that the formation of these normal multiple emulsions is not a result of a temporary incompatibility (at the inversion point) during simple emulsion preparation, as previously reported. Rather, these normal W/O/W(m) emulsions are a result of the simultaneous occurrence of catastrophic and transitional phase inversion processes. The formation of the primary emulsions (W/O) is in accordance with the Ostwald theory, and the formation of the multiple emulsions (W/O/W(m)) is in agreement with the Bancroft theory.

  15. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
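
    A minimal sketch of D-optimal time-point selection (not the paper's implementation): the sensitivity (Jacobian) of an assumed bi-exponential decay model with respect to the quenched fraction and the two lifetimes is evaluated on a dense 90-point grid, and time points are added greedily to maximize det(J'J) over the selected rows. The model form, parameter values, and the greedy strategy are illustrative assumptions.

```python
# Sketch only: greedy D-optimal subset selection of decay time points.
import numpy as np

tau_q, tau_u = 0.4, 2.5                     # quenched / unquenched lifetimes (ns), assumed
frac_q = 0.3                                # quenched fractional population, assumed
t = np.linspace(0.1, 10.0, 90)              # dense 90-point temporal grid

def jacobian(t, frac_q, tau_q, tau_u):
    """Rows: time points; columns: d/d(frac_q), d/d(tau_q), d/d(tau_u) of the decay model."""
    eq, eu = np.exp(-t / tau_q), np.exp(-t / tau_u)
    return np.column_stack([
        eq - eu,                            # model: frac_q*eq + (1 - frac_q)*eu
        frac_q * eq * t / tau_q**2,
        (1 - frac_q) * eu * t / tau_u**2,
    ])

J = jacobian(t, frac_q, tau_q, tau_u)

selected = []
for _ in range(10):                         # build a reduced set of 10 time points
    best_i, best_logdet = None, -np.inf
    for i in range(len(t)):
        if i in selected:
            continue
        Js = J[selected + [i]]
        # A small ridge keeps the criterion defined while rows < parameters.
        logdet = np.linalg.slogdet(Js.T @ Js + 1e-9 * np.eye(3))[1]
        if logdet > best_logdet:
            best_i, best_logdet = i, logdet
    selected.append(best_i)

print("D-optimal time points (ns):", np.round(np.sort(t[selected]), 2))
```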

  16. Potential damage to DC superconducting magnets due to the high frequency electromagnetic waves

    NASA Technical Reports Server (NTRS)

    Gabriel, G. J.

    1977-01-01

    Experimental data are presented in support of the hypothesis that a dc superconducting magnet coil does not behave strictly as an inductor, but as a complicated electrodynamic device capable of supporting electromagnetic waves. Travel times of nanosecond pulses and evidence of sinusoidal standing waves were observed on a prototype four-layer solenoidal coil at room temperature. Ringing observed during switching transients appears as a sequence of multiple reflected square pulses whose durations are related to the layer lengths. With sinusoidal excitation of the coil, the voltage amplitude between a pair of points on the coil exhibits maxima at those frequencies such that the distance between these points is an odd multiple of half wavelength in free space. Evidence indicates that any disturbance, such as that resulting from switching or sudden fault, initiates multiple reflections between layers, thus raising the possibility for sufficiently high voltages to cause breakdown.

  17. Sensory and Instrumental Flavor Changes in Green Tea Brewed Multiple Times

    PubMed Central

    Lee, Jeehyun; Chambers, Delores; Chambers, Edgar

    2013-01-01

    Green teas in leaf form are brewed multiple times, a common selling point. However, the flavor changes, both sensory and volatile compounds, of green teas that have been brewed multiple times are unknown. The objectives of this study were to determine how the aroma and flavor of green teas change as they are brewed multiple times, to determine if a relationship exists between green tea flavors and green tea volatile compounds, and to suggest the number of times that green tea leaves can be brewed. The first and second brews of the green tea samples provided similar flavor intensities. The third and fourth brews provided milder flavors and lower bitterness and astringency when measured using descriptive sensory analysis. In the brewed green tea liquor, the volatile compounds linalool, nonanal, geraniol, jasmone, and β-ionone were mostly present at low levels (determined using gas chromatography-mass spectrometry). The geraniol, linalool, and linalool oxide compounds in green tea may contribute to the floral/perfumy flavor. Green teas in leaf form may be brewed up to four times: the first two brews provide stronger flavor, bitterness, and astringency, whereas the third and fourth brews provide milder flavor, bitterness, and astringency. PMID:28239138

  18. Programmable release of multiple protein drugs from aptamer-functionalized hydrogels via nucleic acid hybridization.

    PubMed

    Battig, Mark R; Soontornworajit, Boonchoy; Wang, Yong

    2012-08-01

    Polymeric delivery systems have been extensively studied to achieve localized and controlled release of protein drugs. However, it is still challenging to control the release of multiple protein drugs in distinct stages according to the progress of disease or treatment. This study successfully demonstrates that multiple protein drugs can be released from aptamer-functionalized hydrogels with adjustable release rates at predetermined time points using complementary sequences (CSs) as biomolecular triggers. Because both aptamer-protein interactions and aptamer-CS hybridization are sequence-specific, aptamer-functionalized hydrogels constitute a promising polymeric delivery system for the programmable release of multiple protein drugs to treat complex human diseases.

  19. Time dependent reduction in platelet aggregation using the multiplate analyser and hirudin blood due to platelet clumping.

    PubMed

    Chapman, Kent; Favaloro, Emmanuel J

    2018-05-01

    The Multiplate is a popular instrument that measures platelet function using whole blood. Potentially considered a point-of-care instrument, it is also used by hemostasis laboratories. The instrument is usually utilized to assess antiplatelet medication or as a screen of platelet function. According to the manufacturer, testing should be performed within 0.5-3 hours of blood collection, and preferably using manufacturer-provided hirudin tubes. We report a time-associated reduction in platelet aggregation using the Multiplate and hirudin blood collection tubes, for all of the major agonists employed. Blood for Multiplate analysis was collected into manufacturer-supplied hirudin tubes, and 21 consecutive samples were assessed using manufacturer-supplied agonists (ADP, arachidonic acid, TRAP, collagen and ristocetin), at several time-points post-sample collection within the recommended test time period. Blood was also collected into EDTA as a reference method for platelet counts, with samples collected into sodium citrate and hirudin used for comparative counts. All platelet agonists showed a diminution of response with time. Depending on the agonist, the reduction caused 5-20% and 22-47% of responses initially in the normal reference range to fall below the reference range at 120 min and 180 min, respectively. Considering any agonist, 35% and 67% of initially "normal" responses became "abnormal" at 120 min and 180 min, respectively. Platelet counts showed generally minimal changes in EDTA blood, but were markedly reduced over time in both citrate and hirudin blood, with up to 40% and 60% reduction, respectively, at 240 min. The presence of platelet clumping (micro-aggregate formation) was also observed in a time-dependent manner, especially for hirudin. In conclusion, considering any platelet agonist, around two-thirds of samples can, within the recommended 0.5-3 hour testing window post-blood collection, yield a reduction in platelet aggregation that may lead to a change in interpretation (i.e., normal to reduced). Thus, the stability of Multiplate testing can more realistically be considered as being within 30-120 min of blood collection for samples collected into hirudin.

  20. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
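
    A floating-point sketch of the polyphase-FFT idea (illustrative parameters, not the fixed-point JPL hardware): a prototype low-pass filter is split into M polyphase branches, each block of M input samples is filtered through the branches, and a length-M transform across the branches produces the channel outputs; an inverse DFT is used here so that channel k sits at center frequency k/M of the sample rate.

```python
# Sketch only: a critically sampled polyphase channelizer in floating point.
import numpy as np

M, K = 32, 8                                 # channels, taps per polyphase branch
proto = np.sinc((np.arange(M * K) - M * K / 2) / M) * np.hamming(M * K)
branches = proto.reshape(K, M)               # polyphase decomposition of the prototype filter

def polyphase_channelizer(x):
    """Yield one M-channel output per block of M input samples."""
    history = np.zeros((K, M), dtype=complex)
    for start in range(0, len(x) - M + 1, M):
        history = np.roll(history, 1, axis=0)
        history[0] = x[start:start + M][::-1]          # newest block, reversed for convolution
        filtered = np.sum(history * branches, axis=0)  # one partial convolution per branch
        yield np.fft.ifft(filtered) * M                # channel k centred at k/M of fs

# A complex tone at 5/M of the sample rate should land in channel 5.
n = np.arange(M * K * 16)
tone = np.exp(2j * np.pi * (5 / M) * n)
spectra = np.array(list(polyphase_channelizer(tone)))
print("strongest channel:", int(np.argmax(np.mean(np.abs(spectra), axis=0))))
```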

  1. Determination of the Persistence of Non-Spore-Forming ...

    EPA Pesticide Factsheets

    This report presents the results of an investigation to evaluate the persistence (or natural attenuation) of Yersinia pestis (Y. pestis), Francisella tularensis (F. tularensis), and Burkholderia mallei (B. mallei) on glass and soil under multiple environmental conditions and time points.

  2. Toxicogenomic Effects Common to Triazole Antifungals and Conserved Between Rats and Humans

    EPA Science Inventory

    The triazole antifungals myclobutanil, propiconazole and triadimefon cause varying degrees of hepatic toxicity and disrupt steroid hormone homeostasis in rodent in vivo models. To identify biological pathways consistently modulated across multiple time-points and various study d...

  3. Impact of Breakfasts (with or without Eggs) on Body Weight Regulation and Blood Lipids in University Students over a 14-Week Semester

    PubMed Central

    Rueda, Janice M.; Khosla, Pramod

    2013-01-01

    The effects of breakfast type on body weight and blood lipids were evaluated in university freshmen. Seventy-three subjects were instructed to consume a breakfast with eggs (Egg Breakfast, EB, n = 39) or without (Non-Egg Breakfast, NEB, n = 34), five times/week for 14 weeks. Breakfast composition, anthropometric measurements and blood lipids were measured at multiple times. During the study, mean weight change was 1.6 ± 5.3 lbs (0.73 ± 2.41 kg), but there was no difference between groups. Both groups consumed similar calories for breakfast at all time-points. The EB group consumed significantly more calories at breakfast from protein, total fat and saturated fat, but significantly fewer calories from carbohydrate at every time-point. Cholesterol consumption at breakfast in the EB group was significantly higher than the NEB group at all time points. Breakfast food choices (other than eggs) were similar between groups. Blood lipids were similar between groups at all time points, indicating that the additional 400 mg/day of dietary cholesterol did not negatively impact blood lipids. PMID:24352089

  4. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
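
    A toy sketch of the "best worst" idea (hypothetical merit function, not real lens data): the design variables are adjusted to minimize the image-quality error of whichever field point is currently worst, which naturally stops improving one field point once another becomes the binding constraint. Nelder-Mead is used because the max() merit is non-smooth.

```python
# Sketch only: minimax ("best worst") optimization over field points of a toy merit function.
import numpy as np
from scipy.optimize import minimize

fields = np.array([0.0, 0.35, 0.7, 1.0])            # normalized field points

def spot_error(v, field):
    """Hypothetical image-quality error for one field point (smaller is better)."""
    defocus, tilt = v
    return (defocus - 0.3 * field**2) ** 2 + (tilt - 0.1 * field) ** 2 + 0.02 * field

def worst_field_merit(v):
    return max(spot_error(v, f) for f in fields)    # only the worst field point matters

res = minimize(worst_field_merit, x0=[0.0, 0.0], method="Nelder-Mead")
print("design variables:", np.round(res.x, 4))
print("per-field errors:", np.round([spot_error(res.x, f) for f in fields], 4))
```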

  5. Development and Appraisal of Multiple Accounting Record System (Mars).

    PubMed

    Yu, H C; Chen, M C

    2016-01-01

    The aim of the system is to simplify workflow, reduce recording time, and increase income for the study hospital. The project team decided to develop a multiple accounting record system that automatically generates the accounting records from the nursing records, reducing the time and effort required for nurses to review the procedure and to provide a separate note of material consumption. Three configuration files were identified to describe the relationship between treatments and reimbursement items. The workflow was simplified, nurses reduced their daily recording time by an average of 10 minutes, and reimbursement points increased by 7.49%. The project streamlined the workflow and provides the institution with a better approach to financial management.

  6. FAST TRACK PAPER: A construct of internal multiples from surface data only: the concept of virtual seismic events

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.

    2006-02-01

    We here describe one way of constructing internal multiples from surface seismic data only. The key feature of our construct of internal multiples is the introduction of the concept of virtual seismic events. Virtual events here are events that are not directly recorded in standard seismic data acquisition, but whose existence allows us to construct internal multiples with scattering points at the sea surface; the standard construct of internal multiples does not include any scattering points at the sea surface. The mathematical and computational operations invoked in our construction of virtual events and internal multiples are similar to those encountered in the construction of free-surface multiples based on the Kirchhoff or Born scattering theory. For instance, our construct operates on one temporal frequency at a time, just like free-surface demultiple algorithms; other internal multiple constructs tend to require all frequencies for the computation of an internal multiple at a given frequency. It requires neither knowledge of the subsurface nor explicit knowledge of the specific interfaces responsible for the generation of internal multiples in seismic data. However, our construct requires that the data be divided into two, three or four windows to avoid generating primaries. This segmentation of the data also allows us to select a range of periods of internal multiples that one wishes to construct because, in the context of the attenuation of internal multiples, it is important to avoid generating short-period internal multiples that may constructively average to form primaries at the seismic scale.

  7. Curvelet-domain multiple matching method combined with cubic B-spline function

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large number of surface-related multiples present in marine data can seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet-domain multiple matching method. First, a small number of unknowns is selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built from the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize a fast sparse-constrained solution of the multiple matching problem. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field-derived data both validate the practicability and validity of the method.
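
    A minimal sketch (not the authors' code) of two numerical ingredients named above: reconstructing a dense matching-coefficient array from a small number of basis points with a cubic B-spline, and the soft-threshold operator used to promote sparsity under an L1-type constraint. Basis-point values are illustrative; in practice they would be estimated by minimizing a data misfit, e.g. with a BFGS-type solver.

```python
# Sketch only: cubic B-spline reconstruction of matching coefficients plus a soft-threshold operator.
import numpy as np
from scipy.interpolate import make_interp_spline

n_traces = 200
basis_x = np.linspace(0, n_traces - 1, 8)               # a few unknowns at basis points
basis_vals = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.05, 0.95, 1.0])

spline = make_interp_spline(basis_x, basis_vals, k=3)   # cubic B-spline through the basis points
matching = spline(np.arange(n_traces))                  # dense matching-coefficient array

def soft_threshold(x, lam):
    """Soft-threshold operator commonly paired with L1-constrained solvers."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

residual = np.random.default_rng(3).normal(scale=0.1, size=n_traces)
print("matching range:", matching.min().round(3), "to", matching.max().round(3))
print("nonzeros after soft threshold:", np.count_nonzero(soft_threshold(residual, 0.15)))
```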

  8. Frequency comb-based multiple-access ultrastable frequency dissemination with 7 × 10⁻¹⁷ instability.

    PubMed

    Zhang, Shuangyou; Zhao, Jianye

    2015-01-01

    In this letter, we demonstrate frequency-comb-based multiple-access ultrastable frequency dissemination over a 10-km single-mode fiber link. First, we synchronize optical pulse trains from an Er-fiber frequency comb to the remote site by using a simple and robust phase-conjugate stabilization method. The fractional frequency-transfer instability at the remote site is 2.6×10⁻¹⁴ and 4.9×10⁻¹⁷ for averaging times of 1 and 10,000 s, respectively. Then, we reproduce the harmonic of the repetition rate from the disseminated optical pulse trains at an arbitrary point along the fiber link to test comb-based multiple-access performance, and demonstrate frequency instability of 4×10⁻¹⁴ and 7×10⁻¹⁷ at 1 and 10,000 s averaging time, respectively. The proposed comb-based multiple-access frequency dissemination can easily achieve highly stable wideband microwave extraction along the whole link.

  9. Effect of multiple circular holes Fraunhofer diffraction for the infrared optical imaging

    NASA Astrophysics Data System (ADS)

    Lu, Chunlian; Lv, He; Cao, Yang; Cai, Zhisong; Tan, Xiaojun

    2014-11-01

    With the development of infrared optics, infrared optical imaging systems play an increasingly important role in modern optical imaging. Infrared optical imaging is used in industry, agriculture, medicine, the military, and transportation. However, for infrared optical imaging systems that are exposed for a long time, contamination will affect the imaging. When contaminants settle on the lens surface of the optical system, they affect diffraction: the contaminated lens can be treated as the complement of a screen with multiple circular holes undergoing Fraunhofer diffraction, and according to Babinet's principle the diffraction of the imaging system can then be obtained. Therefore, by studying multiple-circular-hole Fraunhofer diffraction, conclusions can be drawn about its effect on infrared imaging. This paper mainly studies the effect of multiple-circular-hole Fraunhofer diffraction on optical imaging. Firstly, we introduce the theory of Fraunhofer diffraction and the point spread function (PSF). The PSF is a basic tool to evaluate the image quality of an optical system, and Fraunhofer diffraction affects the PSF. Then, the results of multiple-circular-hole Fraunhofer diffraction are given for different hole sizes and hole spacings. We choose hole sizes from 0.1 mm to 1 mm and hole spacings from 0.3 mm to 0.8 mm. The infrared wavebands of optical imaging are chosen from 1 μm to 5 μm. We use MATLAB to simulate the light intensity distribution of multiple-circular-hole Fraunhofer diffraction. Finally, three-dimensional diffraction maps of light intensity are given for comparison.
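
    A minimal numerical sketch of the same kind of calculation in NumPy rather than the authors' MATLAB (illustrative hole size, spacing, and grid): the far-field Fraunhofer intensity of a screen with several circular holes is obtained as the squared magnitude of the 2D FFT of the aperture function.

```python
# Sketch only: Fraunhofer diffraction pattern of a multi-hole aperture via a 2D FFT.
import numpy as np

N, pitch = 1024, 2e-6                        # grid points per side, sample spacing (m)
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

hole_radius = 0.25e-3                        # 0.5 mm diameter holes (assumed)
centers = [(-0.6e-3, 0.0), (0.0, 0.0), (0.6e-3, 0.0)]   # 0.6 mm hole spacing (assumed)

aperture = np.zeros((N, N))
for cx, cy in centers:
    aperture[(X - cx) ** 2 + (Y - cy) ** 2 <= hole_radius ** 2] = 1.0

far_field = np.fft.fftshift(np.fft.fft2(aperture))   # spatial frequency maps to diffraction angle
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

print("normalized on-axis intensity:", intensity[N // 2, N // 2])
```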

  10. Simulation and analysis of chemical release in the ionosphere

    NASA Astrophysics Data System (ADS)

    Gao, Jing-Fan; Guo, Li-Xin; Xu, Zheng-Wen; Zhao, Hai-Sheng; Feng, Jie

    2018-05-01

    Inhomogeneous ionospheric plasma produced by a single-point chemical release has a simple space-time structure and cannot affect radio waves at frequencies higher than the Very High Frequency (VHF) band. In order to produce a more complicated ionospheric plasma perturbation structure and trigger instability phenomena, a multiple-point chemical release scheme is presented in this paper. The effects of the chemical release on low-latitude ionospheric plasma are estimated with linear instability growth rate theory, in which a high growth rate corresponds to strong irregularities, a high probability of ionospheric scintillation occurrence, and high scintillation intensity over the scintillation duration. The amplitude and phase scintillations at 150 MHz, 400 MHz, and 1000 MHz are calculated based on multiple phase screen (MPS) theory as the waves propagate through the disturbed area.

  11. Time-Series Analysis: Assessing the Effects of Multiple Educational Interventions in a Small-Enrollment Course

    NASA Astrophysics Data System (ADS)

    Warren, Aaron R.

    2009-11-01

    Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.
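
    As a rough illustration of the intervention-analysis step (not the authors' analysis), the sketch below fits an ARIMA model with a step dummy to a synthetic class-averaged confidence-point series using statsmodels; the series, the intervention week, and the ARIMA order are all invented.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic class-averaged rating series with a pretend intervention at week 12.
        np.random.seed(0)
        weeks = 20
        ratings = 40 + np.cumsum(np.random.normal(0, 2, weeks))
        ratings[12:] += 15                                # assumed intervention effect

        step = (np.arange(weeks) >= 12).astype(float)     # step dummy for the intervention

        model = ARIMA(ratings, exog=step, order=(1, 0, 0))
        fit = model.fit()
        print(fit.summary())    # the 'x1' coefficient estimates the level shift at week 12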

  12. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, as the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual-spin satellite, is designed in terms of the slow equation.

  13. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three-dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64³ particles per processing node (approximately 134 million particles on 512 nodes) which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--to other parallel machines once the machine-dependent parameters are known.
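
    The kind of timing expression described, combining code parameters with machine-dependent parameters, can be sketched as a simple cost model. The constants and the functional form below are illustrative assumptions, not the paper's fitted expressions.

        # Rough cost model: push time per step from code parameters (particles,
        # grid points) and machine parameters (FLOP rate, bandwidths).
        def push_time_per_step(n_particles, n_grid, flop_rate,
                               particle_bw, grid_bw,
                               flops_per_particle=300, bytes_per_moved_particle=72,
                               moved_fraction=0.05, guard_cell_bytes=8):
            compute = n_particles * flops_per_particle / flop_rate
            particle_comm = (moved_fraction * n_particles *
                             bytes_per_moved_particle / particle_bw)
            grid_comm = n_grid * guard_cell_bytes / grid_bw
            return compute + particle_comm + grid_comm

        # Example: 64**3 particles per node, 10 GFLOP/s, 100 MB/s links (assumed values)
        t = push_time_per_step(64**3, 32**3, 1e10, 1e8, 1e8)
        print(f"estimated push time per step: {t*1e3:.2f} ms")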

  14. Early mortality in multiple myeloma: the time-dependent impact of comorbidity: A population-based study in 621 real-life patients.

    PubMed

    Ríos-Tamayo, Rafael; Sáinz, Juan; Martínez-López, Joaquín; Puerta, José Manuel; Chang, Daysi-Yoe-Ling; Rodríguez, Teresa; Garrido, Pilar; de Veas, José Luís García; Romero, Antonio; Moratalla, Lucía; López-Fernández, Elisa; González, Pedro Antonio; Sánchez, María José; Jiménez-Moleón, José Juan; Jurado, Manuel; Lahuerta, Juan José

    2016-07-01

    Multiple myeloma is a heterogeneous disease with variable survival; this variability cannot be fully explained by the current systems of risk stratification. Early mortality remains a serious obstacle to further improve the trend toward increased survival demonstrated in recent years. However, the definition of early mortality is not standardized yet. Importantly, no study has focused on the impact of comorbidity on early mortality in multiple myeloma to date. Therefore, we analyzed the role of baseline comorbidity in a large population-based cohort of 621 real-life myeloma patients over a 31-year period. To evaluate early mortality, a sequential multivariate regression model at 2, 6, and 12 months from diagnosis was performed. It was demonstrated that comorbidity had an independent impact on early mortality, which is differential and time-dependent. Besides renal failure, respiratory disease at 2 months, liver disease at 6 months, and hepatitis virus C infection at 12 months, were, respectively, associated with early mortality, adjusting for other well-established prognostic factors. On the other hand, the long-term monitoring in our study points out a modest downward trend in early mortality over time. This is the first single institution population-based study aiming to assess the impact of comorbidity on early mortality in multiple myeloma. It is suggested that early mortality should be analyzed at three key time points (2, 6, and 12 months), in order to allow comparisons between studies. Comorbidity plays a critical role in the outcome of myeloma patients in terms of early mortality. Am. J. Hematol. 91:700-704, 2016. © 2016 Wiley Periodicals, Inc.

  15. Virtual viewpoint generation for three-dimensional display based on the compressive light field

    NASA Astrophysics Data System (ADS)

    Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) displays: it renders a new scene perspective from existing viewpoints, so that the 3D scene information can be recovered at different viewing angles and users can switch between views. However, in multi-viewpoint matching, when N free viewpoints are received they must be matched pairwise, i.e., C(N,2) = N(N-1)/2 times, and errors can also arise when matching across different baselines. To address the high complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper that uses the actual light field information rather than geometric information. To keep the data physically meaningful, nonnegative tensor factorization (NTF) is used. A tensor representation is introduced for virtual multilayer displays: the light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. This representation allows the light field to be optimally decomposed into time-multiplexed, light-attenuating layers using NTF. Finally, the compressive light field synthesized from the multilayer display information is used to obtain virtual viewpoints by repeated multiplication. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB), but also overcomes the deficiencies of traditional matching, so that any viewpoint can be obtained from the N free viewpoints.
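
    The factorization at the heart of the method can be illustrated with the classic multiplicative-update rules for a nonnegative factorization; the paper applies the same nonnegativity-constrained idea as an NTF of the light-field tensor, whereas the sketch below works on a matrix for brevity, with invented shapes and rank.

        import numpy as np

        # Lee-Seung multiplicative updates for a nonnegative factorization V ~ W @ H.
        def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, rank)) + eps
            H = rng.random((rank, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)      # update layer patterns
                W *= (V @ H.T) / (W @ H @ H.T + eps)      # update weights
            return W, H

        # Toy "light field" matrix: views x pixels, nonnegative by construction.
        V = np.abs(np.random.default_rng(1).normal(size=(25, 400)))
        W, H = nmf(V, rank=5)
        mse = np.mean((V - W @ H) ** 2)
        psnr = 10 * np.log10(V.max() ** 2 / mse)          # same quality metric as in the abstract
        print(f"PSNR of rank-5 reconstruction: {psnr:.1f} dB")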

  16. Setting Environmental Standards

    ERIC Educational Resources Information Center

    Fishbein, Gershon

    1975-01-01

    Recent court decisions have pointed out the complexities involved in setting environmental standards. Environmental health problems involve multiple causative agents, most of which act over long periods of time. This makes the cause-and-effect relationship between health statistics and environmental contaminant exposures difficult to prove in…

  17. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

    Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can make predictions whenever newly observed data are acquired at a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two data sources provide upper and lower bounds on predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Parsimonious model for blood glucose level monitoring in type 2 diabetes patients.

    PubMed

    Zhao, Fang; Ma, Yan Fen; Wen, Jing Xiao; DU, Yan Fang; Li, Chun Lin; Li, Guang Wei

    2014-07-01

    To establish the parsimonious model for blood glucose monitoring in patients with type 2 diabetes receiving oral hypoglycemic agent treatment. One hundred and fifty-nine adult Chinese type 2 diabetes patients were randomized to receive rapid-acting or sustained-release gliclazide therapy for 12 weeks. Their blood glucose levels were measured at 10 time points in a 24 h period before and after treatment, and the 24 h mean blood glucose levels were measured. Contribution of blood glucose levels to the mean blood glucose level and HbA1c was assessed by multiple regression analysis. The correlation coefficients of blood glucose level measured at 10 time points to the daily MBG were 0.58-0.74 and 0.59-0.79, respectively, before and after treatment (P<0.0001). The multiple stepwise regression analysis showed that the blood glucose levels measured at 6 of the 10 time points could explain 95% and 97% of the changes in MBG before and after treatment. The three blood glucose levels, which were measured at fasting, 2 h after breakfast and before dinner, of the 10 time points could explain 84% and 86% of the changes in MBG before and after treatment, but could only explain 36% and 26% of the changes in HbA1c before and after treatment, and they had a poorer correlation with the HbA1c than with the 24 h MBG. The blood glucose levels measured at fasting, 2 h after breakfast and before dinner truly reflected the changes in the 24 h blood glucose level, suggesting that they are appropriate for the self-monitoring of blood glucose levels in diabetes patients receiving oral anti-diabetes therapy. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
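
    The subset-selection idea, asking how much of the variation in the 24-h mean blood glucose a few time points can explain, can be sketched as follows; the simulated profiles and the exhaustive search over subsets are illustrative stand-ins for the paper's stepwise regression on patient data.

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LinearRegression

        # Simulated correlated 10-point glucose profiles and their 24-h mean (MBG).
        rng = np.random.default_rng(0)
        n_patients, n_points = 159, 10
        base = rng.normal(8.0, 2.0, n_patients)                        # patient-level level
        profiles = base[:, None] + rng.normal(0.0, 1.0, (n_patients, n_points))
        mbg = profiles.mean(axis=1)

        def best_r2(k):
            """Best R^2 over all k-point subsets of the 10 time points."""
            best = 0.0
            for subset in combinations(range(n_points), k):
                cols = list(subset)
                model = LinearRegression().fit(profiles[:, cols], mbg)
                best = max(best, model.score(profiles[:, cols], mbg))
            return best

        for k in (3, 6):
            print(f"best R^2 with {k} time points: {best_r2(k):.2f}")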

  19. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and the enhanced precision and generalizability of the RLG bias compensation model. PMID:26633401
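
    A hedged sketch of the compensation step is given below: an SVM regression maps multiple-point temperature gradients to gyro bias, with a plain grid search standing in for the particle swarm optimization used in the paper; all data and parameter grids are invented.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic temperature gradients (deg C between block points) and gyro bias (deg/h).
        rng = np.random.default_rng(0)
        n_samples, n_gradients = 2000, 6
        grads = rng.normal(0, 0.5, size=(n_samples, n_gradients))
        bias = (0.02 * grads[:, 0] - 0.015 * grads[:, 3]
                + 0.005 * grads[:, 1] * grads[:, 4]
                + rng.normal(0, 0.002, n_samples))

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
        search = GridSearchCV(model,
                              {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0]},
                              cv=5)
        search.fit(grads, bias)
        print("best parameters:", search.best_params_)
        print("cross-validated R^2:", round(search.best_score_, 3))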

  20. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.

    PubMed

    Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
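
    The core of such a model is a generalized-least-squares fit with a block covariance that encodes within-study correlation between time points plus between-study heterogeneity. The sketch below is a minimal version of that idea with an assumed AR(1)-like correlation and invented effect sizes; it is not the authors' model, which compares several covariance structures.

        import numpy as np
        from scipy.linalg import block_diag

        time_points = 4                              # e.g. 6, 12, 18, 24 months
        studies = 6
        rng = np.random.default_rng(0)
        y = rng.normal(loc=[0.3, 0.4, 0.45, 0.5], scale=0.1, size=(studies, time_points))
        se = np.full((studies, time_points), 0.12)   # standard errors of each effect size

        rho, tau2 = 0.6, 0.02                        # assumed serial correlation and heterogeneity
        lags = np.abs(np.subtract.outer(np.arange(time_points), np.arange(time_points)))
        R = rho ** lags                              # AR(1)-like within-study correlation

        blocks = [np.outer(se[i], se[i]) * R + tau2 * np.eye(time_points) for i in range(studies)]
        V = block_diag(*blocks)                      # covariance of the stacked effect sizes
        X = np.tile(np.eye(time_points), (studies, 1))   # one pooled mean per time point
        y_vec = y.reshape(-1)

        Vinv = np.linalg.inv(V)
        beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y_vec)   # pooled effects by time point
        cov_beta = np.linalg.inv(X.T @ Vinv @ X)
        print("pooled effect sizes:", beta.round(3))
        print("standard errors:", np.sqrt(np.diag(cov_beta)).round(3))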

  1. Feature-based attention to unconscious shapes and colors.

    PubMed

    Schmidt, Filipp; Schmidt, Thomas

    2010-08-01

    Two experiments employed feature-based attention to modulate the impact of completely masked primes on subsequent pointing responses. Participants processed a color cue to select a pair of possible pointing targets out of multiple targets on the basis of their color, and then pointed to the one of those two targets with a prespecified shape. All target pairs were preceded by prime pairs triggering either the correct or the opposite response. The time interval between cue and primes was varied to modulate the time course of feature-based attentional selection. In a second experiment, the roles of color and shape were switched. Pointing trajectories showed large priming effects that were amplified by feature-based attention, indicating that attention modulated the earliest phases of motor output. Priming effects as well as their attentional modulation occurred even though participants remained unable to identify the primes, indicating distinct processes underlying visual awareness, attention, and response control.

  2. Multiple Time-Point 68Ga-PSMA I&T PET/CT for Characterization of Primary Prostate Cancer: Value of Early Dynamic and Delayed Imaging.

    PubMed

    Schmuck, Sebastian; Mamach, Martin; Wilke, Florian; von Klot, Christoph A; Henkenberens, Christoph; Thackeray, James T; Sohns, Jan M; Geworski, Lilli; Ross, Tobias L; Wester, Hans-Juergen; Christiansen, Hans; Bengel, Frank M; Derlin, Thorsten

    2017-06-01

    The aims of this study were to gain mechanistic insights into prostate cancer biology using dynamic imaging and to evaluate the usefulness of multiple time-point 68Ga-prostate-specific membrane antigen (PSMA) I&T PET/CT for the assessment of primary prostate cancer before prostatectomy. Twenty patients with prostate cancer underwent 68Ga-PSMA I&T PET/CT before prostatectomy. The PET protocol consisted of early dynamic pelvic imaging, followed by static scans at 60 and 180 minutes postinjection (p.i.). SUVs, time-activity curves, quantitative analysis based on a 2-tissue compartment model, Patlak analysis, histopathology, and Gleason grading were compared between prostate cancer and benign prostate gland. Primary tumors were identified on both early dynamic and delayed imaging in 95% of patients. Tracer uptake was significantly higher in prostate cancer compared with benign prostate tissue at any time point (P ≤ 0.0003) and increased over time. Consequently, the tumor-to-nontumor ratio within the prostate gland improved over time (2.8 at 10 minutes vs 17.1 at 180 minutes p.i.). Tracer uptake at both 60 and 180 minutes p.i. was significantly higher in patients with higher Gleason scores (P < 0.01). The influx rate (Ki) was higher in prostate cancer than in reference prostate gland (0.055 [r = 0.998] vs 0.017 [r = 0.996]). Primary prostate cancer is readily identified on early dynamic and static delayed 68Ga-PSMA ligand PET images. The tumor-to-nontumor ratio in the prostate gland improves over time, supporting a role of delayed imaging for optimal visualization of prostate cancer.

  3. Alcohol consumption patterns among vocational school students in central Thailand.

    PubMed

    Chaveepojnkamjorn, Wisit

    2012-11-01

    The objective of this study was to evaluate alcohol consumption patterns among vocational school students in central Thailand. We conducted a cross-sectional study among 1,803 vocational students (80.4% aged < 17 years) in central Thailand from December 2007 to February 2008 using a self-administered questionnaire consisting of 2 parts: sociodemographic factors and alcohol drinking behavior. Descriptive statistics, a chi-square test and multiple logistic regression were used to analyze the data. The results showed that 40.9% of male students and 20.9% of female students drank alcoholic beverages. Multiple logistic regression analysis revealed 2 factors associated with alcohol consumption among male subjects: field of study (OR 1.5, 95% CI 1.1-2.0) and GPA (<2: OR 1.8, 95% CI 1.2-2.7; >3: OR 0.6, 95% CI 0.4-0.9). The three most popular venues for drinking were parties (43.1%), home or the dormitory (34.9%), and bars or saloons near the school (20.9%). Among male drinkers, 53.2% drank alcohol 1-2 times per month and 47% drank more than 2 times per month; 40.9% consumed 1-2 drinks per occasion and 36% consumed more than 4 drinks per occasion. Among female drinkers, nearly 78% drank 1-2 times per month and 22% drank more than 2 times per month, and 50.4% drank 2 drinks per month. One-third of male students and 14% of female students reported binge drinking within a 2-week period. Alcohol consumption is a significant problem among Thai vocational school students. Measures for managing this problem are discussed.

  4. Multivariate spatiotemporal visualizations for mobile devices in Flyover Country

    NASA Astrophysics Data System (ADS)

    Loeffler, S.; Thorn, R.; Myrbo, A.; Roth, R.; Goring, S. J.; Williams, J.

    2017-12-01

    Visualizing and interacting with complex multivariate and spatiotemporal datasets on mobile devices is challenging due to their smaller screens, reduced processing power, and limited data connectivity. Pollen data require visualizing pollen assemblages spatially, temporally, and across multiple taxa to understand plant community dynamics through time. Drawing from cartography, information visualization, and paleoecology, we have created new mobile-first visualization techniques that represent multiple taxa across many sites and enable user interaction. Using pollen datasets from the Neotoma Paleoecology Database as a case study, the visualization techniques allow ecological patterns and trends to be quickly understood on a mobile device compared to traditional pollen diagrams and maps. This flexible visualization system can be used for datasets beyond pollen, with the only requirements being point-based localities and multiple variables changing through time or depth.

  5. Identification of high-permeability subsurface structures with multiple point geostatistics and normal score ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Zovi, Francesco; Camporese, Matteo; Hendricks Franssen, Harrie-Jan; Huisman, Johan Alexander; Salandin, Paolo

    2017-05-01

    Alluvial aquifers are often characterized by the presence of braided high-permeable paleo-riverbeds, which constitute an interconnected preferential flow network whose localization is of fundamental importance to predict flow and transport dynamics. Classic geostatistical approaches based on two-point correlation (i.e., the variogram) cannot describe such particular shapes. In contrast, multiple point geostatistics can describe almost any kind of shape using the empirical probability distribution derived from a training image. However, even with a correct training image the exact positions of the channels are uncertain. State information like groundwater levels can constrain the channel positions using inverse modeling or data assimilation, but the method should be able to handle non-Gaussianity of the parameter distribution. Here the normal score ensemble Kalman filter (NS-EnKF) was chosen as the inverse conditioning algorithm to tackle this issue. Multiple point geostatistics and NS-EnKF have already been tested in synthetic examples, but in this study they are used for the first time in a real-world case study. The test site is an alluvial unconfined aquifer in northeastern Italy with an extension of approximately 3 km². A satellite training image showing the braid shapes of the nearby river and electrical resistivity tomography (ERT) images were used as conditioning data to provide information on channel shape, size, and position. Measured groundwater levels were assimilated with the NS-EnKF to update the spatially distributed groundwater parameters (hydraulic conductivity and storage coefficients). Results from the study show that the inversion based on multiple point geostatistics does not outperform the one with a multiGaussian model and that the information from the ERT images did not improve site characterization. These results were further evaluated with a synthetic study that mimics the experimental site. The synthetic results showed that only for a much larger number of conditioning piezometric heads, multiple point geostatistics and ERT could improve aquifer characterization. This shows that state of the art stochastic methods need to be supported by abundant and high-quality subsurface data.
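
    The update step that assimilates head observations into the parameter ensemble can be sketched as a stochastic ensemble Kalman filter analysis; the normal-score transform used in the paper's NS-EnKF is omitted here, and the toy "head" operator and ensemble are invented.

        import numpy as np

        # Stochastic EnKF analysis step: update a parameter ensemble with observations.
        def enkf_update(ensemble, predicted_obs, obs, obs_err_std, seed=0):
            """ensemble: (n_param, n_ens); predicted_obs: (n_obs, n_ens); obs: (n_obs,)"""
            rng = np.random.default_rng(seed)
            n_obs, n_ens = predicted_obs.shape
            A = ensemble - ensemble.mean(axis=1, keepdims=True)
            Y = predicted_obs - predicted_obs.mean(axis=1, keepdims=True)
            C_py = A @ Y.T / (n_ens - 1)                      # parameter-observation covariance
            C_yy = Y @ Y.T / (n_ens - 1) + obs_err_std**2 * np.eye(n_obs)
            K = C_py @ np.linalg.inv(C_yy)                    # Kalman gain
            perturbed = obs[:, None] + rng.normal(0, obs_err_std, size=(n_obs, n_ens))
            return ensemble + K @ (perturbed - predicted_obs)

        # Toy usage: 500 log-conductivity cells, 50 ensemble members, 8 piezometers.
        rng = np.random.default_rng(1)
        ens = rng.normal(-4, 1, size=(500, 50))
        H = rng.random((8, 500)) / 500                        # fake linear "head" operator
        updated = enkf_update(ens, H @ ens, obs=H @ ens[:, 0], obs_err_std=0.05)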

  6. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.

  7. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background: Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results: In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions: The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240

  8. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5ⁿ equilibrium points, among which 3ⁿ equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than the ones with Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, those obtained 3ⁿ locally stable equilibrium points are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Advances in quantum cascade lasers for security and crime-fighting

    NASA Astrophysics Data System (ADS)

    Normand, Erwan L.; Stokes, Robert J.; Hay, Kenneth; Foulger, Brian; Lewis, Colin

    2010-10-01

    Advances in the application of quantum cascade lasers (QCLs) to trace gas detection will be presented. The solution operates in real time (~1 μs per scan), is insensitive to turbulence and vibration, and performs multiple measurements in one sweep. The QCL provides a large dynamic range, with a linear response from the ppt to the % level, and concentrations can be derived with excellent immunity to cross-interference. Point-sensing devices developed by Cascade for homemade and commercial explosives operate by monitoring key constituents in real time and matching the signal to a spatial event (e.g., a sniffer device placed close to an object, or a person walking through an overt or covert portal). A programmable signature detection capability allows detection of multiple chemical compounds covering the most likely range of explosive formulations. The advantages of configuration as "point sensing" or "stand off" will be discussed. In addition to explosives, this method is highly applicable to the detection of mobile drug labs through their volatile chemical releases.

  10. Symmetry, Hopf bifurcation, and the emergence of cluster solutions in time delayed neural networks.

    PubMed

    Wang, Zhen; Campbell, Sue Ann

    2017-11-01

    We consider networks of N identical oscillators with time-delayed, global circulant coupling, modeled by a system of delay differential equations with Z_N symmetry. We first study the existence of Hopf bifurcations induced by the coupling time delay and then use symmetric Hopf bifurcation theory to determine how these bifurcations lead to different patterns of symmetric cluster oscillations. We apply our results to a case study: a network of FitzHugh-Nagumo neurons with diffusive coupling. For this model, we derive the asymptotic stability, global asymptotic stability, absolute instability, and stability switches of the equilibrium point in the plane of coupling time delay (τ) and excitability parameter (a). We investigate the patterns of cluster oscillations induced by the time delay and determine the direction and stability of the bifurcating periodic orbits by employing the multiple timescales method and normal form theory. We find that in the region where stability switching occurs, the dynamics of the system can be switched from the equilibrium point to any symmetric cluster oscillation, and back to equilibrium point as the time delay is increased.
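
    The kind of network analysed here can be explored numerically with a delay differential equation integrator; a minimal sketch using a fixed-step Euler scheme with a history buffer for a ring of delay-coupled FitzHugh-Nagumo units is given below, with all parameter values chosen for illustration only.

        import numpy as np

        # N delay-coupled FitzHugh-Nagumo units on a ring with nearest-neighbour
        # (circulant) diffusive coupling, integrated with explicit Euler.
        N, dt, T = 6, 0.01, 200.0
        tau, c = 2.0, 0.1                      # coupling delay and strength (assumed)
        a, b, eps, I = 0.7, 0.8, 0.08, 0.5

        steps = int(T / dt)
        delay_steps = int(tau / dt)
        v = np.zeros((steps + 1, N))
        w = np.zeros((steps + 1, N))
        v[: delay_steps + 1] = 0.1 * np.random.default_rng(0).random((delay_steps + 1, N))

        for k in range(delay_steps, steps):
            v_del = v[k - delay_steps]                       # delayed neighbour states
            coupling = c * (np.roll(v_del, 1) + np.roll(v_del, -1) - 2 * v[k])
            dv = v[k] - v[k] ** 3 / 3 - w[k] + I + coupling
            dw = eps * (v[k] + a - b * w[k])
            v[k + 1] = v[k] + dt * dv
            w[k + 1] = w[k] + dt * dw

        # Inspecting phase differences between the columns of v over the last cycles
        # reveals in-phase, anti-phase, or cluster oscillation patterns.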

  11. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.

    PubMed

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing heavy casualties and property losses, and emergency logistics problems are receiving increasing attention. This paper studies an emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective is to maximize the time-satisfaction degree. To overcome the incomplete information and uncertain travel times, the paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable, optimal path. The original model is then simplified under the scenario that vehicles only follow the optimal path from the emergency logistics center to the affected point, and Lingo software is used to solve it. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.

  12. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty

    PubMed Central

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing heavy casualties and property losses, and emergency logistics problems are receiving increasing attention. This paper studies an emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective is to maximize the time-satisfaction degree. To overcome the incomplete information and uncertain travel times, the paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable, optimal path. The original model is then simplified under the scenario that vehicles only follow the optimal path from the emergency logistics center to the affected point, and Lingo software is used to solve it. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method. PMID:26417946

  13. Effect of perception irregularity on chain-reaction crash in low visibility

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    2015-06-01

    We present the dynamic model of the chain-reaction crash to take into account the irregularity of the perception-reaction time. When a driver brakes according to taillights of the forward vehicle, the perception-reaction time varies from driver to driver. We study the effect of the perception irregularity on the chain-reaction crash (multiple-vehicle collision) in low-visibility condition. The first crash may induce more collisions. We investigate how the first collision induces the chain-reaction crash numerically. We derive, analytically, the transition points and the region maps for the chain-reaction crash in traffic flow of vehicles with irregular perception times. We clarify the effect of the perception irregularity on the multiple-vehicle collision.
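
    A toy simulation conveys the mechanism: each driver brakes a random perception-reaction time after the car ahead, and a rear-end collision occurs when the gap closes. The sketch below is not the paper's analytical model; the vehicle parameters and the reaction-time distribution are assumptions.

        import numpy as np

        # Discrete-time toy simulation of a chain-reaction (multiple-vehicle) crash
        # with irregular perception-reaction times.
        rng = np.random.default_rng(0)
        n_cars, dt = 20, 0.01
        v0, gap, length = 25.0, 18.0, 5.0          # speed (m/s), initial gap (m), car length (m)
        decel = 6.0                                # braking deceleration (m/s^2)
        react = rng.uniform(0.5, 1.5, n_cars)      # irregular perception-reaction times (s)

        x = -np.arange(n_cars) * (gap + length)    # car 0 leads
        v = np.full(n_cars, v0)
        brake_start = np.full(n_cars, np.inf)
        brake_start[0] = 0.0                       # lead car brakes hard at t = 0
        crashed = np.zeros(n_cars, dtype=bool)

        t = 0.0
        while t < 30.0:
            for i in range(n_cars):
                braking = t >= brake_start[i]
                a = -decel if braking and v[i] > 0 else 0.0
                v[i] = max(0.0, v[i] + a * dt)
                x[i] += v[i] * dt
            for i in range(1, n_cars):
                # driver i reacts a random time after the car ahead starts braking
                if np.isinf(brake_start[i]) and t >= brake_start[i - 1]:
                    brake_start[i] = brake_start[i - 1] + react[i]
                if not crashed[i] and x[i - 1] - x[i] <= length:   # rear-end collision
                    crashed[i] = True
                    v[i] = v[i - 1] = 0.0          # both vehicles assumed to stop on impact
            t += dt

        print(f"{crashed.sum()} of {n_cars - 1} following cars collided")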

  14. Multiple-, But Not Single-, Dose of Parecoxib Reduces Shoulder Pain after Gynecologic Laparoscopy

    PubMed Central

    Zhang, Hufei; Shu, Haihua; Yang, Lu; Cao, Minghui; Zhang, Jingjun; Liu, Kexuan; Xiao, Liangcan; Zhang, Xuyu

    2012-01-01

    Background: The aim of this study was to investigate effect of single- and multiple-dose of parecoxib on shoulder pain after gynecologic laparoscopy. Methods: 126 patients requiring elective gynecologic laparoscopy were randomly allocated to three groups. Group M (multiple-dose): receiving parecoxib 40mg at 30min before the end of surgery, at 8 and 20hr after surgery, respectively; Group S (single-dose): receiving parecoxib 40mg at 30min before the end of surgery and normal saline at the corresponding time points; Group C (control): receiving normal saline at the same three time points. The shoulder pain was evaluated, both at rest and with motion, at postoperative 6, 24 and 48hr. The impact of shoulder pain on patients' recovery (activity, mood, walking and sleep) was also evaluated. Meanwhile, rescue analgesics and complications were recorded. Results: The overall incidence of shoulder pain in group M (37.5%) was lower than that in group C (61.9%) (difference=-24.4%; 95% CI: 3.4~45.4%; P=0.023). Whereas, single-dose regimen (61.0%) showed no significant reduction (difference with control=-0.9%; 95% CI: -21.9~20.0%; P=0.931). Moreover, multiple-dose regimen reduced the maximal intensity of shoulder pain and the impact for activity and mood in comparison to the control. Multiple-dose of parecoxib decreased the consumption of rescue analgesics. The complications were similar among all groups and no severe complications were observed. Conclusions: Multiple-, but not single-, dose of parecoxib may attenuate the incidence and intensity of shoulder pain and thereby improve patients' quality of recovery following gynecologic laparoscopy. PMID:23136538

  15. Understanding cracking failures of coatings: A fracture mechanics approach

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Ryong

    A fracture mechanics analysis of coating (paint) cracking was developed. A strain energy release rate (G_c) expression due to the formation of a new crack in a coating was derived for bending and tension loadings in terms of the moduli, thicknesses, Poisson's ratios, load, residual strain, etc. Four-point bending and instrumented impact tests were used to determine the in-situ fracture toughness of coatings as functions of increasing baking (drying) time. The system used was a thin coating layer on a thick substrate layer. The substrates included steel, aluminum, polycarbonate, acrylonitrile-butadiene-styrene (ABS), and Noryl. The coatings included newly developed automotive paints. The four-point bending configuration promoted well-defined transverse multiple coating cracks on both steel and polymeric substrates. The crosslinked-type automotive coatings on steel substrates showed large cracks without microcracks. When theoretical predictions for energy release rate were compared to experimental data for coating/steel substrate samples with multiple cracking, the agreement was good. Crosslinked-type coatings on polymeric substrates showed more cracks than theory predicted and the G_c values were high. Solvent evaporation type coatings on polymeric substrates showed clean multiple cracking and the G_c values were higher than those obtained from the tension analysis of tension experiments with the same substrates. All the polymeric samples showed surface embrittlement after long baking times in the four-point bending tests. The most apparent surface embrittlement was observed in the acrylonitrile-butadiene-styrene (ABS) substrate system. The impact properties of coatings as a function of baking time were also investigated. These experiments were performed using an instrumented impact tester. There was a rapid decrease in G_c at short baking times and convergence to a constant value at long baking times. The surface embrittlement conditions and an embrittlement toughness were found upon impact loading. This analysis provides a basis for a quantitative approach to measuring coating toughness.

  16. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    PubMed

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated from these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
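
    A minimal sketch of the interval-to-bits step is given below. It does not reproduce the paper's discrete-wavelet-transform detection or its exact encoding; the fiducial times are simulated, and the quantise-and-keep-low-order-bits scheme is an illustrative stand-in for the MFBSG bit extraction.

        import numpy as np

        # Simulated per-beat fiducial arrival times; in the paper these are detected
        # from real ECG with a discrete wavelet transform.
        rng = np.random.default_rng(0)
        n_beats = 40
        rr = rng.normal(0.85, 0.05, n_beats)               # beat-to-beat variability (s)
        r_times = np.cumsum(rr)
        offsets = {"P": -0.16, "Q": -0.04, "S": 0.04, "T": 0.30}   # nominal offsets from R (s)
        fiducials = {k: r_times + v + rng.normal(0, 0.01, n_beats) for k, v in offsets.items()}

        def interval_bits(intervals, n_bits=4, resolution_ms=1.0):
            """Quantise each interval and keep its n_bits least-significant bits."""
            q = np.round(np.abs(intervals) * 1000.0 / resolution_ms).astype(int)
            return [(val >> b) & 1 for val in q for b in range(n_bits)]

        bits = []
        bits += interval_bits(np.diff(r_times))                        # RR intervals
        for name in ("P", "Q", "S", "T"):
            bits += interval_bits(fiducials[name] - r_times)           # RP, RQ, RS, RT intervals
        key = "".join(str(int(b)) for b in bits[:128])                 # first 128 bits
        print(key)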

  17. Pointright: a system to redirect mouse and keyboard control among multiple machines

    DOEpatents

    Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA

    2008-09-30

    The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.

  18. A Multiple Ant Colony Metahuristic for the Air Refueling Tanker Assignment Problem

    DTIC Science & Technology

    2002-03-01

    Problem: The tanker assignment problem can be modeled as a job shop scheduling problem (JSSP). The JSSP is made up of n jobs, composed of m ordered...points) to be processed on all the machines (tankers). The problem with using JSSP is that the tanker assignment problem has multiple objectives... JSSP will minimize the time it takes for all jobs, but this may take an inordinate number of tankers. Thus using JSSP alone is not necessarily a good

  19. Interaction of pulsating and spinning waves in condensed phase combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booty, M.R.; Margolis, S.B.; Matkowsky, B.J.

    1986-10-01

    The authors employ a nonlinear stability analysis in the neighborhood of a multiple bifurcation point to describe the interaction of pulsating and spinning modes of condensed phase combustion. Such phenomena occur in the synthesis of refractory materials. In particular, they consider the propagation of combustion waves in a long thermally insulated cylindrical sample and show that steady, planar combustion is stable for a modified activation energy/melting parameter less than a critical value. Above this critical value primary bifurcation states, corresponding to time-periodic pulsating and spinning modes of combustion, emanate from the steadily propagating solution. By varying the sample radius, the authors split a multiple bifurcation point to obtain bifurcation diagrams which exhibit secondary, tertiary, and quaternary branching to various types of quasi-periodic combustion waves.

  20. Cooperative path following control of multiple nonholonomic mobile robots.

    PubMed

    Cao, Ke-Cai; Jiang, Bin; Yue, Dong

    2017-11-01

    The cooperative path following control problem for multiple nonholonomic mobile robots is considered in this paper. Based on a decomposition framework, the cooperative path following problem is transformed into a path following problem and a cooperative control problem; the cascade theory of non-autonomous systems is then employed in the controller design without resorting to feedback linearization. A time-varying coordinate transformation based on dilation is introduced to resolve the uncontrollability of nonholonomic robots when the whole group's reference converges to a stationary point. Cooperative path following controllers for nonholonomic robots are proposed for a persistent reference and for a reference target that converges to a stationary point, respectively. Simulation results in Matlab illustrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Hydrogenation and interesterification effects on the oxidative stability and melting point of soybean oil.

    PubMed

    Daniels, Roger L; Kim, Hyun Jung; Min, David B

    2006-08-09

    Soybean oil with an iodine value of 136 was hydrogenated to have iodine values of 126 and 117. The soybean oils with iodine values of 136, 126, and 117 were randomly interesterified using sodium methoxide. The oxidative stabilities of the hydrogenated and/or interesterified soybean oils were evaluated by measuring the headspace oxygen content by gas chromatography, and the induction time was measured using Rancimat. The melting points of the oils were evaluated by differential scanning calorimetry. Duncan's multiple range test of the headspace oxygen and induction time showed that hydrogenation increased the headspace oxygen content and induction time at alpha = 0.05. Interesterification decreased the headspace oxygen and the induction time for the soybean oils with iodine values of 136, 126, and 117 at alpha = 0.05. Hydrogenation increased the melting points as the iodine value decreased from 136 and 126 to 117 at alpha = 0.05. The random interesterification increased the melting points of soybean oils with iodine values of 136, 126, and 117 at alpha = 0.05. The combined effects of hydrogenation and interesterification increased the oxidative stability of soybean oil at alpha = 0.05 and the melting point at alpha = 0.01. The optimum combination of hydrogenation and random interesterification can improve the oxidative stability and increase the melting point to expand the application of soybean oil in foods.

  2. A new paper-based platform technology for point-of-care diagnostics.

    PubMed

    Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad

    2014-10-21

    Currently, lateral flow immunoassays (LFIAs) are not able to perform complex multi-step immunodetection tests because of their inability to introduce multiple reagents in a controlled manner to the detection area autonomously. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure to form the fluidic network. Unlike the existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and the timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device which autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase based Enzyme-Linked ImmunoSorbent Assay (ELISA) protocol with rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fM was obtained.

  3. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
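
    Because the components of a finite-difference gradient are mutually independent, they parallelize naturally, which is the essence of the approach described. The sketch below uses Python multiprocessing with a toy objective standing in for a trajectory simulation; it only illustrates the idea, not the POST3D implementation.

        import numpy as np
        from multiprocessing import Pool

        # Toy objective standing in for an expensive trajectory simulation.
        def objective(design):
            return np.sum((design - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * design))

        # One forward-difference component; each call is independent of the others.
        def partial_derivative(args):
            design, i, h = args
            bumped = design.copy()
            bumped[i] += h
            return (objective(bumped) - objective(design)) / h

        def gradient(design, h=1e-6, workers=4):
            tasks = [(design, i, h) for i in range(design.size)]
            with Pool(workers) as pool:
                return np.array(pool.map(partial_derivative, tasks))

        if __name__ == "__main__":
            x = np.zeros(12)
            print(gradient(x))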

  4. Message survival and decision dynamics in a class of reactive complex systems subject to external fields

    NASA Astrophysics Data System (ADS)

    Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.

    2014-07-01

    In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.

  5. Estimating Time to Event From Longitudinal Categorical Data: An Analysis of Multiple Sclerosis Progression.

    PubMed

    Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A

    2007-12-01

    The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or increasing of one point of EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
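
    The step from an estimated transition matrix to a covariate-specific time-to-progression curve can be sketched directly: make the progressed states absorbing and track the probability of not yet having been absorbed. The transition matrix, state bands, and starting state below are invented for illustration.

        import numpy as np

        # Invented transition matrix over 5 coarse disability bands (visit to visit).
        n_states = 5
        P = np.array([[0.85, 0.10, 0.03, 0.01, 0.01],
                      [0.05, 0.80, 0.10, 0.03, 0.02],
                      [0.02, 0.08, 0.75, 0.10, 0.05],
                      [0.01, 0.03, 0.10, 0.76, 0.10],
                      [0.00, 0.01, 0.04, 0.15, 0.80]])

        progressed = 3                            # "progression" = reaching band >= 3
        Q = P.copy()
        Q[progressed:, :] = 0.0
        Q[progressed:, progressed:] = np.eye(n_states - progressed)   # absorbing once progressed

        start = np.zeros(n_states)
        start[1] = 1.0                            # patient starts in band 1

        dist = start.copy()
        survival = []
        for t in range(1, 21):                    # 20 follow-up visits
            dist = dist @ Q
            survival.append(dist[:progressed].sum())   # P(not yet progressed by visit t)
        print(np.round(survival, 3))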

  6. Riemannian multi-manifold modeling and clustering in brain networks

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.

    2017-08-01

    This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points on the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.

  7. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5ⁿ equilibrium points, 3ⁿ of which are locally stable and the others are unstable. The importance of the derived results is that it reveals that the discontinuous neural networks can have greater storage capacity than the continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3ⁿ locally stable equilibrium points obtained in this paper are located in not only saturated regions, but also unsaturated regions, due to the non-monotonic structure of discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Time course of the acute effects of core stabilisation exercise on seated postural control.

    PubMed

    Lee, Jordan B; Brown, Stephen H M

    2017-09-20

    Core stabilisation exercises are often promoted for purposes ranging from general fitness to high-performance athletics, and the prevention and rehabilitation of back troubles. These exercises, when performed properly, may have the potential to enhance torso postural awareness and control, yet the potential for achieving immediate gains has not been completely studied. Fourteen healthy young participants performed a single bout of non-fatiguing core stabilisation exercise that consisted of repeated sets of 2 isometric exercises, the side bridge and the four-point contralateral arm-and-leg extension. Seated postural control, using an unstable balance platform on top of a force plate, was assessed before and after exercise, including multiple time points within a 20-minute follow-up period. Nine standard postural control variables were calculated at each time point, including sway displacement and velocity ranges, root mean squares and cumulative path length. Statistical analysis showed that none of the postural control variables were significantly different at any time point following completion of core stabilisation exercise. Thus, we conclude that a single bout of acute core stabilisation exercise is insufficient to immediately improve seated trunk postural control in young healthy individuals.

  9. Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Kratochvil, Byron

    1980-01-01

    Presents an undergraduate experiment demonstrating sampling error. A mixture of potassium hydrogen phthalate and sucrose is selected as the sampling system; a self-zeroing, automatically refillable buret minimizes the titration time for multiple samples, and a dilute back-titrant provides high end-point precision. (CS)

  10. 32 CFR 525.4 - Entry authorization (policy).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...

  11. 32 CFR 525.4 - Entry authorization (policy).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... single or multiple entries. (4) Captains of ships and/or marine vessels planning to enter Kwajalein... of passengers (include list when practicable). (vi) Purpose of flight. (vii) Plan of flight route, including the point of origin of flight and its designation and estimated date and times of arrival and...

  12. Development of a short version of the modified Yale Preoperative Anxiety Scale.

    PubMed

    Jenkins, Brooke N; Fortier, Michelle A; Kaplan, Sherrie H; Mayes, Linda C; Kain, Zeev N

    2014-09-01

    The modified Yale Preoperative Anxiety Scale (mYPAS) is the current "criterion standard" for assessing child anxiety during induction of anesthesia and has been used in >100 studies. This observational instrument covers 5 items and is typically administered at 4 perioperative time points. Application of this complex instrument in busy operating room (OR) settings, however, presents a challenge. In this investigation, we examined whether the instrument could be modified and made easier to use in OR settings. This study used qualitative methods, principal component analyses, Cronbach αs, and effect sizes to create the mYPAS-Short Form (mYPAS-SF) and reduce the number of time points of assessment. Data were obtained from multiple patients (N = 3798; mean age = 5.63 years) who were recruited in previous investigations using the mYPAS over the past 15 years. After qualitative analysis, the "use of parent" item was eliminated due to content overlap with other items. The reduced item set accounted for 82% or more of the variance in child anxiety and produced a Cronbach α of at least 0.92. To reduce the number of time points of assessment, a minimum Cohen d effect size criterion of 0.48 change in mYPAS score across time points was used. This led to eliminating the walk-to-the-OR and entrance-to-the-OR time points. Reducing the mYPAS to 4 items, creating the mYPAS-SF that can be administered at 2 time points, retained the accuracy of the measure while allowing the instrument to be more easily used in clinical research settings.
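
    The two statistics that drive the reduction, Cronbach's α for internal consistency of the retained items and Cohen's d for the change across time points, can be computed directly. The sketch below uses small made-up score arrays, not the study's data; only the 0.48 threshold is taken from the abstract.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def cohens_d(x, y):
    """Effect size between total scores at two time points (pooled SD)."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

# Hypothetical scores: 4 retained items, 6 children, two perioperative time points.
rng = np.random.default_rng(1)
child = rng.normal(30, 6, size=(6, 1))          # child-level anxiety
t1 = child + rng.normal(0, 2, size=(6, 4))      # item scores at time point 1
t2 = t1 + rng.normal(8, 3, size=(6, 4))         # anxiety rises near induction
print(round(cronbach_alpha(t2), 2))
print(round(cohens_d(t2.sum(axis=1), t1.sum(axis=1)), 2), ">= 0.48 would retain the interval")
```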

  13. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
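
    As a rough illustration of the convex-hull step, the sketch below computes hull volume, mass under an assumed uniform density, and a crude centre-of-mass estimate for one segment's point cloud using scipy. The density value, the cylinder-shaped toy cloud, and the vertex-mean centroid are simplifying assumptions, not the paper's procedure (which subdivides segments and derives full inertial parameters).

```python
import numpy as np
from scipy.spatial import ConvexHull

def segment_parameters(points, density=1000.0):
    """Approximate parameters of one body segment from its 3D point cloud:
    convex-hull volume, mass (assumed uniform density in kg/m^3), and a
    crude centre-of-mass estimate (mean of the hull vertices)."""
    hull = ConvexHull(points)
    volume = hull.volume                        # m^3 if points are in metres
    mass = density * volume
    com = points[hull.vertices].mean(axis=0)    # rough approximation only
    return volume, mass, com

# Hypothetical thigh-like segment: a noisy cylinder of scanned 3D points.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 2000)
r = 0.07 * np.sqrt(rng.uniform(0, 1, 2000))     # radius ~7 cm
z = rng.uniform(0, 0.45, 2000)                  # length ~45 cm
cloud = np.c_[r * np.cos(theta), r * np.sin(theta), z]
print(segment_parameters(cloud))
```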

  14. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.

  15. On the use of multiple-point statistics to improve groundwater flow modeling in karst aquifers: A case study from the Hydrogeological Experimental Site of Poitiers, France

    NASA Astrophysics Data System (ADS)

    Le Coz, Mathieu; Bodin, Jacques; Renard, Philippe

    2017-02-01

    Limestone aquifers often exhibit complex groundwater flow behaviors resulting from depositional heterogeneities and post-lithification fracturing and karstification. In this study, multiple-point statistics (MPS) was applied to reproduce karst features and to improve groundwater flow modeling. For this purpose, MPS realizations were used in a numerical flow model to simulate the responses to pumping test experiments observed at the Hydrogeological Experimental Site of Poitiers, France. The main flow behaviors evident in the field data were simulated, particularly (i) the early-time inflection of the drawdown signal at certain observation wells and (ii) the convex behavior of the drawdown curves at intermediate times. In addition, it was shown that the spatial structure of the karst features at various scales is critical with regard to the propagation of the depletion wave induced by pumping. Indeed, (i) the spatial shape of the cone of depression is significantly affected by the karst proportion in the vicinity of the pumping well, and (ii) early-time inflection of the drawdown signal occurs only at observation wells crossing locally well-developed karst features.

  16. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  17. Relationship of Systemic Cytokine Concentrations to Cognitive Function over Two Years in Women with Early Stage Breast Cancer

    PubMed Central

    Lyon, Debra E.; Cohen, Ronald; Chen, Huaihou; Kelly, Debra L.; McCain, Nancy L.; Starkweather, Angela; Sturgill, Jamie; Jackson-Cook, Colleen K.

    2016-01-01

    Cancer and its treatment are frequently associated with cancer-related cognitive impairment (CRCI). While CRCI has been linked to chemotherapy, there is increasing evidence that the condition may start prior to treatment and, for some, remain unresolved after active treatment and into survivorship. Although the pathophysiology of the condition is complex, alterations in systemic cytokines, signaling molecules activated in response to infection or injury that trigger inflammation, are a possible mechanism linked to cognitive dysfunction in breast cancer and other conditions. Given the conflicting results in the literature, the lack of focus on domain-specific cognitive testing, and the need for a longer time period given the multiple modalities of standard treatments for early-stage breast cancer, this longitudinal study was conducted to address these gaps. Methods: We assessed 75 women with early-stage breast cancer at five points over two years, starting prior to the initial chemotherapy through 24 months after chemotherapy initiation. Measures included a validated computerized evaluation of domain-specific cognitive functioning and a 17-plex panel of plasma cytokines. Linear mixed-effects models were applied to test the relationships of clinical variables and cytokine concentrations to each cognitive domain. Results: Levels and patterns of cytokine concentrations varied over time: six of the 17 cytokines (IL-6, IL-12, IL-17, G-CSF, MIP-1β, and MCP-1) had the most variability. Some cytokine levels (e.g., IL-6) increased during chemotherapy but then decreased subsequently, while others (e.g., IL-17) consistently declined from baseline over time. There were multiple relationships among cytokines and cognition, which varied over time. At baseline, elevated concentrations of G-CSF and reduced concentrations of IL-17 were associated with faster psychomotor speed. At the second time point (prior to mid-chemotherapy), multiple cytokines had significant associations with psychomotor speed, complex attention, executive function, verbal memory, cognitive flexibility, composite memory and visual memory. Six months after chemotherapy initiation and at the one-year point, there were multiple significant relationships among cytokines and multiple cognitive domains. At two years, fewer significant relationships were noted; however, lower concentrations of IL-7, a hematopoietic cytokine, were associated with better psychomotor speed, complex attention, and memory (composite, verbal and visual). MCP-1 was inversely associated with psychomotor speed and complex attention, and higher levels of MIP-1β were related to better complex attention. Conclusion: Levels and patterns of cytokines changed over time and demonstrated associations with domain-specific cognitive functioning that varied over time. The observed associations between cytokines and cognitive performance provide evidence that not only prototypical cytokines (i.e., IL-6, TNF-α, and IL-1β) but also cytokines from multiple classes may contribute to the inflammatory environment that is associated with cognitive dysfunction. Future studies to better delineate the cytokine changes, both individually and in networks, are needed to precisely assess a mechanistic link between cytokines and cognitive function in women receiving treatments for breast cancer. PMID:27890459
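
    The analysis step named in the abstract, linear mixed-effects models relating cytokine concentrations to each cognitive domain across repeated visits, can be sketched with statsmodels. The data frame, the column names (subject, time, il6, psychomotor_speed) and the random-intercept structure below are illustrative assumptions, not the study's actual model specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: repeated cognitive scores and one cytokine
# concentration per subject across five study time points.
rng = np.random.default_rng(3)
n_sub, n_time = 30, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_sub), n_time),
    "time": np.tile(np.arange(n_time), n_sub),
    "il6": rng.lognormal(0.0, 0.5, n_sub * n_time),
})
df["psychomotor_speed"] = (100 - 2.0 * df["il6"] + 0.5 * df["time"]
                           + rng.normal(0, 3, len(df)))

# Random intercept per subject; fixed effects for cytokine level and time.
model = smf.mixedlm("psychomotor_speed ~ il6 + time", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```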

  18. Challenges in early clinical development of adjuvanted vaccines.

    PubMed

    Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David

    2015-06-08

    A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis generating plans, including system biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for the dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark testing data set provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.
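
    Step (2), local principal component analysis on the detected road points, can be sketched as follows: for each point, the covariance of its k nearest neighbours is eigendecomposed, the dominant eigenvector gives a candidate centerline direction, and an eigenvalue ratio scores how line-like the neighbourhood is. The neighbourhood size, the 2D toy strip, and the linearity score are assumptions for illustration, not the authors' exact primitive-extraction procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def centerline_directions(road_pts, k=20):
    """For each detected road point, estimate the local principal direction
    via PCA of its k nearest neighbours; points whose first eigenvalue
    dominates are good candidates for centerline primitives."""
    nn = NearestNeighbors(n_neighbors=k).fit(road_pts)
    _, idx = nn.kneighbors(road_pts)
    directions, linearity = [], []
    for neighbours in road_pts[idx]:
        centred = neighbours - neighbours.mean(axis=0)
        cov = centred.T @ centred / (len(neighbours) - 1)
        w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
        directions.append(v[:, -1])             # dominant local direction
        linearity.append((w[-1] - w[-2]) / w[-1])
    return np.array(directions), np.array(linearity)

# Toy data: a noisy straight road strip in the x-y plane.
rng = np.random.default_rng(4)
pts = np.c_[np.linspace(0, 100, 500), rng.normal(0, 1.5, 500)]
d, lin = centerline_directions(pts)
print(d[0], round(lin.mean(), 2))
```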

  20. An efficient algorithm for the generalized Foldy-Lax formulation

    NASA Astrophysics Data System (ADS)

    Huang, Kai; Li, Peijun; Zhao, Hongkai

    2013-02-01

    Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat those small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution of the formulation are established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each step of the iteration. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
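
    A reduced sketch of the physically motivated Gauss-Seidel iteration is given below for the point-scatterer part of the system only (the boundary-integral block for extended obstacles is omitted): each exciting field u_i is swept and updated in place from the incident plane wave plus the fields re-radiated by all other point scatterers through the 3D free-space Green's function. The scattering coefficients, wavenumber and geometry are small illustrative values chosen so the sweep stays contractive; they are not taken from the paper.

```python
import numpy as np

def greens_3d(x, y, k):
    """Free-space Helmholtz Green's function in 3D."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * k * r) / (4 * np.pi * r)

def foldy_lax_gauss_seidel(points, sigma, k, direction, n_iter=50):
    """Solve the point-scatterer Foldy-Lax system
        u_i = u_inc(x_i) + sum_{j != i} sigma_j G(x_i, x_j) u_j
    with a Gauss-Seidel sweep (each u_i updated in place)."""
    u_inc = np.exp(1j * k * points @ direction)   # incident plane wave
    u = u_inc.copy()
    n = len(points)
    for _ in range(n_iter):
        for i in range(n):
            coupling = sum(sigma[j] * greens_3d(points[i], points[j], k) * u[j]
                           for j in range(n) if j != i)
            u[i] = u_inc[i] + coupling
    return u

rng = np.random.default_rng(5)
pts = rng.uniform(-1, 1, size=(10, 3))            # 10 random point scatterers
u = foldy_lax_gauss_seidel(pts, sigma=np.full(10, 0.05), k=2 * np.pi,
                           direction=np.array([0.0, 0.0, 1.0]))
print(np.round(np.abs(u), 3))                     # exciting-field magnitudes
```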

  1. Creative use of pilot points to address site and regional scale heterogeneity in a variable-density model

    USGS Publications Warehouse

    Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.

    2010-01-01

    Pilot points for parameter estimation were creatively used to address heterogeneity at both the well-field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers, with one set of inner pilot points (totaling 158) having high spatial density to represent hydraulic conductivity at the site, while a second set of outer points (totaling 36) of lower spatial density was used to represent hydraulic conductivity farther from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from the outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.

  2. Electronic Imaging

    DTIC Science & Technology

    1991-11-01

    Tilted Rough Disc," Donald J. Schertler and Nicholas George "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George...Rough Disc Donald J. Schertler Nicholas George Image Deblurring for Multiple-Point Impulse Bryan J. Stossel Responses Nicholas George z 0 zw V) w LU 0...number of impulses present in the degradation. IMAGE DEBLURRING FOR MULTIPLE-POINT IMPULSE RESPONSESt Bryan J. Stossel Nicholas George Institute of Optics

  3. Model for Bi-objective emergency rescue vehicle routing optimization

    NASA Astrophysics Data System (ADS)

    Yang, Yuhang

    2017-03-01

    The vehicle routing problem is an important research topic in management science. In this paper, a single vehicle may serve multiple disaster points, and two optimization objectives are considered: rescue time and rescue effect. Rescue effect is expressed as the ratio of unloaded material to arrival time each time a rescue vehicle participates in a rescue. The corresponding emergency rescue model is established, and the effectiveness of the model is verified with a simulated annealing algorithm. It can provide a basis for practical decision-making.

  4. Concrete thawing studied by single-point ramped imaging.

    PubMed

    Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W

    1997-12-01

    A series of two-dimensional images of proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristic short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple point acquisition method is presented and the signal sensitivity improvement is discussed.

  5. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
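
    The final interpolation step can be sketched with scipy: once a cluster of agents has settled, each (achieved output, locally optimal input) pair becomes a knot, and a cubic spline through the knots gives a continuous inverse function that an operator queries with a desired set point. The numeric pairs below are made up for illustration and are not from the paper's test functions or robot problem.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical cluster of locally optimal points found by the population:
# each pair is (achieved output, input that reaches it at minimum cost).
outputs = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
optimal_inputs = np.array([0.10, 0.22, 0.41, 0.68, 1.05, 1.55])

# The inverse function: desired output -> locally optimal input, via spline.
inverse = CubicSpline(outputs, optimal_inputs)

# An operator adjusting a single set point (desired output) in real time:
for set_point in (0.3, 1.2, 2.3):
    print(set_point, round(float(inverse(set_point)), 3))
```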

  6. Efficient use of retention time for the analysis of 302 drugs in equine plasma by liquid chromatography-MS/MS with scheduled multiple reaction monitoring and instant library searching for doping control.

    PubMed

    Liu, Ying; Uboh, Cornelius E; Soma, Lawrence R; Li, Xiaoqing; Guan, Fuyu; You, Youwen; Chen, Jin-Wen

    2011-09-01

    Multiple drug target analysis (MDTA) used in doping control is more efficient than single drug target analysis (SDTA). The number of drugs with the potential for abuse is so extensive that full coverage is not possible with SDTA. To address this problem, a liquid chromatography tandem mass spectrometric method was developed for simultaneous analysis of 302 drugs using a scheduled multiple reaction monitoring (s-MRM) algorithm. With a known retention time of an analyte, the s-MRM algorithm monitors each MRM transition only around its expected retention time. Analytes were recovered from plasma by liquid-liquid extraction. Information-dependent acquisition (IDA) functionality was used to combine s-MRM with enhanced product ion (EPI) scans within the same chromatographic analysis. An EPI spectrum library was also generated for rapid identification of analytes. Analysis time for the 302 drugs was 7 min. Scheduled MRM improved the quality of the chromatograms, signal response, reproducibility, and enhanced signal-to-noise ratio (S/N), resulting in more data points. Reduction in total cycle time from 2.4 s in conventional MRM (c-MRM) to 1 s in s-MRM allowed completion of the EPI scan at the same time. The speed for screening and identification of multiple drugs in equine plasma for doping control analysis was greatly improved by this method.

  7. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  8. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when experimental environment undergoes abrupt changes or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines the recent advance in semiparametric analysis based on counting process argument and multiple change-points inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  9. Pilot study of a point-of-use decision support tool for cancer clinical trials eligibility.

    PubMed

    Breitfeld, P P; Weisburd, M; Overhage, J M; Sledge, G; Tierney, W M

    1999-01-01

    Many adults with cancer are not enrolled in clinical trials because caregivers do not have the time to match the patient's clinical findings with varying eligibility criteria associated with multiple trials for which the patient might be eligible. The authors developed a point-of-use portable decision support tool (DS-TRIEL) to automate this matching process. The support tool consists of a hand-held computer with a programmable relational database. A two-level hierarchic decision framework was used for the identification of eligible subjects for two open breast cancer clinical trials. The hand-held computer also provides protocol consent forms and schemas to further help the busy oncologist. This decision support tool and the decision framework on which it is based could be used for multiple trials and different cancer sites.

  10. Pilot Study of a Point-of-use Decision Support Tool for Cancer Clinical Trials Eligibility

    PubMed Central

    Breitfeld, Philip P.; Weisburd, Marina; Overhage, J. Marc; Sledge, George; Tierney, William M.

    1999-01-01

    Many adults with cancer are not enrolled in clinical trials because caregivers do not have the time to match the patient's clinical findings with varying eligibility criteria associated with multiple trials for which the patient might be eligible. The authors developed a point-of-use portable decision support tool (DS-TRIEL) to automate this matching process. The support tool consists of a hand-held computer with a programmable relational database. A two-level hierarchic decision framework was used for the identification of eligible subjects for two open breast cancer clinical trials. The hand-held computer also provides protocol consent forms and schemas to further help the busy oncologist. This decision support tool and the decision framework on which it is based could be used for multiple trials and different cancer sites. PMID:10579605

  11. Unraveling multiple changes in complex climate time series using Bayesian inference

    NASA Astrophysics Data System (ADS)

    Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias

    2016-04-01

    Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of underlying changes are therefore of particular importance in environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. In order to develop a diagnostic approach designed to capture a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. Thus, based on a generic transition model, a probability expression is derived that is capable of indicating multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to environmental time series (about 100 a) from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from the ODP sites 659, 721/722 and 967 interpreted as climate indicators of the African region of the Plio-Pleistocene period (about 5 Ma). The detailed inference unravels multiple transitions underlying the indirect climate observations coinciding with established global climate events.
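
    A much-reduced version of the idea, a Bayesian posterior over the location of a single change in central tendency under Gaussian noise with a flat prior on locations, is sketched below. The kernel-based extension to multiple transitions and to changes in dispersion described in the abstract is not reproduced, and profiling out the segment means (rather than marginalising them) is a simplification made only for brevity.

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location of a single change in mean, assuming
    i.i.d. Gaussian noise with known sigma and a flat prior on locations.
    Segment means are profiled out by plugging in their sample means."""
    n = len(y)
    log_lik = np.full(n, -np.inf)
    for k in range(2, n - 1):                      # change just before index k
        left, right = y[:k], y[k:]
        resid = np.concatenate([left - left.mean(), right - right.mean()])
        log_lik[k] = -0.5 * np.sum(resid ** 2) / sigma ** 2
    log_post = log_lik - log_lik.max()
    post = np.exp(log_post)
    return post / post.sum()

# Synthetic "climate" record: shift in central tendency at t = 60.
rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
post = changepoint_posterior(y)
print(int(post.argmax()))                          # most probable change location
```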

  12. The goal of ape pointing.

    PubMed

    Halina, Marta; Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.

  13. Community Destruction and Traumatic Stress in Post-Tsunami Indonesia

    ERIC Educational Resources Information Center

    Frankenberg, Elizabeth; Nobles, Jenna; Sumantri, Cecep

    2012-01-01

    How are individuals affected when the communities they live in change for the worse? This question is central to understanding neighborhood effects, but few study designs generate estimates that can be interpreted causally. We address issues of inference through a natural experiment, examining post-traumatic stress at multiple time points in a…

  14. Does Familism Lead to Increased Parental Monitoring?: Protective Factors for Coping with Risky Behaviors

    ERIC Educational Resources Information Center

    Romero, Andrea J.; Ruiz, Myrna

    2007-01-01

    We examined coping with risky behaviors (cigarettes, alcohol/drugs, yelling/ hitting, and anger), familism (family proximity and parental closeness) and parental monitoring (knowledge and discipline) in a sample of 56 adolescents (11-15 years old) predominantly of Mexican descent at two time points. Multiple linear regression analysis indicated…

  15. Sex-Specific Associations between Umbilical Cord Blood Testosterone Levels and Language Delay in Early Childhood

    ERIC Educational Resources Information Center

    Whitehouse, Andrew J. O.; Mattes, Eugen; Maybery, Murray T.; Sawyer, Michael G.; Jacoby, Peter; Keelan, Jeffrey A.; Hickey, Martha

    2012-01-01

    Background: Preliminary evidence suggests that prenatal testosterone exposure may be associated with language delay. However, no study has examined a large sample of children at multiple time-points. Methods: Umbilical cord blood samples were obtained at 861 births and analysed for bioavailable testosterone (BioT) concentrations. When…

  16. Pulse Voltammetry in Single Cells Using Platinum Microelectrodes

    DTIC Science & Technology

    1991-11-22

    E. and the range for Ed in multiple pulse voltammetry can be chosen from examination of voltammograms obtained by cyclic voltammetry or linear sweep voltammetry [3,13]. As pointed out by Sinru et al. [14], the potential and time of each pulse have a direct effect on the nature of the voltammetry

  17. Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management

    ERIC Educational Resources Information Center

    Osawa, Noritaka; Asai, Kikuo

    2008-01-01

    A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…

  18. Interruptions and Failure in Higher Education: Evidence from ISEG-UTL

    ERIC Educational Resources Information Center

    Chagas, Margarida; Fernandaes, Graca Leao

    2011-01-01

    Failure in higher education (HE) is the outcome of multiple time-dependent determinants. Interruptions in students' individual school trajectories are one of them, and that is why research on this topic has been attracting much attention these days. From an individual point of view, it is expected that interruptions in school trajectory, whatever…

  19. A multiple pointing-mount control strategy for space platforms

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1992-01-01

    A new disturbance-adaptive control strategy for multiple pointing-mount space platforms is proposed and illustrated by consideration of a simplified 3-link dynamic model of a multiple pointing-mount space platform. Simulation results demonstrate the effectiveness of the new platform control strategy. The simulation results also reveal a system 'destabilization phenomenon' that can occur if the set of individual platform-mounted experiment controllers is 'too responsive.'

  20. Effectiveness of an audience response system in teaching pharmacology to baccalaureate nursing students.

    PubMed

    Vana, Kimberly D; Silva, Graciela E; Muzyka, Diann; Hirani, Lorraine M

    2011-06-01

    It has been proposed that students' use of an audience response system, commonly called clickers, may promote comprehension and retention of didactic material. Whether this method actually improves students' grades, however, is still not determined. The purpose of this study was to evaluate whether a lecture format utilizing multiple-choice PowerPoint slides and an audience response system was more effective than a lecture format using only multiple-choice PowerPoint slides in the comprehension and retention of pharmacological knowledge in baccalaureate nursing students. The study also assessed whether the additional use of clickers positively affected students' satisfaction with their learning. Results from 78 students who attended lecture classes with multiple-choice PowerPoint slides plus clickers were compared with those of 55 students who utilized multiple-choice PowerPoint slides only. Test scores between these two groups were not significantly different. A satisfaction questionnaire showed that 72.2% of the control students did not desire the opportunity to use clickers. Of the group utilizing the clickers, 92.3% recommend the use of this system in future courses. The use of multiple-choice PowerPoint slides and an audience response system did not seem to improve the students' comprehension or retention of pharmacological knowledge as compared with those who used solely multiple-choice PowerPoint slides.

  1. Human detection and motion analysis at security points

    NASA Astrophysics Data System (ADS)

    Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.

    2003-08-01

    This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.

  2. Assisting People with Developmental Disabilities Improve Their Collaborative Pointing Efficiency with a Multiple Cursor Automatic Pointing Assistive Program

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Cheng, Hsiao-Fen; Li, Chia-Chun; Shih, Ching-Tien; Chiang, Ming-Shan

    2010-01-01

    This study evaluated whether four persons (two groups) with developmental disabilities would be able to improve their collaborative pointing performance through a Multiple Cursor Automatic Pointing Assistive Program (MCAPAP) with a newly developed mouse driver (i.e., a new mouse driver that replaces the standard mouse driver and is able to…

  3. The use of multiple time point dynamic positron emission tomography/computed tomography in patients with oral/head and neck cancer does not predictably identify metastatic cervical lymph nodes.

    PubMed

    Carlson, Eric R; Schaefferkoetter, Josh; Townsend, David; McCoy, J Michael; Campbell, Paul D; Long, Misty

    2013-01-01

    To determine whether the time course of 18-fluorine fluorodeoxyglucose (18F-FDG) activity in multiple consecutively obtained 18F-FDG positron emission tomography (PET)/computed tomography (CT) scans predictably identifies metastatic cervical adenopathy in patients with oral/head and neck cancer. It is hypothesized that the activity will increase significantly over time only in those lymph nodes harboring metastatic cancer. A prospective cohort study was performed whereby patients with oral/head and neck cancer underwent consecutive imaging at 9 time points with PET/CT from 60 to 115 minutes after injection with 18F-FDG. The primary predictor variable was the status of the lymph nodes based on dynamic PET/CT imaging. Metastatic lymph nodes were defined as those that showed an increase greater than or equal to 10% over the baseline standard uptake values. The primary outcome variable was the pathologic status of the lymph node. A total of 2,237 lymph nodes were evaluated histopathologically in the 83 neck dissections that were performed in 74 patients. A total of 119 lymph nodes were noted to have hypermetabolic activity on the 90-minute (static) portion of the study and could be assessed across time points. When we compared the PET/CT time point (dynamic) data with the histopathologic analysis of the lymph nodes, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 60.3%, 70.5%, 66.0%, 65.2%, and 65.5%, respectively. The use of dynamic PET/CT imaging does not permit the ablative surgeon to depend only on the results of the PET/CT study to determine which patients will benefit from neck dissection. As such, we maintain that surgeons should continue to rely on clinical judgment and maintain a low threshold for executing neck dissection in patients with oral/head and neck cancer, including those patients with N0 neck designations. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
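
    The five reported metrics follow from a standard 2x2 table of dynamic-PET-positive/negative nodes against histopathology. The sketch below uses hypothetical counts chosen only so that the outputs land near the percentages quoted in the abstract; the actual per-node tabulation is not given in this record.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table metrics used to evaluate the dynamic PET/CT
    criterion against nodal histopathology."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical split of the 119 hypermetabolic nodes (illustrative only).
print(diagnostic_metrics(tp=35, fp=18, fn=23, tn=43))
```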

  4. Research on fully distributed optical fiber sensing security system localization algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen

    2013-12-01

    A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of the multiple grouped data separately, it not only utilizes the advantages of the frequency-analysis method to determine the most effective data group more accurately, but also meets the requirement of the real-time monitoring system. Supplemented with a short-term energy calculation on the grouped signal, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point can be effectively achieved through the cross-correlation localization algorithm. The experimental results show that the proposed algorithm can realize the accurate location of the climbing point, while outside interference noise from non-climbing behavior can be effectively filtered out.
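
    The signal-processing chain described, short-time average zero-crossing rate plus short-term energy to pick the informative data group, followed by cross-correlation between the two interferometer outputs to localize the event, can be sketched as below. The frame length, sample rate, propagation speed and the synthetic disturbance are illustrative assumptions, not the system's actual parameters; the sign of the estimated offset depends on the correlation convention.

```python
import numpy as np

def short_time_features(x, frame=256, hop=128):
    """Short-time average zero-crossing rate and short-term energy,
    the two features used to pick the most informative data group."""
    zcr, energy = [], []
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame]
        zcr.append(np.mean(np.abs(np.diff(np.sign(seg))) > 0))
        energy.append(np.sum(seg ** 2))
    return np.array(zcr), np.array(energy)

def locate_by_cross_correlation(a, b, fs, v):
    """Estimate the event position from the signed delay between the two
    interferometer outputs: delay -> distance via propagation speed v."""
    lags = np.arange(-(len(b) - 1), len(a))
    xc = np.correlate(a, b, mode="full")
    delay = lags[np.argmax(np.abs(xc))] / fs      # seconds; sign = which arm leads
    return 0.5 * v * delay                        # offset from the midpoint

# Synthetic test: the same disturbance arrives 100 samples apart at the outputs.
fs, v = 10_000_000, 2.0e8                         # sample rate (Hz), speed in fibre (m/s)
rng = np.random.default_rng(7)
event = rng.standard_normal(2000)
a = np.concatenate([np.zeros(500), event, np.zeros(1500)])
b = np.concatenate([np.zeros(600), event, np.zeros(1400)])
print(round(short_time_features(a)[0].max(), 2),
      locate_by_cross_correlation(a, b, fs, v), "m")
```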

  5. Method and apparatus for fiber optic multiple scattering suppression

    NASA Technical Reports Server (NTRS)

    Ackerson, Bruce J. (Inventor)

    2000-01-01

    The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.

  6. An Adaptive Dynamic Pointing Assistance Program to Help People with Multiple Disabilities Improve Their Computer Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling

    2010-01-01

    The latest research adopted software technology to redesign the mouse driver, turning a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily, if at all, use a standard mouse, to improve their pointing performance through a new operation method, the Extended Dynamic Pointing Assistive Program (EDPAP),…

  7. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  8. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  9. THEMIS Global Mosaics

    NASA Astrophysics Data System (ADS)

    Gorelick, N. S.; Christensen, P. R.

    2005-12-01

    We have developed techniques to make seamless, controlled global mosaics from the more than 50,000 multi-spectral infrared images of Mars returned by the THEMIS instrument aboard the Mars Odyssey spacecraft. These images cover more than 95% of the surface at 100m/pixel resolution at both day and night local times. Uncertainties in the position and pointing of the spacecraft, varying local time, and imaging artifacts make creating well-registered mosaics from these datasets a challenging task. In preparation for making global mosaics, many full-resolution regional mosaics have been made. These mosaics typically cover an area 10x10 degrees or smaller, and are constructed from only a few hundred images. To make regional mosaics, individual images are geo-rectified using the USGS ISIS software. This dead-reckoning is sufficient to approximate position to within 400m in cases where the SPICE information was downlinked. Further coregistration of images is handled in two ways: grayscale difference minimization in overlapping regions through integer pixel shifting, or automatic tie-point generation using a radial symmetry transformation (RST). The RST identifies points within an image that exhibit 4-way symmetry. Martian craters tend to be very radially symmetric, and the RST can pin-point a crater center to sub-pixel accuracy in both daytime and nighttime images, independent of lighting, time of day, or seasonal effects. Additionally, the RST works well on visible-light images, and in a 1D application, on MOLA tracks, to provide precision tie-points across multiple data sets. The RST often finds many points of symmetry that aren't related to surface features. These "false hits" are managed using a clustering algorithm that identifies constellations of points that occur in multiple images, independent of scaling or other affine transformations. This technique is able to make use of data in which the "good" tie-points comprise even less than 1% of total candidate tie-points. Once tie-points have been identified, the individual images are warped into their final shape and position, and then mosaicked and blended. To make seamless mosaics, each image can be level adjusted to normalize its values using histogram-fitting, but in most cases a linear contrast stretch to a fixed standard deviation is sufficient, although it destroys the absolute radiometry of the mosaic. For very large mosaics, using a high-pass/low-pass separation and blending the two pieces separately before recombining them has also provided positive results.
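
    The mosaicking step mentioned last, a linear contrast stretch of each tile to a fixed mean and standard deviation so that adjacent tiles blend (at the cost of absolute radiometry), can be sketched in a few lines. The target values and the synthetic tiles below are illustrative assumptions, not THEMIS processing parameters.

```python
import numpy as np

def stretch_to_fixed_std(image, target_std=40.0, target_mean=128.0):
    """Linear contrast stretch of one mosaic tile to a fixed mean and
    standard deviation, so adjacent tiles blend without visible seams
    (this intentionally discards absolute radiometry)."""
    img = image.astype(float)
    scaled = (img - img.mean()) / img.std() * target_std + target_mean
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Two hypothetical overlapping tiles with different gain and offset.
rng = np.random.default_rng(8)
scene = rng.normal(0, 1, (64, 64))
tile_a = scene * 20 + 100
tile_b = scene * 35 + 160
print(stretch_to_fixed_std(tile_a).mean(), stretch_to_fixed_std(tile_b).mean())
```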

  10. Blow-up solutions for L^2 supercritical gKdV equations with exactly k blow-up points

    NASA Astrophysics Data System (ADS)

    Lan, Yang

    2017-08-01

    In this paper we consider the slightly $L^2$-supercritical gKdV equations $\partial_t u + (u_{xx} + u\vert u\vert^{p-1})_x = 0$, with nonlinearity $5 < p < 5 + \varepsilon$ and $0 < \varepsilon \ll 1$. In the previous work of the author, we know that there exists a stable self-similar blow-up dynamics for slightly $L^2$-supercritical gKdV equations. Such solutions can be viewed as solutions with a single blow-up point. In this paper we will prove the existence of solutions with multiple blow-up points and give a description of the formation of the singularity near the blow-up time.

  11. Measuring Multiple Resistances Using Single-Point Excitation

    NASA Technical Reports Server (NTRS)

    Hall, Dan; Davies, Frank

    2009-01-01

    In a proposed method of determining the resistances of individual DC electrical devices connected in a series or parallel string, no attempt would be made to perform direct measurements on individual devices. Instead, (1) the devices would be instrumented by connecting reactive circuit components in parallel and/or in series with the devices, as appropriate; (2) a pulse or AC voltage excitation would be applied at a single point on the string; and (3) the transient or AC steady-state current response of the string would be measured at that point only. Each reactive component(s) associated with each device would be distinct in order to associate a unique time-dependent response with that device.

  12. Nanophotothermolysis of multiple scattered cancer cells with carbon nanotubes guided by time-resolved infrared thermal imaging

    PubMed Central

    Biris, Alexandru S.; Boldor, Dorin; Palmer, Jason; Monroe, William T.; Mahmood, Meena; Dervishi, Enkeleda; Xu, Yang; Li, Zhongrui; Galanzha, Ekaterina I.; Zharov, Vladimir P.

    2016-01-01

    Nanophotothermolysis with long laser pulses for treatment of scattered cancer cells and their clusters is introduced with the main focus on real-time monitoring of temperature dynamics inside and around individual cancer cells labeled with carbon nanotubes. This technique utilizes advanced time- and spatially-resolved thermal radiometry imaging for the visualization of laser-induced temperature distribution in multiple-point absorbing targets. The capability of this approach was demonstrated for monitoring of thermal effects under long laser exposure (from millisecond to seconds, wavelength 1064 nm, maximum power 1 W) of cervical cancer HeLa cells labeled with carbon nanotubes in vitro. The applications are discussed with a focus on the nanophotothermolysis of small tumors, tumor margins, or micrometastases under the guidance of near-IR and microwave radiometry. PMID:19405720

  13. Body sway, aim point fluctuation and performance in rifle shooters: inter- and intra-individual analysis.

    PubMed

    Ball, Kevin A; Best, Russell J; Wrigley, Tim V

    2003-07-01

    In this study, we examined the relationships between body sway, aim point fluctuation and performance in rifle shooting on an inter- and intra-individual basis. Six elite shooters performed 20 shots under competition conditions. For each shot, body sway parameters and four aim point fluctuation parameters were quantified for the time periods 5 s to shot, 3 s to shot and 1 s to shot. Three parameters were used to indicate performance. An AMTI LG6-4 force plate was used to measure body sway parameters, while a SCATT shooting analysis system was used to measure aim point fluctuation and shooting performance. Multiple regression analysis indicated that body sway was related to performance for four shooters. Also, body sway was related to aim point fluctuation for all shooters. These relationships were specific to the individual, with the strength of association, parameters of importance and time period of importance different for different shooters. Correlation analysis of significant regressions indicated that, as body sway increased, performance decreased and aim point fluctuation increased for most relationships. We conclude that body sway and aim point fluctuation are important in elite rifle shooting and performance errors are highly individual-specific at this standard. Individual analysis should be a priority when examining elite sports performance.

  14. Multiple types of motives don't multiply the motivation of West Point cadets.

    PubMed

    Wrzesniewski, Amy; Schwartz, Barry; Cong, Xiangyu; Kane, Michael; Omar, Audrey; Kolditz, Thomas

    2014-07-29

    Although people often assume that multiple motives for doing something will be more powerful and effective than a single motive, research suggests that different types of motives for the same action sometimes compete. More specifically, research suggests that instrumental motives, which are extrinsic to the activities at hand, can weaken internal motives, which are intrinsic to the activities at hand. We tested whether holding both instrumental and internal motives yields negative outcomes in a field context in which various motives occur naturally and long-term educational and career outcomes are at stake. We assessed the impact of the motives of over 10,000 West Point cadets over the period of a decade on whether they would become commissioned officers, extend their officer service beyond the minimum required period, and be selected for early career promotions. For each outcome, motivation internal to military service itself predicted positive outcomes, a relationship that was negatively affected when instrumental motives were also in evidence. These results suggest that holding multiple motives damages persistence and performance in educational and occupational contexts over long periods of time.

  15. Multiple types of motives don't multiply the motivation of West Point cadets

    PubMed Central

    Wrzesniewski, Amy; Schwartz, Barry; Cong, Xiangyu; Kane, Michael; Omar, Audrey; Kolditz, Thomas

    2014-01-01

    Although people often assume that multiple motives for doing something will be more powerful and effective than a single motive, research suggests that different types of motives for the same action sometimes compete. More specifically, research suggests that instrumental motives, which are extrinsic to the activities at hand, can weaken internal motives, which are intrinsic to the activities at hand. We tested whether holding both instrumental and internal motives yields negative outcomes in a field context in which various motives occur naturally and long-term educational and career outcomes are at stake. We assessed the impact of the motives of over 10,000 West Point cadets over the period of a decade on whether they would become commissioned officers, extend their officer service beyond the minimum required period, and be selected for early career promotions. For each outcome, motivation internal to military service itself predicted positive outcomes, a relationship that was negatively affected when instrumental motives were also in evidence. These results suggest that holding multiple motives damages persistence and performance in educational and occupational contexts over long periods of time. PMID:24982165

  16. Consistent and reproducible positioning in longitudinal imaging for phenotyping genetically modified swine

    NASA Astrophysics Data System (ADS)

    Hammond, Emily; Dilger, Samantha K. N.; Stoyles, Nicholas; Judisch, Alexandra; Morgan, John; Sieren, Jessica C.

    2015-03-01

    Recent growth of genetic disease models in swine has presented the opportunity to advance translation of developed imaging protocols, while characterizing the genotype to phenotype relationship. Repeated imaging with multiple clinical modalities provides non-invasive detection, diagnosis, and monitoring of disease to accomplish these goals; however, longitudinal scanning requires repeatable and reproducible positioning of the animals. A modular positioning unit was designed to provide a fixed, stable base for the anesthetized animal through transit and imaging. After ventilation and sedation, animals were placed supine in the unit and monitored for consistent vitals. Comprehensive imaging was performed with a computed tomography (CT) chest-abdomen-pelvis scan at each screening time point. Longitudinal images were rigidly registered, accounting for rotation, translation, and anisotropic scaling, and the skeleton was isolated using a basic thresholding algorithm. Assessment of alignment was quantified via eleven pairs of corresponding points on the skeleton, with the first time point as the reference. Results were obtained with five animals over five screening time points. The developed unit aided in skeletal alignment within an average of 13.13 +/- 6.7 mm for all five subjects, providing a strong foundation for developing qualitative and quantitative methods of disease tracking.
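
    As a hedged illustration of the alignment assessment described above, the sketch below computes the mean and standard deviation of Euclidean distances between corresponding skeletal landmark pairs from two time points. The landmark coordinates, their spread, and the Python/NumPy implementation are illustrative assumptions, not data or code from the study.

        import numpy as np

        def alignment_error(ref_points, follow_up_points):
            # Mean and SD of Euclidean distances between corresponding
            # skeletal landmarks; rows are (x, y, z) coordinates in mm.
            d = np.linalg.norm(ref_points - follow_up_points, axis=1)
            return d.mean(), d.std()

        # Hypothetical example: 11 landmark pairs from two screening time points.
        rng = np.random.default_rng(0)
        ref = rng.uniform(0, 300, size=(11, 3))
        follow_up = ref + rng.normal(0, 5, size=(11, 3))
        print(alignment_error(ref, follow_up))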

  17. Quantitative Detection of Nucleoside Analogues by Multi-enzyme Biosensors using Time-Resolved Kinetic Measurements.

    PubMed

    Muthu, Pravin; Lutz, Stefan

    2016-04-05

    Fast, simple and cost-effective methods for detecting and quantifying pharmaceutical agents in patients are highly sought after to replace equipment and labor-intensive analytical procedures. The development of new diagnostic technology including portable detection devices also enables point-of-care by non-specialists in resource-limited environments. We have focused on the detection and dose monitoring of nucleoside analogues used in viral and cancer therapies. Using deoxyribonucleoside kinases (dNKs) as biosensors, our chemometric model compares observed time-resolved kinetics of unknown analytes to known substrate interactions across multiple enzymes. The resulting dataset can simultaneously identify and quantify multiple nucleosides and nucleoside analogues in complex sample mixtures. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques and then study sensor cooperation, which improves the throughput and reliability of an underwater network. Robust point-to-point communication has become increasingly critical in several military and civilian applications of underwater networks. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time scale distortion on each path is assumed to be the same (a single scale channel model, in contrast to a more general multi scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi scale model for the underwater channel and assume that single scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and broadcast channels, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  19. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  20. Metabolic effects of pulmonary obstruction on myocardial functioning: a pilot study using multiple time-point 18F-FDG-PET imaging.

    PubMed

    Choi, Grace G; Han, Yuchi; Weston, Brian; Ciftci, Esra; Werner, Thomas J; Torigian, Drew; Salavati, Ali; Alavi, Abass

    2015-01-01

    The aim of this study was to evaluate fluorine-18 fluorodeoxyglucose (18F-FDG) uptake in the right ventricle (RV) of patients with chronic obstructive pulmonary disease (COPD) and to characterize the variability of 18F-FDG uptake in the RV at different time points following radiotracer administration using PET/computerized tomography (CT). Impaired RV systolic function, RV hypertrophy, and RV dilation are associated with increases in mean pulmonary arterial pressure in patients with COPD. Metabolic changes in the RV using 18F-FDG-PET images 2 and 3 h after tracer injection have not yet been investigated. Twenty-five patients with clinical suspicion of lung cancer underwent 18F-FDG-PET/CT imaging at 1, 2, and 3 h after tracer injection. Standardized uptake values (SUVs) and volumes of RV were recorded from transaxial sections to quantify the metabolic activity. The SUV of RV was higher in patients with COPD stages 1-3 as compared with that in patients with COPD stage 0. RV SUV was inversely correlated with FEV1/FVC and with pack-years of smoking at 1 h after 18F-FDG injection. In the majority of patients, 18F-FDG activity in RV decreased over time. There was no significant difference in the RV myocardial free wall and chamber volume on the basis of COPD status. The severity of lung obstruction and pack-years of smoking correlate with the level of 18F-FDG uptake in the RV myocardium, suggesting that there may be metabolic changes in the RV associated with lung obstruction that can be detected noninvasively using 18F-FDG-PET/CT. Multiple time-point images of the RV did not yield any additional value in this study.
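
    For context, the standardized uptake values referred to above are conventionally computed as the tissue activity concentration normalized by injected dose per unit body weight. The sketch below implements that generic body-weight SUV formula with hypothetical numbers; it is not the study's analysis pipeline.

        def suv_body_weight(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
            # Body-weight-normalized SUV; assumes tissue density of ~1 g/mL,
            # so kBq/mL is treated as kBq/g.
            injected_kbq = injected_dose_mbq * 1000.0
            body_weight_g = body_weight_kg * 1000.0
            return tissue_kbq_per_ml / (injected_kbq / body_weight_g)

        # Hypothetical RV uptake at 1 h after injecting 370 MBq in an 80 kg patient.
        print(suv_body_weight(5.2, 370.0, 80.0))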

  1. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.
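
    A minimal sketch of the idea of per-point neighborhoods at multiple scales: the covariance eigenvalues of each point's neighborhood are collected at several search radii and concatenated into a feature vector. The radii, the random point cloud, and the SciPy-based implementation are assumptions for illustration and do not reproduce the authors' feature set.

        import numpy as np
        from scipy.spatial import cKDTree

        def multiscale_eigen_features(points, radii=(0.25, 0.5, 1.0)):
            # For each point and each radius, the sorted eigenvalues of the
            # neighborhood covariance describe local shape (linear/planar/volumetric).
            tree = cKDTree(points)
            per_scale = []
            for r in radii:
                feats = np.zeros((len(points), 3))
                for i, nbrs in enumerate(tree.query_ball_point(points, r)):
                    if len(nbrs) >= 3:
                        cov = np.cov(points[nbrs].T)
                        feats[i] = np.sort(np.linalg.eigvalsh(cov))[::-1]
                per_scale.append(feats)
            return np.hstack(per_scale)

        pts = np.random.rand(1000, 3)                # hypothetical scan fragment
        print(multiscale_eigen_features(pts).shape)  # (1000, 9)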

  2. Peripheral immunophenotype and viral promoter variants during the asymptomatic phase of feline immunodeficiency virus infection.

    PubMed

    Murphy, B; Hillman, C; McDonnel, S

    2014-01-22

    Feline immunodeficiency virus (FIV)-infected cats enter a clinically asymptomatic phase during chronic infection. Despite the lack of overt clinical disease, the asymptomatic phase is characterized by persistent immunologic impairment. In the peripheral blood obtained from cats experimentally infected with FIV-C for approximately 5 years, we identified a persistent inversion of the CD4/CD8 ratio. We cloned and sequenced the FIV-C long terminal repeat containing the viral promoter from cells infected with the inoculating virus and from in vivo-derived peripheral blood mononuclear cells and CD4 T cells isolated at multiple time points throughout the asymptomatic phase. Relative to the inoculating virus, viral sequences amplified from cells isolated from all of the infected animals demonstrated multiple single nucleotide mutations and a short deletion within the viral U3, R and U5 regions. A transcriptionally inactivating proviral mutation in the U3 promoter AP-1 site was identified at multiple time points from all of the infected animals but not within cell-associated viral RNA. In contrast, no mutations were identified within the sequence of the viral dUTPase gene amplified from PBMC isolated at approximately 5 years post-infection relative to the inoculating sequence. The possible implications of these mutations to viral pathogenesis are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Longitudinal Multiple Sclerosis Lesion Segmentation: Resource & Challenge

    PubMed Central

    Carass, Aaron; Roy, Snehashis; Jog, Amod; Cuzzocreo, Jennifer L.; Magrath, Elizabeth; Gherman, Adrian; Button, Julia; Nguyen, James; Prados, Ferran; Sudre, Carole H.; Cardoso, Manuel Jorge; Cawley, Niamh; Ciccarelli, Olga; Wheeler-Kingshott, Claudia A. M.; Ourselin, Sébastien; Catanese, Laurence; Deshpande, Hrishikesh; Maurel, Pierre; Commowick, Olivier; Barillot, Christian; Tomas-Fernandez, Xavier; Warfield, Simon K.; Vaidya, Suthirth; Chunduru, Abhijith; Muthuganapathy, Ramanathan; Krishnamurthi, Ganapathy; Jesson, Andrew; Arbel, Tal; Maier, Oskar; Handels, Heinz; Iheme, Leonardo O.; Unay, Devrim; Jain, Saurabh; Sima, Diana M.; Smeets, Dirk; Ghafoorian, Mohsen; Platel, Bram; Birenbaum, Ariel; Greenspan, Hayit; Bazin, Pierre-Louis; Calabresi, Peter A.; Crainiceanu, Ciprian M.; Ellingsen, Lotta M.; Reich, Daniel S.; Prince, Jerry L.; Pham, Dzung L.

    2017-01-01

    In conjunction with the ISBI 2015 conference, we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants. The training data consisted of five subjects with a mean of 4.4 time-points, and test data of fourteen subjects with a mean of 4.4 time-points. All 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters. Eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge, with ten teams presenting their results at the conference. We present a quantitative evaluation comparing the consistency of the two raters as well as exploring the performance of the eleven submitted results in addition to three other lesion segmentation algorithms. The challenge presented three unique opportunities: 1) the sharing of a rich data set; 2) collaboration and comparison of the various avenues of research being pursued in the community; and 3) a review and refinement of the evaluation metrics currently in use. We report on the performance of the challenge participants, as well as the construction and evaluation of a consensus delineation. The image data and manual delineations will continue to be available for download, through an evaluation website1 as a resource for future researchers in the area. This data resource provides a platform to compare existing methods in a fair and consistent manner to each other and multiple manual raters. PMID:28087490

  4. Dynamic changes in global microRNAome and transcriptome reveal complex miRNA-mRNA regulated host response to Japanese Encephalitis Virus in microglial cells

    PubMed Central

    Kumari, Bharti; Jain, Pratistha; Das, Shaoli; Ghosal, Suman; Hazra, Bibhabasu; Trivedi, Ashish Chandra; Basu, Anirban; Chakrabarti, Jayprokas; Vrati, Sudhanshu; Banerjee, Arup

    2016-01-01

    Microglia cells in the brain play an essential role during Japanese Encephalitis Virus (JEV) infection, which may lead to changes in microRNA (miRNA) and mRNA profiles. These changes may together control disease outcome. Using the Affymetrix microarray platform, we profiled cellular miRNA and mRNA expression at multiple time points during viral infection in human microglial (CHME3) cells. In silico analysis of microarray data revealed a phased pattern of miRNA expression associated with JEV replication and provided unique signatures of infection. Target prediction and pathway enrichment analysis identified anticorrelation between differentially expressed miRNAs and gene expression at multiple time points, which ultimately affected diverse signaling pathways, including Notch signaling, in microglia. Activation of the Notch pathway during JEV infection was demonstrated in vitro and in vivo. The expression of a subset of miRNAs that target multiple genes in Notch signaling pathways was suppressed, and their overexpression could affect the JEV-induced immune response. Further analysis provided evidence for the possible presence of cellular competing endogenous RNA (ceRNA) associated with the innate immune response. Collectively, our data provide a uniquely comprehensive view of the changes in host miRNAs induced by JEV during cellular infection and identify the Notch pathway as a modulator of microglia-mediated inflammation. PMID:26838068

  5. Dynamic changes in global microRNAome and transcriptome reveal complex miRNA-mRNA regulated host response to Japanese Encephalitis Virus in microglial cells.

    PubMed

    Kumari, Bharti; Jain, Pratistha; Das, Shaoli; Ghosal, Suman; Hazra, Bibhabasu; Trivedi, Ashish Chandra; Basu, Anirban; Chakrabarti, Jayprokas; Vrati, Sudhanshu; Banerjee, Arup

    2016-02-03

    Microglia cells in the brain play an essential role during Japanese Encephalitis Virus (JEV) infection, which may lead to changes in microRNA (miRNA) and mRNA profiles. These changes may together control disease outcome. Using the Affymetrix microarray platform, we profiled cellular miRNA and mRNA expression at multiple time points during viral infection in human microglial (CHME3) cells. In silico analysis of microarray data revealed a phased pattern of miRNA expression associated with JEV replication and provided unique signatures of infection. Target prediction and pathway enrichment analysis identified anticorrelation between differentially expressed miRNAs and gene expression at multiple time points, which ultimately affected diverse signaling pathways, including Notch signaling, in microglia. Activation of the Notch pathway during JEV infection was demonstrated in vitro and in vivo. The expression of a subset of miRNAs that target multiple genes in Notch signaling pathways was suppressed, and their overexpression could affect the JEV-induced immune response. Further analysis provided evidence for the possible presence of cellular competing endogenous RNA (ceRNA) associated with the innate immune response. Collectively, our data provide a uniquely comprehensive view of the changes in host miRNAs induced by JEV during cellular infection and identify the Notch pathway as a modulator of microglia-mediated inflammation.

  6. Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.

    PubMed

    Steel, Ruth Irene

    2015-01-01

    Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
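
    A minimal sketch of the weighting step described above: each observed path's distance is weighted by the proportion of the group that used it, and the weighted distances are summed into the daily travel distance. The path distances and usage fractions are hypothetical.

        def mtp_daily_travel_distance(paths):
            # paths: iterable of (distance_m, fraction_of_group_using_path);
            # fractions are assumed to sum to 1 for each travel bout.
            return sum(distance * fraction for distance, fraction in paths)

        # Hypothetical bout: 60% of the group took a 420 m path, 40% a 510 m path.
        print(mtp_daily_travel_distance([(420.0, 0.6), (510.0, 0.4)]))  # 456.0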

  7. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE PAGES

    Lockhart, M.; Henzlova, D.; Croft, S.; ...

    2017-09-20

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for broad range of count rates available in practical applications.

  8. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, M.; Henzlova, D.; Croft, S.

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for broad range of count rates available in practical applications.

  9. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
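
    As a hedged sketch of the surrogate idea only (not the paper's Bayesian experimental-design criterion for selecting multiple sampling points), the code below fits a Gaussian process to a small design of an assumed limit-state function and then estimates the failure probability by cheap Monte Carlo on the surrogate instead of the expensive model. The limit-state function, design size, and domain are invented for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def g(x):
            # Hypothetical "expensive" limit state: failure when g(x) < 0.
            return 1.5 - np.sum(x**2, axis=1)

        rng = np.random.default_rng(1)
        X_train = rng.uniform(-2.0, 2.0, size=(30, 2))      # small initial design
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      normalize_y=True).fit(X_train, g(X_train))

        # Monte Carlo on the cheap surrogate rather than the expensive model.
        X_mc = rng.uniform(-2.0, 2.0, size=(20000, 2))
        print("estimated failure probability:", np.mean(gp.predict(X_mc) < 0.0))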

  10. Interaction of pulsating and spinning waves in nonadiabatic flame propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booty, M.R.; Margolis, S.B.; Matkowsky, B.J.

    1987-12-01

    The authors consider nonadiabatic premixed flame propagation in a long cylindrical channel. A steadily propagating planar flame exists for heat losses below a critical value. It is stable provided that the Lewis number and the volumetric heat loss coefficient are sufficiently small. At critical values of these parameters, bifurcated states, corresponding to time-periodic pulsating cellular flames, emanate from the steadily propagating solution. The authors analyze the problem in a neighborhood of a multiple primary bifurcation point. By varying the radius of the channel, they split the multiple bifurcation point and show that various types of stable periodic and quasi-periodic pulsating flames can arise as secondary, tertiary, and quaternary bifurcations. Their analysis describes several types of spinning and pulsating flame propagation which have been experimentally observed in nonadiabatic flames, and also describes additional quasi-periodic modes of burning which have yet to be documented experimentally.

  11. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time; this constitutes a first approach to automating processes in mapping applications such as geometric correction, orthophoto creation, and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, finding optimization values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
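
    A hedged sketch of tuning SIFT for multi-temporal tie-point extraction with OpenCV: non-default detector parameters (here biased toward fewer, larger, more stable features) plus Lowe's ratio test for matching. The specific parameter values and ratio threshold are illustrative assumptions, not the optimized values reported by the authors.

        import cv2

        def match_tie_points(img1, img2, n_octave_layers=5, contrast_thr=0.02,
                             edge_thr=15, sigma=1.2, ratio=0.75):
            # Detect and describe features with tuned SIFT parameters.
            sift = cv2.SIFT_create(nOctaveLayers=n_octave_layers,
                                   contrastThreshold=contrast_thr,
                                   edgeThreshold=edge_thr, sigma=sigma)
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            # Keep only matches that pass Lowe's ratio test.
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                    if m.distance < ratio * n.distance]
            return kp1, kp2, good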

  12. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.

  13. The Relation between Binge Drinking and Academic Performance: Considering the Mediating Effects of Academic Involvement

    ERIC Educational Resources Information Center

    An, Brian P.; Loes, Chad N.; Trolian, Teniell L.

    2017-01-01

    Using longitudinal data from multiple institutions, we focused on the relation between binge drinking and academic performance. Binge drinking exerts a negative influence on grade point average, even after accounting for a host of precollege confounding variables. Furthermore, the number of times a student binge drinks in college is less…

  14. Dynamic Debates: An Analysis of Group Polarization over Time on Twitter

    ERIC Educational Resources Information Center

    Yardi, Sarita; Boyd, Danah

    2010-01-01

    The principle of homophily says that people associate with other groups of people who are mostly like themselves. Many online communities are structured around groups of socially similar individuals. On Twitter, however, people are exposed to multiple, diverse points of view through the public timeline. The authors captured 30,000 tweets about the…

  15. NLS Handbook, 2005. National Longitudinal Surveys

    ERIC Educational Resources Information Center

    Bureau of Labor Statistics, 2006

    2006-01-01

    The National Longitudinal Surveys (NLS), sponsored by the U.S. Bureau of Labor Statistics (BLS), are a set of surveys designed to gather information at multiple points in time on the labor market experiences of groups of men and women. Each of the cohorts has been selected to represent all people living in the United States at the initial…

  16. Development of a Whole-Body Haptic Sensor with Multiple Supporting Points and Its Application to a Manipulator

    NASA Astrophysics Data System (ADS)

    Hanyu, Ryosuke; Tsuji, Toshiaki

    This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single supporting point structure has previously been developed by the present authors. However, the system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points was proposed and its performance was verified. In this paper, the basic concept of the mechanism is first introduced. Next, an evaluation of the proposed method, performed by conducting some experiments, is presented.
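
    A minimal sketch of how a contact location can be recovered from several supporting-point force sensors without any sensor array, assuming static equilibrium and a single contact force acting normal to the end-effector plane (z = 0). The sensor layout, readings, and NumPy implementation are hypothetical and simplified relative to the paper's mechanism.

        import numpy as np

        def contact_point(sensor_positions, sensor_forces):
            # sensor_positions: (k, 3) sensor locations [m];
            # sensor_forces:    (k, 3) forces transmitted through each sensor [N].
            F = sensor_forces.sum(axis=0)                              # net force
            M = np.cross(sensor_positions, sensor_forces).sum(axis=0)  # net moment
            # For a single normal force at (x, y, 0): M = (y*Fz, -x*Fz, 0).
            x, y = -M[1] / F[2], M[0] / F[2]
            return np.array([x, y, 0.0]), F

        # Hypothetical three-supporting-point layout and sensor readings.
        pos = np.array([[0.10, 0.0, 0.0], [-0.05, 0.087, 0.0], [-0.05, -0.087, 0.0]])
        frc = np.array([[0.0, 0.0, 3.2], [0.0, 0.0, 4.1], [0.0, 0.0, 2.7]])
        print(contact_point(pos, frc))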

  17. Simultaneous multiple view high resolution surface geometry acquisition using structured light and mirrors.

    PubMed

    Basevi, Hector R A; Guggenheim, James A; Dehghani, Hamid; Styles, Iain B

    2013-03-25

    Knowledge of the surface geometry of an imaging subject is important in many applications. This information can be obtained via a number of different techniques, including time of flight imaging, photogrammetry, and fringe projection profilometry. Existing systems may have restrictions on instrument geometry, require expensive optics, or require moving parts in order to image the full surface of the subject. An inexpensive generalised fringe projection profilometry system is proposed that can account for arbitrarily placed components and use mirrors to expand the field of view. It simultaneously acquires multiple views of an imaging subject, producing a cloud of points that lie on its surface, which can then be processed to form a three dimensional model. A prototype of this system was integrated into an existing Diffuse Optical Tomography and Bioluminescence Tomography small animal imaging system and used to image objects including a mouse-shaped plastic phantom, a mouse cadaver, and a coin. A surface mesh generated from surface capture data of the mouse-shaped plastic phantom was compared with ideal surface points provided by the phantom manufacturer, and 50% of points were found to lie within 0.1mm of the surface mesh, 82% of points were found to lie within 0.2mm of the surface mesh, and 96% of points were found to lie within 0.4mm of the surface mesh.

  18. Alternative methods for CYP2D6 phenotyping: comparison of dextromethorphan metabolic ratios from AUC, single point plasma, and urine.

    PubMed

    Chen, Rui; Wang, Haotian; Shi, Jun; Hu, Pei

    2016-05-01

    CYP2D6 is a highly polymorphic enzyme. Determining its phenotype before CYP2D6 substrate treatment can avoid dose-dependent adverse events or therapeutic failures. Alternative phenotyping methods of CYP2D6 were compared to evaluate the appropriate and precise time points for phenotyping after single-dose and multiple-dose administration of 30-mg controlled-release (CR) dextromethorphan (DM) and to explore the antimodes for potential sampling methods. This was an open-label, single- and multiple-dose study. 21 subjects were assigned to receive a single dose of CR DM 30 mg orally, followed by a 3-day washout period prior to oral administration of CR DM 30 mg every 12 hours for 6 days. Metabolic ratios (MRs) from AUC∞ after single dosing and from AUC0-12h at steady state were taken as the gold standard. The correlations of metabolic ratios of DM to dextrorphan (MRDM/DX) values based on different phenotyping methods were assessed. Linear regression formulas were derived to calculate the antimodes for potential sampling methods. In the single-dose part of the study, statistically significant correlations were found between MRDM/DX from AUC∞ and from serial plasma points from 1 to 30 hours or from urine (all p-values < 0.001). In the multiple-dose part, statistically significant correlations were found between MRDM/DX from AUC0-12h on day 6 and MRDM/DX from serial plasma points from 0 to 36 hours after the last dosing (all p-values < 0.001). Based on the reported urinary antimode and linear regression analysis, the antimodes of AUC and plasma points were derived to profile the trend of antimodes as the drug concentrations changed. MRDM/DX from plasma points had good correlations with MRDM/DX from AUC. Plasma points from 1 to 30 hours after a single dose of 30-mg CR DM and any plasma point at steady state after multiple doses of CR DM could potentially be used for phenotyping of CYP2D6.
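
    For illustration, the sketch below computes MR(DM/DX) both from a single paired plasma concentration and from AUCs via the trapezoidal rule. The concentrations, sampling times, and units are hypothetical and assumed consistent; this is not the study's regression analysis.

        import numpy as np

        def metabolic_ratio(dm_conc, dx_conc):
            # Single-point MR(DM/DX) from paired dextromethorphan/dextrorphan levels.
            return dm_conc / dx_conc

        def auc_metabolic_ratio(times_h, dm_conc, dx_conc):
            # "Gold standard"-style MR from AUCs (trapezoidal rule over the profile).
            return np.trapz(dm_conc, times_h) / np.trapz(dx_conc, times_h)

        # Hypothetical 4 h plasma point and a sparse concentration profile.
        print(metabolic_ratio(1.8, 12.0))
        print(auc_metabolic_ratio([1, 2, 4, 8, 12],
                                  [1.0, 2.0, 1.5, 0.8, 0.4],
                                  [5.0, 11.0, 13.0, 9.0, 6.0]))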

  19. The goal of ape pointing

    PubMed Central

    Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in “please look at that”); another is that they point mainly as an action request (such as “can you give that to me?”). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing. PMID:29694358

  20. Development of Universal Controller Architecture for SiC Based Power Electronic Building Blocks

    DTIC Science & Technology

    2017-10-30

    time control and control network routing and the other for non-real-time instrumentation and monitoring. The two subsystems are isolated and share...directly to the processor without any software intervention. We use a non-real-time 1 Gb/s Ethernet interface for monitoring and control of the module... [A latency table comparing traffic classes (802.1W spanning tree, multiple-point private line) did not survive extraction.]

  1. Multiple magma emplacement and its effect on the superficial deformation: hints from analogue models

    NASA Astrophysics Data System (ADS)

    Montanari, Domenico; Bonini, Marco; Corti, Giacomo; del Ventisette, Chiara

    2017-04-01

    To test the effect exerted by multiple magma emplacement on the deformation pattern, we have run analogue models with synchronous, as well as diachronous, magma injection from different, aligned inlets. The distance between injection points, as well as the activation in time of injection points, was varied for each model. Our model results show how the position and activation in time of injection points (which reproduce multiple magma batches in nature) strongly influence model evolution. In the case of synchronous injection at different inlets, the intrusions and associated surface deformation were elongated. Forced folds and annular bounding reverse faults were quite elliptical, with the main axis of the elongated dome trending sub-parallel to the direction of the magma input points. Model results also indicate that injection from multiple aligned sources could reproduce the same features as systems associated with planar feeder dikes, thereby suggesting that caution should be taken when trying to infer the feeding areas on the basis of the deformation features observed at the surface or in seismic profiles. Diachronous injection from different injection points showed that the deformation observed at the surface does not necessarily reflect the location and/or geometry of their feeders. Most notably, these experiments suggest that coeval magma injection from different sources favors the lateral migration of magma rather than vertical growth, promoting the development of laterally interconnected intrusions. Recently, some authors (Magee et al., 2014, 2016; Schofield et al., 2015) have suggested, based on seismic reflection data analysis, that interconnected sills and inclined sheets can facilitate the transport of magma over great vertical distances and laterally for large distances. Intrusions and volcanoes fed by sill complexes may thus be laterally offset significantly from the melt source. Our model results strongly support these findings by reproducing in the laboratory a strong lateral magma migration and suggesting a possible mechanism. The models also confirmed that lateral magma migration could take place with little or no accompanying surface deformation. The research leading to these results has received funding from the European Community's Seventh Framework Programme under grant agreement No. 608553 (Project IMAGE). References: Magee et al., 2014. Basin Research, v. 26, p. 85-105, doi:10.1111/bre.12044. Magee et al., 2016. Geosphere, v. 12, p. 809-841, ISSN: 1553-040X. Schofield et al., 2015. Basin Research, v. 29, p. 41-63, doi:10.1111/bre.12164.

  2. Bladder dysfunction in multiple sclerosis: a 6-year follow-up study.

    PubMed

    Kisic Tepavcevic, Darija; Pekmezovic, Tatjana; Dujmovic Basuroski, Irena; Mesaros, Sarlota; Drulovic, Jelena

    2017-03-01

    Bladder dysfunction (BD) is the most common autonomic disturbance in multiple sclerosis, but often overlooked and undertreated. The purpose of this longitudinal study was to explore the changes in the frequency of BD symptoms in MS cohort after a period of 3 and 6 years of follow-up, as well as to investigate the correlations between the presence of BD symptoms and both clinical characteristics and the health-related quality of life (HRQoL) at each subsequent point of estimation. The study population comprises a cohort of 93 patients with MS (McDonald's criteria, 2001). At each time point (baseline, and at the 3- and 6-year follow-up) of estimation, Expanded Disability Status Scale, Hamilton Rating Scale for Depression, Fatigue Severity Scale, Szasz Sexual Functioning Scale and HRQoL (measured by MSQoL-54) were assessed. The proportion of patients with at least one symptom of BD significantly increased over time, for both men and women (from 48.1% at baseline to 51.9% after 3 years and to 71.4% after 6 years of follow-up for males and from 45.5% at baseline to 50.0% after 3 years and to 66.7% after 6 years of follow-up for females). The most common BD problem was urgency of urination. The presence of BD was statistically significantly associated with higher level of physical disability, sexual dysfunction and HRQoL at each point of follow-up, for both men and women. Our results suggested outstanding frequency of BD in patients with MS, with increasing tendency over time.

  3. Prediction of Therapy Tumor-Absorbed Dose Estimates in I-131 Radioimmunotherapy Using Tracer Data Via a Mixed-Model Fit to Time Activity

    PubMed Central

    Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.

    2012-01-01

    Background: For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods: The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results: The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions: The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
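
    A hedged sketch of fitting a biexponential uptake-and-clearance curve to a single tumor's time-activity samples by ordinary least squares; the study instead estimated curves jointly across subjects with a mixed model, and the sample times, activity values, and starting parameters below are assumptions for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, a, k_clear, k_uptake):
            # Uptake-then-clearance curve: A(t) = a*(exp(-k_clear*t) - exp(-k_uptake*t)).
            return a * (np.exp(-k_clear * t) - np.exp(-k_uptake * t))

        # Hypothetical time-activity samples (days, arbitrary activity units).
        t = np.array([0.25, 1.0, 2.0, 4.0, 6.0])
        y = np.array([0.8, 1.6, 1.5, 0.9, 0.5])
        params, _ = curve_fit(biexp, t, y, p0=[2.0, 0.3, 3.0], maxfev=10000)

        # Area under the fitted curve, proportional to the tumor residence time.
        grid = np.linspace(0.0, 14.0, 200)
        print(params, np.trapz(biexp(grid, *params), grid))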

  4. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which makes the comparison of two CADe systems a multiple-comparison problem. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods both in terms of the FWER and power.
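
    For reference, minimal implementations of the Bonferroni and (unadjusted) Hochberg step-up procedures are sketched below; the correlation-adjusted step-up method evaluated in the paper is not reproduced here, and the p-values are hypothetical per-operating-point test results.

        import numpy as np

        def bonferroni(pvals, alpha=0.05):
            # Reject H_i whenever p_i <= alpha / m.
            p = np.asarray(pvals)
            return p <= alpha / p.size

        def hochberg_step_up(pvals, alpha=0.05):
            # Find the largest k with p_(k) <= alpha / (m - k + 1) and reject
            # the hypotheses with the k smallest p-values.
            p = np.asarray(pvals)
            order = np.argsort(p)
            reject = np.zeros(p.size, dtype=bool)
            for k in range(p.size, 0, -1):
                if p[order[k - 1]] <= alpha / (p.size - k + 1):
                    reject[order[:k]] = True
                    break
            return reject

        pvals = [0.012, 0.031, 0.044, 0.20]
        print(bonferroni(pvals), hochberg_step_up(pvals))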

  5. HABIT, a Randomized Feasibility Trial to Increase Hydroxyurea Adherence, Suggests Improved Health-Related Quality of Life in Youths with Sickle Cell Disease.

    PubMed

    Smaldone, Arlene; Findley, Sally; Manwani, Deepa; Jia, Haomiao; Green, Nancy S

    2018-06-01

    To examine the effect of a community health worker (CHW) intervention, augmented by tailored text messages, on adherence to hydroxyurea therapy in youths with sickle cell disease, as well as on generic and disease-specific health-related quality of life (HrQL) and youth-parent self-management responsibility concordance. We conducted a 2-site randomized controlled feasibility study (Hydroxyurea Adherence for Personal Best in Sickle Cell Treatment [HABIT]) with 2:1 intervention allocation. Youths and parents participated as dyads. Intervention dyads received CHW visits and text message reminders. Data were analyzed using descriptive statistics, the Wilcoxon signed-rank test, and growth models adjusting for group assignment, time, and multiple comparisons. Changes in outcomes from 0 to 6 months were compared with their respective minimal clinically important differences. A total of 28 dyads (mean age of youths, 14.3 ± 2.6 years; 50% Hispanic) participated (18 in the intervention group, 10 in the control group), with 10.7% attrition. Accounting for group assignment, time, and multiple comparisons, at 6 months intervention youths reported improved generic HrQL total score (9.8 points; 95% CI, 0.4-19.2) and Emotions subscale score (15.0 points; 95% CI, 1.6-28.4); improved disease-specific subscale scores for Worry I (30.0 points; 95% CI, 8.5-51.5), Emotions (37.0 points, 95% CI, 9.4-64.5), and Communication I (17.8 points; 95% CI, 0.5-35.1); and 3-month dyad self-management responsibility concordance (3.5 points; 95% CI, -0.2 to 7.1). There were no differences in parent proxy-reported HrQL measures at 6 months. These findings add to research examining effects of behavioral interventions on HrQL outcomes in youths with sickle cell disease. ClinicalTrials.gov: NCT02029742. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. DTWscore: differential expression and cell clustering analysis for time-series single-cell RNA-seq data.

    PubMed

    Wang, Zhuo; Jin, Shuilin; Liu, Guiyou; Zhang, Xiurui; Wang, Nan; Wu, Deliang; Hu, Yang; Zhang, Chiping; Jiang, Qinghua; Xu, Li; Wang, Yadong

    2017-05-23

    The development of single-cell RNA sequencing has enabled profound discoveries in biology, ranging from the dissection of the composition of complex tissues to the identification of novel cell types and dynamics in some specialized cellular environments. However, the large-scale generation of single-cell RNA-seq (scRNA-seq) data collected at multiple time points remains a challenge for the effective measurement of gene expression patterns in transcriptome analysis. We present an algorithm based on the Dynamic Time Warping score (DTWscore), combined with time-series data, that enables the detection of gene expression changes across scRNA-seq samples and the recovery of potential cell types from complex mixtures of multiple cell types. The DTWscore successfully classifies cells of different types using the most highly variable genes from time-series scRNA-seq data. The study was confined to methods that are implemented and available within the R framework. Sample datasets and R packages are available at https://github.com/xiaoxiaoxier/DTWscore .
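
    A minimal sketch of the dynamic time warping distance that underlies a DTW-based score, applied to two hypothetical gene-expression trajectories; it is a textbook O(nm) implementation, not the DTWscore package itself.

        import numpy as np

        def dtw_distance(x, y):
            # Classic dynamic-programming DTW with absolute-difference local cost.
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Hypothetical expression of one gene in two cells across five time points.
        print(dtw_distance([0.1, 0.9, 2.0, 1.2, 0.3], [0.2, 0.4, 1.8, 1.9, 0.5]))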

  7. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
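
    As a hedged analogue of the per-frame step described above (not the patented method itself), the sketch below uses OpenCV's RANSAC PnP solver to recover a camera pose consistent with a subset of 2D-3D feature correspondences; the camera matrix, reprojection threshold, and synthetic points are assumptions.

        import cv2
        import numpy as np

        def estimate_pose(model_pts_3d, image_pts_2d, K):
            # Robustly fit a projection of the 3D model onto the image features,
            # discarding correspondences inconsistent with the consensus pose.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                model_pts_3d.astype(np.float64), image_pts_2d.astype(np.float64),
                K, None, reprojectionError=3.0)
            return ok, rvec, tvec, inliers

        # Synthetic check: project known 3D points with a known pose, then recover it.
        K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
        pts3d = np.random.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 5.0])
        rvec_true = np.array([[0.10], [0.20], [0.05]])
        tvec_true = np.array([[0.20], [-0.10], [0.30]])
        pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)
        print(estimate_pose(pts3d, pts2d.reshape(-1, 2), K))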

  8. The evaluation of small-sided games as a talent identification tool in highly trained prepubertal soccer players.

    PubMed

    Fenner, Jonathan S J; Iga, John; Unnithan, Viswanath

    2016-10-01

    The aim of this study was to evaluate physiological and technical attributes of prepubertal soccer players during multiple small-sided games (SSGs), and determine if SSGs can act as a talent identification tool. Sixteen highly trained U10 soccer players participated and were separated into two groups of eight. Each group played six small-sided (4 vs. 4) matches of 5-min duration. Each player was awarded total points for the match result and goals scored. A game technical scoring chart was used to rate each player's performance during each game. Time-motion characteristics were measured using micromechanical devices. Total points had a very large significant relationship with game technical scoring chart (r = 0.758, P < 0.001). High-speed running distance had a significantly large correlation with game technical scoring chart (r = 0.547, P < 0.05). Total distance covered had a significant and moderate correlation with game technical scoring chart (r = 0.545, P < 0.05) and total points (r = 0.438, P < 0.05). The results demonstrated a large agreement between the highest-rated players and success in multiple SSGs, possibly due to higher-rated players covering larger distances in total and at high speed. Consequently, multiple SSGs could be used to identify the more talented prepubertal soccer players.

  9. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
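
    A minimal sketch of the standard linear-algebra route to mean first-passage times in a finite discrete-time Markov chain, solving (I - Q) tau = 1 on the non-target states; this is not the graph transformation algorithm of the paper, and the 3-state chain is hypothetical.

        import numpy as np

        def mean_first_passage_times(P, targets):
            # Expected number of steps to hit the target set from each state.
            n = P.shape[0]
            others = [s for s in range(n) if s not in set(targets)]
            Q = P[np.ix_(others, others)]      # transitions among non-target states
            tau = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
            t = np.zeros(n)
            t[others] = tau
            return t

        # Hypothetical 3-state chain; expected number of steps to reach state 2.
        P = np.array([[0.5, 0.4, 0.1],
                      [0.3, 0.5, 0.2],
                      [0.0, 0.0, 1.0]])
        print(mean_first_passage_times(P, targets=[2]))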

  10. Extremal values of the sojourn time

    NASA Astrophysics Data System (ADS)

    Astaburuaga, M. A.; Cortés, V. H.; Duclos, P.

    2010-11-01

    Consider a self-adjoint operator $H$ on a separable Hilbert space $\mathcal{H}$ with non-trivial absolutely continuous component. We study the general properties of the real-valued functional $\tau_H(\psi) = \int_{\mathbb{R}} |(e^{-itH}\psi, \psi)|^2 \, dt$, which in quantum mechanics represents the sojourn time (or life time) of an initial state $\psi \in \mathcal{H}$. We characterize the critical points of the sojourn time $\tau_X$ of the operator of multiplication by $x$ in $L^2(\mathbb{R})$, and prove that it attains a global maximum in the unit sphere of the Sobolev space $W^{1,2}(\mathbb{R})$.

  11. Historical dynamics in ecosystem service bundles.

    PubMed

    Renard, Delphine; Rhemtulla, Jeanine M; Bennett, Elena M

    2015-10-27

    Managing multiple ecosystem services (ES), including addressing trade-offs between services and preventing ecological surprises, is among the most pressing areas for sustainability research. These challenges require ES research to go beyond the currently common approach of snapshot studies limited to one or two services at a single point in time. We used a spatiotemporal approach to examine changes in nine ES and their relationships from 1971 to 2006 across 131 municipalities in a mixed-use landscape in Quebec, Canada. We show how an approach that incorporates time and space can improve our understanding of ES dynamics. We found an increase in the provision of most services through time; however, provision of ES was not uniformly enhanced at all locations. Instead, each municipality specialized in providing a bundle (set of positively correlated ES) dominated by just a few services. The trajectory of bundle formation was related to changes in agricultural policy and global trends; local biophysical and socioeconomic characteristics explained the bundles' increasing spatial clustering. Relationships between services varied through time, with some provisioning and cultural services shifting from a trade-off or no relationship in 1971 to an apparent synergistic relationship by 2006. By implementing a spatiotemporal perspective on multiple services, we provide clear evidence of the dynamic nature of ES interactions and contribute to identifying processes and drivers behind these changing relationships. Our study raises questions about using snapshots of ES provision at a single point in time to build our understanding of ES relationships in complex and dynamic social-ecological systems.

  12. MICA: Multiple interval-based curve alignment

    NASA Astrophysics Data System (ADS)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analyses pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA

  13. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) that are constructed for multiple piecewise linear (PWL) models that are constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
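
    As a rough illustration of the multiple-model idea (not the paper's LMI-based robust filter design), the sketch below runs a small bank of ordinary Kalman filters, one per sensor-fault hypothesis, and selects the hypothesis with the smallest accumulated normalized innovation; the plant, noise levels, and bias magnitude are invented for the example.

```python
# Toy multiple-model sensor fault isolation with ordinary Kalman filters.
# The robust (LMI-based) gain design of the paper is not reproduced here.
import numpy as np

A = np.array([[0.95]])          # toy scalar plant dynamics
C = np.array([[1.0]])           # sensor model
Q, R = 1e-5, 1e-2               # process and measurement noise variances

def kf_step(x, P, y, bias):
    """One Kalman predict/update step under an assumed constant sensor bias."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    innov = y - (C @ x_pred + bias)
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T / S
    x_new = x_pred + K * innov
    P_new = (np.eye(1) - K @ C) @ P_pred
    return x_new, P_new, (innov**2 / S).item()   # normalized innovation squared

hypotheses = {"healthy": 0.0, "sensor bias fault": 0.5}
filters = {h: (np.zeros((1, 1)), np.eye(1)) for h in hypotheses}
nis_sum = {h: 0.0 for h in hypotheses}

rng = np.random.default_rng(0)
x_true = np.zeros((1, 1))
for k in range(200):
    x_true = A @ x_true + rng.normal(0.0, Q**0.5, (1, 1))
    y = C @ x_true + rng.normal(0.0, R**0.5, (1, 1)) + (0.5 if k > 100 else 0.0)
    for h, bias in hypotheses.items():
        x, P, nis = kf_step(*filters[h], y, bias)
        filters[h] = (x, P)
        if k >= 150:                     # score hypotheses over the last 50 steps
            nis_sum[h] += nis

print("selected hypothesis:", min(nis_sum, key=nis_sum.get))   # expect the bias-fault model
```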

  14. [Current aspects of therapy conversion for multiple sclerosis].

    PubMed

    Kolber, P; Luessi, F; Meuth, S G; Klotz, L; Korn, T; Trebst, C; Tackenberg, B; Kieseier, B; Kümpfel, T; Fleischer, V; Tumani, H; Wildemann, B; Lang, M; Flachenecker, P; Meier, U; Brück, W; Limmroth, V; Haghikia, A; Hartung, H-P; Stangel, M; Hohlfeld, R; Hemmer, B; Gold, R; Wiendl, H; Zipp, F

    2015-10-01

    In recent years the approval of new substances has led to a substantial increase in the number of course-modifying immunotherapies available for multiple sclerosis. Therapy conversion therefore represents an increasing challenge. The treatment options sometimes show complex adverse effect profiles and necessitate a long-term and comprehensive monitoring. This article presents an overview of therapy conversion of immunotherapies for multiple sclerosis in accordance with the recommendations of the Disease-Related Competence Network for Multiple Sclerosis and the German Multiple Sclerosis Society as well as the guidelines on diagnostics and therapy for multiple sclerosis of the German Society of Neurology and the latest research results. At the present point in time it should be noted that no studies have been carried out for most of the approaches for therapy conversion given here; however, the recommendations are based on theoretical considerations and therefore correspond to recommendations at the level of expert consensus, which is currently essential for the clinical daily routine.

  15. Lung function in type 2 diabetes: the Normative Aging Study.

    PubMed

    Litonjua, Augusto A; Lazarus, Ross; Sparrow, David; Demolles, Debbie; Weiss, Scott T

    2005-12-01

    Cross-sectional studies have noted that subjects with diabetes have lower lung function than non-diabetic subjects. We conducted this analysis to determine whether diabetic subjects have different rates of lung function change compared with non-diabetic subjects. We conducted a nested case-control analysis in 352 men who developed diabetes and 352 non-diabetic subjects in a longitudinal observational study of aging in men. We assessed lung function among cases and controls at three time points: Time0, prior to meeting the definition of diabetes; Time1, the point when the definition of diabetes was met; and Time2, the most recent follow-up exam. Cases had lower forced expiratory volume in 1s (FEV1) and forced vital capacity (FVC) at all time points, even with adjustment for age, height, weight, and smoking. In multiple linear regression models adjusting for relevant covariates, there were no differences in rates of FEV1 or FVC change over time between cases and controls. Men who are predisposed to develop diabetes have decreased lung function many years prior to the diagnosis, compared with men who do not develop diabetes. This decrement in lung function remains after the development of diabetes. We postulate that mechanisms involved in the insulin resistant state contribute to the diminished lung function observed in our subjects.

  16. Computer-assisted 3D kinematic analysis of all leg joints in walking insects.

    PubMed

    Bender, John A; Simpson, Elaine M; Ritzmann, Roy E

    2010-10-26

    High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
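
    The geometric core of multi-camera kinematics is triangulating a joint's 3D position from its two calibrated views; the hedged sketch below does this with OpenCV's linear triangulation, using made-up projection matrices and image coordinates rather than anything from the described system.

```python
# Sketch: 3D joint position from two calibrated views by linear triangulation.
# Projection matrices and point coordinates are hypothetical placeholders.
import numpy as np
import cv2

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera 1: identity pose
R, _ = cv2.Rodrigues(np.array([[0.0], [0.2], [0.0]]))     # camera 2 rotated 0.2 rad about y
P2 = np.hstack([R, np.array([[-0.1], [0.0], [0.0]])])     # ...and shifted by a 0.1 baseline

pt1 = np.array([[0.12], [0.34]])   # the same painted joint in camera 1 (normalized coordinates)
pt2 = np.array([[0.10], [0.33]])   # ...and in camera 2

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                  # Euclidean 3D coordinates
print("reconstructed 3D point:", X)
```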

  17. Pointing with Power or Creating with Chalk

    ERIC Educational Resources Information Center

    Rudow, Sasha R.; Finck, Joseph E.

    2015-01-01

    This study examines the attitudes of students on the use of PowerPoint and chalk/white boards in college science lecture classes. Students were asked to complete a survey regarding their experiences with PowerPoint and chalk/white boards in their science classes. Both multiple-choice and short answer questions were used. The multiple-choice…

  18. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    NASA Astrophysics Data System (ADS)

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.

  19. Realtime control of multiple-focus phased array heating patterns based on noninvasive ultrasound thermography.

    PubMed

    Casper, Andrew; Liu, Dalong; Ebbini, Emad S

    2012-01-01

    A system for the realtime generation and control of multiple-focus ultrasound phased-array heating patterns is presented. The system employs a 1-MHz, 64-element array and driving electronics capable of fine spatial and temporal control of the heating pattern. The driver is integrated with a realtime 2-D temperature imaging system implemented on a commercial scanner. The coordinates of the temperature control points are defined on B-mode guidance images from the scanner, together with the temperature set points and controller parameters. The temperature at each point is controlled by an independent proportional, integral, and derivative controller that determines the focal intensity at that point. Optimal multiple-focus synthesis is applied to generate the desired heating pattern at the control points. The controller dynamically reallocates the power available among the foci from the shared power supply upon reaching the desired temperature at each control point. Furthermore, anti-windup compensation is implemented at each control point to improve the system dynamics. In vitro experiments in tissue-mimicking phantom demonstrate the robustness of the controllers for short (2-5 s) and longer multiple-focus high-intensity focused ultrasound exposures. Thermocouple measurements in the vicinity of the control points confirm the dynamics of the temperature variations obtained through noninvasive feedback. © 2011 IEEE
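
    A minimal sketch of the per-point control idea, assuming a toy first-order thermal response in place of tissue: each control point gets its own PID loop with a simple clamping anti-windup, and the gains, set points, and plant constants are illustrative only.

```python
# Sketch: one independent PID temperature controller per control point.
# Gains, set points, and the thermal model are invented for the example.
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt, u_max):
        self.kp, self.ki, self.kd, self.dt, self.u_max = kp, ki, kd, dt, u_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, set_point, measured):
        err = set_point - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        if not (0.0 <= u <= self.u_max):     # crude anti-windup:
            self.integral -= err * self.dt   # undo the last integration and clamp
            u = min(max(u, 0.0), self.u_max)
        return u                             # focal intensity command for this point

dt = 0.1
controllers = [PID(kp=2.0, ki=0.5, kd=0.05, dt=dt, u_max=5.0) for _ in range(3)]
temps = np.array([37.0, 37.0, 37.0])     # current temperatures at the control points (deg C)
targets = np.array([43.0, 45.0, 44.0])   # temperature set points

for _ in range(300):                     # 30 s of simulated heating
    powers = np.array([c.step(t, T) for c, t, T in zip(controllers, targets, temps)])
    temps += dt * (1.5 * powers - 0.2 * (temps - 37.0))   # toy first-order heat model
print(np.round(temps, 2))                # should settle close to the set points
```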

  20. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body points' time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body points' time series, spatiotemporal gait parameters, and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body points' time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters walking speed, cadence, step length, stride length, step width, step time, and stride time (all obtained for the intermediate 6 meters), and for the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body points' time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body points' time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
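
    For readers unfamiliar with the agreement statistics quoted above, the sketch below computes a Bland-Altman bias and 95% limits of agreement for a pair of hypothetical step-length series; the numbers are not from the study.

```python
# Sketch: Bland-Altman bias and limits of agreement between two measurement
# systems. The paired measurements below are hypothetical.
import numpy as np

system_a = np.array([0.62, 0.71, 0.55, 0.68, 0.74, 0.60])   # step length (m), Kinect-like
system_b = np.array([0.63, 0.70, 0.56, 0.70, 0.73, 0.61])   # step length (m), gold standard

diff = system_a - system_b
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
print(f"bias = {bias:.3f} m, limits of agreement = "
      f"[{bias - half_width:.3f}, {bias + half_width:.3f}] m")
```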

  1. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448

  2. Integrating the ACR Appropriateness Criteria Into the Radiology Clerkship: Comparison of Didactic Format and Group-Based Learning.

    PubMed

    Stein, Marjorie W; Frank, Susan J; Roberts, Jeffrey H; Finkelstein, Malka; Heo, Moonseong

    2016-05-01

    The aim of this study was to determine whether group-based or didactic teaching is more effective to teach ACR Appropriateness Criteria to medical students. An identical pretest, posttest, and delayed multiple-choice test was used to evaluate the efficacy of the two teaching methods. Descriptive statistics comparing test scores were obtained. On the posttest, the didactic group gained 12.5 points (P < .0001), and the group-based learning students gained 16.3 points (P < .0001). On the delayed test, the didactic group gained 14.4 points (P < .0001), and the group-based learning students gained 11.8 points (P < .001). The gains in scores on both tests were statistically significant for both groups. However, the differences in scores were not statistically significant comparing the two educational methods. Compared with didactic lectures, group-based learning is more enjoyable, time efficient, and equally efficacious. The choice of educational method can be individualized for each institution on the basis of group size, time constraints, and faculty availability. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  4. Advanced Optimal Extraction for the Spitzer/IRS

    NASA Astrophysics Data System (ADS)

    Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.

    2010-02-01

    We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
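
    As background on PSF-weighted extraction, the sketch below applies the classical optimal-extraction weighting (Horne 1986) to one column of synthetic, sky-subtracted data; it is a generic illustration, not the AdOpt plug-in's algorithm.

```python
# Sketch: classical optimal-extraction weighting for a point source, assuming
# sky-subtracted data. Profile, data, and variances are synthetic.
import numpy as np

profile = np.array([0.05, 0.25, 0.40, 0.25, 0.05])  # normalized spatial PSF (sums to 1)
data = np.array([1.2, 5.8, 9.7, 6.1, 1.0])          # observed counts across the slit
variance = np.array([1.1, 1.5, 2.0, 1.5, 1.1])      # per-pixel variance estimate

# Optimal flux estimate: weights proportional to profile / variance.
flux = np.sum(profile * data / variance) / np.sum(profile**2 / variance)
flux_var = 1.0 / np.sum(profile**2 / variance)      # variance of the flux estimate
print(f"flux = {flux:.2f} +/- {flux_var**0.5:.2f}")
```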

  5. Analysis of the Yule-Nielsen effect with the multiple-path point spread function in a frequency-modulated halftone.

    PubMed

    Rogers, Geoffrey

    2018-06-01

    The Yule-Nielsen effect is an influence on halftone color caused by the diffusion of light within the paper upon which the halftone ink is printed. The diffusion can be characterized by a point spread function. In this paper, a point spread function for paper is derived using the multiple-path model of reflection. This model treats the interaction of light with turbid media as a random walk. Using the multiple-path point spread function, a general expression is derived for the average reflectance of light from a frequency-modulated halftone, in which dot size is constant and the number of dots is varied, with the arrangement of dots random. It is also shown that the line spread function derived from the multiple-path model has the form of a Lorentzian function.
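
    For orientation, the sketch below evaluates the classical Yule-Nielsen modified Murray-Davies reflectance, in which an empirical n-factor stands in for the lateral light diffusion that the paper models with a multiple-path point spread function; the reflectance values are typical placeholders.

```python
# Sketch: Yule-Nielsen modified Murray-Davies halftone reflectance.
# Standard empirical form; not the multiple-path derivation of the paper.
def yule_nielsen_reflectance(a, r_ink, r_paper, n):
    """a: fractional dot coverage; r_ink, r_paper: solid reflectances; n: YN factor."""
    return (a * r_ink ** (1.0 / n) + (1.0 - a) * r_paper ** (1.0 / n)) ** n

# 30% coverage with typical values; n = 1 recovers the plain Murray-Davies equation.
print(yule_nielsen_reflectance(a=0.3, r_ink=0.05, r_paper=0.85, n=1.8))
```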

  6. Consistency of multi-time Dirac equations with general interaction potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deckert, Dirk-André, E-mail: deckert@math.lmu.de; Nickel, Lukas, E-mail: nickel@math.lmu.de

    In 1932, Dirac proposed a formulation in terms of multi-time wave functions as a candidate for relativistic many-particle quantum mechanics. A well-known consistency condition that is necessary for the existence of solutions strongly restricts the possible interaction types between the particles. It was conjectured by Petrat and Tumulka that interactions described by multiplication operators are generally excluded by this condition, and they gave a proof of this claim for potentials without spin coupling. Under suitable assumptions on the differentiability of possible solutions, we show that there are potentials which are admissible and give an explicit example; however, we also show that none of them fulfills the physically desirable Poincaré invariance. We conclude that in this sense Dirac's multi-time formalism does not allow interactions to be modeled by multiplication operators, and we briefly point out several promising approaches to interacting models that one can pursue instead.

  7. Global robust stability of bidirectional associative memory neural networks with multiple time delays.

    PubMed

    Senan, Sibel; Arik, Sabri

    2007-10-01

    This correspondence presents a sufficient condition for the existence, uniqueness, and global robust asymptotic stability of the equilibrium point for bidirectional associative memory neural networks with discrete time delays. The results impose constraint conditions on the network parameters of the neural system independently of the delay parameter, and they are applicable to all bounded continuous nonmonotonic neuron activation functions. Some numerical examples are given to compare our results with the previous robust stability results derived in the literature.

  8. The Importance and Role of Intracluster Correlations in Planning Cluster Trials

    PubMed Central

    Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark

    2008-01-01

    There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
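
    As a reminder of why these correlations matter at the planning stage, the sketch below inflates an individually randomized sample size by the familiar single-ICC design effect; the paper's formulae for nested cross-sectional designs involve a second, cross-time ICC that is not reproduced here, and the example numbers are arbitrary.

```python
# Sketch: standard single-ICC design effect for cluster trial planning.
# The two-ICC extension discussed in the paper is not reproduced here.
def inflated_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    """Inflate an individually randomized sample size by the design effect."""
    deff = 1.0 + (cluster_size - 1) * icc
    return int(round(n_individual * deff))

# e.g. 400 subjects needed under individual randomization, 50 per community,
# within-community ICC of 0.02 at a single time point:
print(inflated_sample_size(400, 50, 0.02))   # -> 792
```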

  9. Multifocal Fluorescence Microscope for Fast Optical Recordings of Neuronal Action Potentials

    PubMed Central

    Shtrahman, Matthew; Aharoni, Daniel B.; Hardy, Nicholas F.; Buonomano, Dean V.; Arisaka, Katsushi; Otis, Thomas S.

    2015-01-01

    In recent years, optical sensors for tracking neural activity have been developed and offer great utility. However, developing microscopy techniques that have several kHz bandwidth necessary to reliably capture optically reported action potentials (APs) at multiple locations in parallel remains a significant challenge. To our knowledge, we describe a novel microscope optimized to measure spatially distributed optical signals with submillisecond and near diffraction-limit resolution. Our design uses a spatial light modulator to generate patterned illumination to simultaneously excite multiple user-defined targets. A galvanometer driven mirror in the emission path streaks the fluorescence emanating from each excitation point during the camera exposure, using unused camera pixels to capture time varying fluorescence at rates that are ∼1000 times faster than the camera’s native frame rate. We demonstrate that this approach is capable of recording Ca2+ transients resulting from APs in neurons labeled with the Ca2+ sensor Oregon Green Bapta-1 (OGB-1), and can localize the timing of these events with millisecond resolution. Furthermore, optically reported APs can be detected with the voltage sensitive dye DiO-DPA in multiple locations within a neuron with a signal/noise ratio up to ∼40, resolving delays in arrival time along dendrites. Thus, the microscope provides a powerful tool for photometric measurements of dynamics requiring submillisecond sampling at multiple locations. PMID:25650920

  10. Optical multi-point measurements of the acoustic particle velocity with frequency modulated Doppler global velocimetry.

    PubMed

    Fischer, Andreas; König, Jörg; Haufe, Daniel; Schlüssler, Raimund; Büttner, Lars; Czarske, Jürgen

    2013-08-01

    To reduce the noise of machines such as aircraft engines, the development and propagation of sound has to be investigated. Since the applicability of microphones is limited due to their intrusiveness, contactless measurement techniques are required. For this reason, the present study describes an optical method based on the Doppler effect and its application for acoustic particle velocity (APV) measurements. While former APV measurements with Doppler techniques are point measurements, the applied system is capable of simultaneous measurements at multiple points. In its current state, the system provides linear array measurements of one component of the APV demonstrated by multi-tone experiments with tones up to 17 kHz for the first time.

  11. Thermosolutal convection during directional solidification. II - Flow transitions

    NASA Technical Reports Server (NTRS)

    Mcfadden, G. B.; Coriell, S. R.

    1987-01-01

    The influence of thermosolutal convection on solute segregation in crystals grown by vertical directional solidification of binary metallic alloys or semiconductors is studied. Finite differences are used in a two-dimensional time-dependent model which assumes a planar crystal-melt interface to obtain numerical results. It is assumed that the configuration is periodic in the horizontal direction. Consideration is given to the possibility of multiple flow states sharing the same period. The results are represented in bifurcation diagrams of the nonlinear states associated with the critical points of linear theory. Variations of the solutal Rayleigh number can lead to the occurrence of multiple steady states, time-periodic states, and quasi-periodic states. This case is compared to that of thermosolutal convection with linear vertical gradients and stress-free boundaries.

  12. Imputation of Test Scores in the National Education Longitudinal Study of 1988 (NELS:88). Working Paper Series.

    ERIC Educational Resources Information Center

    Bokossa, Maxime C.; Huang, Gary G.

    This report describes the imputation procedures used to deal with missing data in the National Education Longitudinal Study of 1988 (NELS:88), the only current National Center for Education Statistics (NCES) dataset that contains scores from cognitive tests given the same set of students at multiple time points. As is inevitable, cognitive test…

  13. Hopelessness as a Predictor of Attempted Suicide among First Admission Patients with Psychosis: A 10-Year Cohort Study

    ERIC Educational Resources Information Center

    Klonsky, E. David; Kotov, Roman; Bakst, Shelly; Rabinowitz, Jonathan; Bromet, Evelyn J.

    2012-01-01

    Little is known about the longitudinal relationship of hopelessness to attempted suicide in psychotic disorders. This study addresses this gap by assessing hopelessness and attempted suicide at multiple time-points over 10 years in a first-admission cohort with psychosis (n = 414). Approximately one in five participants attempted suicide during…

  14. A two-layer multiple-time-scale turbulence model and grid independence study

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A two-layer multiple-time-scale turbulence model is presented. The near-wall model is based on the classical Kolmogorov-Prandtl turbulence hypothesis and the semi-empirical logarithmic law of the wall. In the two-layer model presented, the computational domain of the conservation-of-mass and mean-momentum equations extends to the wall, where the no-slip boundary condition is prescribed, while the near-wall boundary of the turbulence equations is located in the fully turbulent region, yet very close to the wall, where the standard wall function method is applied. Thus, the conservation-of-mass constraint can be satisfied more rigorously in the two-layer model than with the standard wall function method. In most two-layer turbulence models, the number of grid points required inside the near-wall layer raises the issue of computational efficiency. The present finite element results showed that grid-independent solutions were obtained with as few as two grid points, i.e., one quadratic element, inside the near-wall layer. A comparison of the results obtained with the two-layer model and those obtained with the wall function method is also presented.
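
    For reference, the logarithmic law of the wall invoked above can be written u+ = (1/κ) ln(y+) + B; the short sketch below evaluates it with the commonly quoted constants κ ≈ 0.41 and B ≈ 5.0, which are standard textbook values rather than those used in the paper.

```python
# Sketch: semi-empirical logarithmic law of the wall with textbook constants.
import numpy as np

def u_plus_log_law(y_plus, kappa=0.41, B=5.0):
    """Dimensionless mean velocity in the fully turbulent (log-law) region."""
    return np.log(y_plus) / kappa + B

for yp in (30.0, 100.0, 300.0):
    print(f"y+ = {yp:5.0f}  ->  u+ = {u_plus_log_law(yp):.2f}")
```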

  15. Extracting similar terms from multiple EMR-based semantic embeddings to support chart reviews.

    PubMed

    Cheng Ye, M S; Fabbri, Daniel

    2018-05-21

    Word embeddings project semantically similar terms into nearby points in a vector space. When trained on clinical text, these embeddings can be leveraged to improve keyword search and text highlighting. In this paper, we present methods to refine the selection process of similar terms from multiple EMR-based word embeddings, and evaluate their performance quantitatively and qualitatively across multiple chart review tasks. Word embeddings were trained on each clinical note type in an EMR. These embeddings were then combined, weighted, and truncated to select a refined set of similar terms to be used in keyword search and text highlighting. To evaluate their quality, we measured the similar terms' information retrieval (IR) performance using precision-at-K (P@5, P@10). Additionally a user study evaluated users' search term preferences, while a timing study measured the time to answer a question from a clinical chart. The refined terms outperformed the baseline method's information retrieval performance (e.g., increasing the average P@5 from 0.48 to 0.60). Additionally, the refined terms were preferred by most users, and reduced the average time to answer a question. Clinical information can be more quickly retrieved and synthesized when using semantically similar term from multiple embeddings. Copyright © 2018. Published by Elsevier Inc.
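
    The retrieval metric used above is easy to make concrete: the sketch below computes precision-at-K for a ranked list of similar terms against a hypothetical set of relevant terms; both lists are invented for illustration.

```python
# Sketch: precision-at-K for a ranked list of similar terms. Terms are hypothetical.
def precision_at_k(ranked_terms, relevant, k):
    top = ranked_terms[:k]
    return sum(1 for term in top if term in relevant) / k

ranked = ["mi", "infarction", "stemi", "troponin", "angina",
          "cabg", "ischemia", "nstemi", "chest pain", "statin"]
relevant = {"infarction", "stemi", "nstemi", "ischemia", "troponin", "angina"}

print("P@5 =", precision_at_k(ranked, relevant, 5))    # 4/5 = 0.8
print("P@10 =", precision_at_k(ranked, relevant, 10))  # 6/10 = 0.6
```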

  16. Indicator saturation: a novel approach to detect multiple breaks in geodetic time series.

    NASA Astrophysics Data System (ADS)

    Jackson, L. P.; Pretis, F.; Williams, S. D. P.

    2016-12-01

    Geodetic time series can record long term trends, quasi-periodic signals at a variety of time scales from days to decades, and sudden breaks due to natural or anthropogenic causes. The causes of breaks range from instrument replacement to earthquakes to unknown (i.e. no attributable cause). Furthermore, breaks can be permanent or short-lived and range at least two orders of magnitude in size (mm to 100's mm). To account for this range of possible signal-characteristics requires a flexible time series method that can distinguish between true and false breaks, outliers and time-varying trends. One such method, Indicator Saturation (IS) comes from the field of econometrics where analysing stochastic signals in these terms is a common problem. The IS approach differs from alternative break detection methods by considering every point in the time series as a break until it is demonstrated statistically that it is not. A linear model is constructed with a break function at every point in time, and all but statistically significant breaks are removed through a general-to-specific model selection algorithm for more variables than observations. The IS method is flexible because it allows multiple breaks of different forms (e.g. impulses, shifts in the mean, and changing trends) to be detected, while simultaneously modelling any underlying variation driven by additional covariates. We apply the IS method to identify breaks in a suite of synthetic GPS time series used for the Detection of Offsets in GPS Experiments (DOGEX). We optimise the method to maximise the ratio of true-positive to false-positive detections, which improves estimates of errors in the long term rates of land motion currently required by the GPS community.
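
    A toy version of the step-indicator-saturation idea is sketched below: a step dummy is added at every time point, the dummies are entered in two half-sample blocks, and only those significant at a tight threshold are retained. This naive split is only meant to convey the idea; the general-to-specific selection algorithm used in practice is considerably more refined, and the simulated series is arbitrary.

```python
# Toy step-indicator saturation on a simulated series with one genuine offset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
y = 2.0 + 0.01 * np.arange(n) + rng.normal(0, 0.5, n)
y[70:] += 3.0                                        # a genuine offset at t = 70

t = np.arange(n)
steps = (t[:, None] >= t[None, 1:]).astype(float)    # one step dummy per candidate break

kept = []
for block in (range(0, steps.shape[1], 2), range(1, steps.shape[1], 2)):
    X = sm.add_constant(np.column_stack([t, steps[:, list(block)]]))
    res = sm.OLS(y, X).fit()
    pvals = res.pvalues[2:]                          # skip constant and trend
    kept += [list(block)[i] for i, p in enumerate(pvals) if p < 0.001]

# The dummy at (or next to) the true break time should survive selection.
print("retained break candidates (time index):", sorted(set(k + 1 for k in kept)))
```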

  17. The Multiple Control of Verbal Behavior

    PubMed Central

    Michael, Jack; Palmer, David C; Sundberg, Mark L

    2011-01-01

    Amid the novel terms and original analyses in Skinner's Verbal Behavior, the importance of his discussion of multiple control is easily missed, but multiple control of verbal responses is the rule rather than the exception. In this paper we summarize and illustrate Skinner's analysis of multiple control and introduce the terms convergent multiple control and divergent multiple control. We point out some implications for applied work and discuss examples of the role of multiple control in humor, poetry, problem solving, and recall. Joint control and conditional discrimination are discussed as special cases of multiple control. We suggest that multiple control is a useful analytic tool for interpreting virtually all complex behavior, and we consider the concepts of derived relations and naming as cases in point. PMID:22532752

  18. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa

    PubMed Central

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P.J.; Ingram, Ben R.

    2015-01-01

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs. PMID:25729318

  19. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa.

    PubMed

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P J; Ingram, Ben R

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs.

  20. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
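
    As a baseline for comparison with systematic point sets, the sketch below estimates a two-variable integral by plain Monte Carlo sampling of the unit square; the integrand is an arbitrary smooth example, not one from the radiative-transfer problem.

```python
# Sketch: plain Monte Carlo estimate of a two-variable integral over the unit square.
import numpy as np

rng = np.random.default_rng(42)

def f(x, y):
    return np.exp(-(x**2 + y**2))        # toy smooth integrand

N = 100_000
x, y = rng.random(N), rng.random(N)      # uniform random sample points in [0,1] x [0,1]
vals = f(x, y)
estimate = vals.mean()                   # the region has unit area
stderr = vals.std(ddof=1) / N**0.5
print(f"integral estimate = {estimate:.4f} +/- {stderr:.4f}   (exact value is about 0.5577)")
```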

  1. A novel in vitro image-based assay identifies new drug leads for giardiasis.

    PubMed

    Hart, Christopher J S; Munro, Taylah; Andrews, Katherine T; Ryan, John H; Riches, Andrew G; Skinner-Adams, Tina S

    2017-04-01

    Giardia duodenalis is an intestinal parasite that causes giardiasis, a widespread human gastrointestinal disease. Treatment of giardiasis relies on a small arsenal of compounds that can suffer from limitations including side-effects, variable treatment efficacy and parasite drug resistance. Thus new anti-Giardia drug leads are required. The search for new compounds with anti-Giardia activity currently depends on assays that can be labour-intensive, expensive and restricted to measuring activity at a single time-point. Here we describe a new in vitro assay to assess anti-Giardia activity. This image-based assay utilizes the Perkin-Elmer Operetta® and permits automated assessment of parasite growth at multiple time points without cell staining. Using this new approach, we assessed the "Malaria Box" compound set for anti-Giardia activity. Three compounds with sub-μM activity (IC50 0.6-0.9 μM) were identified as potential starting points for giardiasis drug discovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Multi-ray medical ultrasound simulation without explicit speckle modelling.

    PubMed

    Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak

    2018-05-04

    To develop a medical ultrasound (US) simulation method using T1-weighted magnetic resonance images (MRI) as the input that offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased array transducer probe. A domain model is built from input MR images. Multiple virtual acoustic rays are emerged from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusion of the reflected energy along multiple rays from multiple transducers, while phase delays due to differences in distances to transducers are taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of generating view point-dependent realistic US images with an inherent Rician distributed speckle pattern automatically. The proposed simulator can reproduce the shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We also have presented preliminary results towards the utilization of the method for real-time simulations. The proposed method offers a low-cost near-real-time wave-like simulation of realistic US images from input MR data. It can further be improved to cover the pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
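
    The quantity accumulated along each virtual ray is essentially the intensity reflection coefficient at tissue interfaces; the sketch below evaluates it from acoustic impedances, using typical textbook values rather than the simulator's own domain model.

```python
# Sketch: intensity reflection at a tissue interface from acoustic impedances.
# Impedance values (MRayl) are typical textbook figures, not from the paper.
def intensity_reflection(z1: float, z2: float) -> float:
    return ((z2 - z1) / (z2 + z1)) ** 2

z_fat, z_liver, z_bone = 1.38, 1.65, 7.8
print("fat/liver:", round(intensity_reflection(z_fat, z_liver), 4))    # weak echo
print("liver/bone:", round(intensity_reflection(z_liver, z_bone), 3))  # strong echo, shadowing
```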

  3. Positive contraction mappings for classical and quantum Schrödinger systems

    NASA Astrophysics Data System (ADS)

    Georgiou, Tryphon T.; Pavon, Michele

    2015-03-01

    The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
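
    A minimal sketch of the contractive fixed-point iteration for the discrete (Markov chain) case, assuming a toy kernel and marginals: Fortet/Sinkhorn-type diagonal scalings of the prior kernel are iterated until the bridged joint law matches the prescribed end-point marginals.

```python
# Sketch: Schrödinger system for a Markov chain solved by iterating diagonal
# scalings to a fixed point. Kernel and marginals are toy examples.
import numpy as np

K = np.array([[0.6, 0.3, 0.1],      # prior one-step transition kernel
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
mu0 = np.array([0.5, 0.3, 0.2])     # prescribed initial marginal
mu1 = np.array([0.2, 0.3, 0.5])     # prescribed final marginal

phi = np.ones(3)
for _ in range(200):                 # iterate the contraction to its fixed point
    psi = mu1 / (K.T @ phi)          # enforce the final marginal
    phi = mu0 / (K @ psi)            # enforce the initial marginal

joint = np.diag(phi) @ K @ np.diag(psi)   # bridged two-time joint distribution
print(joint.sum(axis=1))                  # -> mu0 (exact after the phi update)
print(joint.sum(axis=0))                  # -> mu1 (to numerical precision)
```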

  4. Hormone therapy use and physical quality of life in postmenopausal women with multiple sclerosis.

    PubMed

    Bove, Riley; White, Charles C; Fitzgerald, Kathryn C; Chitnis, Tanuja; Chibnik, Lori; Ascherio, Alberto; Munger, Kassandra L

    2016-10-04

    To determine the association between hormone therapy (HT) and physical quality of life (QOL) in postmenopausal women with multiple sclerosis (MS). We included female participants from the prospective Nurses' Health Study, with a diagnosis of definite or probable MS, who had completed a physical functioning assessment (PF10; subscale of the 36-Item Short-Form Health Survey QOL survey) at a time point between 3 and 10 years after their final menstrual period (early postmenopause). We assessed the association between HT use at this time point (never vs at least 12 months of systemic estrogen with/without progestin) and both PF10 and the 36-Item Short-Form Health Survey Physical Component Scale. We used a linear regression model adjusting for age, MS duration, menopause type and duration, and further for additional covariates (only ancestry was significant). Among 95 participants meeting all inclusion criteria at their first postmenopausal assessment, 61 reported HT use and 34 reported none. HT users differed from non-HT users in MS duration (p = 0.02) and menopause type (p = 0.01) but no other clinical or demographic characteristics. HT users had average PF10 scores that were 23 points higher than non-HT users (adjusted p = 0.004) and average Physical Component Scale scores that were 9.1 points higher in the 59 women with these available (adjusted p = 0.02). Longer duration of HT use was also associated with higher PF10 scores (p = 0.02, adjusted p = 0.06). Systemic HT use was associated with better physical QOL in postmenopausal women with MS in this observational study. Further studies are necessary to investigate causality. © 2016 American Academy of Neurology.

  5. STAR 3 randomized controlled trial to compare sensor-augmented insulin pump therapy with multiple daily injections in the treatment of type 1 diabetes: research design, methods, and baseline characteristics of enrolled subjects.

    PubMed

    Davis, Stephen N; Horton, Edward S; Battelino, Tadej; Rubin, Richard R; Schulman, Kevin A; Tamborlane, William V

    2010-04-01

    Sensor-augmented pump therapy (SAPT) integrates real-time continuous glucose monitoring (RT-CGM) with continuous subcutaneous insulin infusion (CSII) and offers an alternative to multiple daily injections (MDI). Previous studies provide evidence that SAPT may improve clinical outcomes among people with type 1 diabetes. Sensor-Augmented Pump Therapy for A1c Reduction (STAR) 3 is a multicenter randomized controlled trial comparing the efficacy of SAPT to that of MDI in subjects with type 1 diabetes. Subjects were randomized to either continue with MDI or transition to SAPT for 1 year. Subjects in the MDI cohort were allowed to transition to SAPT for 6 months after completion of the study. SAPT subjects who completed the study were also allowed to continue for 6 months. The primary end point was the difference between treatment groups in change in hemoglobin A1c (HbA1c) percentage from baseline to 1 year of treatment. Secondary end points included percentage of subjects with HbA1c ≤ 7% and without severe hypoglycemia, as well as area under the curve of time spent in normal glycemic ranges. Tertiary end points included percentage of subjects with HbA1c ≤ 7%, key safety end points, user satisfaction, and responses on standardized assessments. A total of 495 subjects were enrolled, and the baseline characteristics were similar between the SAPT and MDI groups. Study completion is anticipated in June 2010. Results of this randomized controlled trial should help establish whether an integrated RT-CGM and CSII system benefits patients with type 1 diabetes more than MDI.

  6. Defining toxicological tipping points in neuronal network development.

    PubMed

    Frank, Christopher L; Brown, Jasmine P; Wallace, Kathleen; Wambaugh, John F; Shah, Imran; Shafer, Timothy J

    2018-02-02

    Measuring electrical activity of neural networks by microelectrode array (MEA) has recently shown promise for screening level assessments of chemical toxicity on network development and function. Important aspects of interneuronal communication can be quantified from a single MEA recording, including individual firing rates, coordinated bursting, and measures of network synchrony, providing rich datasets to evaluate chemical effects. Further, multiple recordings can be made from the same network, including during the formation of these networks in vitro. The ability to perform multiple recording sessions over the in vitro development of network activity may provide further insight into developmental effects of neurotoxicants. In the current study, a recently described MEA-based screen of 86 compounds in primary rat cortical cultures over 12 days in vitro was revisited to establish a framework that integrates all available primary measures of electrical activity from MEA recordings into a composite metric for deviation from normal activity (total scalar perturbation). Examining scalar perturbations over time and increasing concentration of compound allowed for definition of critical concentrations or "tipping points" at which the neural networks switched from recovery to non-recovery trajectories for 42 compounds. These tipping point concentrations occurred at predominantly lower concentrations than those causing overt cell viability loss or disrupting individual network parameters, suggesting tipping points may be a more sensitive measure of network functional loss. Comparing tipping points for six compounds with plasma concentrations known to cause developmental neurotoxicity in vivo demonstrated strong concordance and suggests there is potential for using tipping points for chemical prioritization. Published by Elsevier Inc.

  7. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Leake, R. J.; Sain, M. K.

    1978-01-01

    General goals of the research were classified into two categories. The first category involves the use of modern multivariable frequency domain methods for control of engine models in the neighborhood of a quiescent point. The second category involves the use of nonlinear modelling and optimization techniques for control of engine models over a more extensive part of the flight envelope. In the frequency domain category, works were published in the areas of low-interaction design, polynomial design, and multiple setpoint studies. A number of these ideas progressed to the point at which they are starting to attract practical interest. In the nonlinear category, advances were made both in engine modelling and in the details associated with software for determination of time optimal controls. Nonlinear models for a two spool turbofan engine were expanded and refined; and a promising new approach to automatic model generation was placed under study. A two time scale scheme was developed to do two-dimensional dynamic programming, and an outward spiral sweep technique has greatly speeded convergence times in time optimal calculations.

  8. Implementation of real-time nonuniformity correction with multiple NUC tables using FPGA in an uncooled imaging system

    NASA Astrophysics Data System (ADS)

    Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho

    2009-05-01

    This paper presents a real-time implementation of non-uniformity correction (NUC). Two-point correction and one-point correction with a shutter were carried out in an uncooled imaging system intended for a missile application. To design a small, lightweight, and high-speed imaging system for a missile system, an SoPC (System on a Programmable Chip) comprising an FPGA and a soft core (MicroBlaze) was used. Real-time NUC and the generation of control signals are implemented in the FPGA. In addition, three different NUC tables were created to shorten the operating time and to reduce power consumption over a large range of environmental temperatures. The imaging system consists of optics and four electronics boards: a detector interface board, an analog-to-digital converter board, a detector signal generation board, and a power supply board. To evaluate the imaging system, the NETD was measured; it was less than 160 mK at three different environmental temperatures.
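
    For concreteness, the sketch below applies the classical two-point NUC: per-pixel gain and offset are derived from two uniform blackbody frames and then applied to raw frames. The detector model and temperatures are synthetic and are not taken from the paper's FPGA implementation.

```python
# Sketch: classical two-point nonuniformity correction on a synthetic detector.
import numpy as np

rng = np.random.default_rng(3)
shape = (4, 4)
true_gain = 1.0 + 0.1 * rng.standard_normal(shape)     # fixed-pattern gain
true_offset = 20.0 * rng.standard_normal(shape)        # fixed-pattern offset

def detector(radiance):
    """Toy detector: per-pixel linear response to a uniform scene radiance."""
    return true_gain * radiance + true_offset

cold, hot = detector(1000.0), detector(3000.0)         # uniform calibration frames
gain = (3000.0 - 1000.0) / (hot - cold)                # per-pixel correction gain
offset = 1000.0 - gain * cold                          # per-pixel correction offset

raw = detector(1800.0)                                 # an arbitrary scene level
corrected = gain * raw + offset
print(np.allclose(corrected, 1800.0))                  # fixed pattern removed -> True
```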

  9. Floating-Point Modules Targeted for Use with RC Compilation Tools

    NASA Technical Reports Server (NTRS)

    Sahin, Ibrahin; Gloster, Clay S.

    2000-01-01

    Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems and, in most cases, they provided the smallest published execution times. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time. This increased development time motivates the development of an optimized module library with an assembly-language instruction-format interface for use with future RC systems, which would reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating-point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experience show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.

  10. Fast, axis-agnostic, dynamically summarized storage and retrieval for mass spectrometry data.

    PubMed

    Handy, Kyle; Rosen, Jebediah; Gillan, André; Smith, Rob

    2017-01-01

    Mass spectrometry, a popular technique for elucidating the molecular contents of experimental samples, creates data sets comprised of millions of three-dimensional (m/z, retention time, intensity) data points that correspond to the types and quantities of analyzed molecules. Open and commercial MS data formats are arranged by retention time, creating latency when accessing data across multiple m/z. Existing MS storage and retrieval methods have been developed to overcome the limitations of retention time-based data formats, but do not provide certain features such as dynamic summarization and storage and retrieval of point meta-data (such as signal cluster membership), precluding efficient viewing applications and certain data-processing approaches. This manuscript describes MzTree, a spatial database designed to provide real-time storage and retrieval of dynamically summarized standard and augmented MS data with fast performance in both m/z and RT directions. Performance is reported on real data with comparisons against related published retrieval systems.

  11. Comparison of plan quality and delivery time between volumetric arc therapy (RapidArc) and Gamma Knife radiosurgery for multiple cranial metastases.

    PubMed

    Thomas, Evan M; Popple, Richard A; Wu, Xingen; Clark, Grant M; Markert, James M; Guthrie, Barton L; Yuan, Yu; Dobelbower, Michael C; Spencer, Sharon A; Fiveash, John B

    2014-10-01

    Volumetric modulated arc therapy (VMAT) has been shown to be feasible for radiosurgical treatment of multiple cranial lesions with a single isocenter. The aim of this study was to investigate whether equivalent radiosurgical plan quality and reduced delivery time could be achieved in VMAT for patients with multiple intracranial targets previously treated with Gamma Knife (GK) radiosurgery. We identified 28 GK treatments of multiple metastases. These were replanned for multiarc and single-arc, single-isocenter VMAT (RapidArc) in Eclipse. The prescription for all targets was standardized to 18 Gy. Each plan was normalized for 100% prescription dose to 99% to 100% of target volume. Plan quality was analyzed by target conformity (Radiation Therapy Oncology Group and Paddick conformity indices [CIs]), dose falloff (area under the dose-volume histogram curve), as well as the V4.5, V9, V12, and V18 isodose volumes. Other end points included beam-on and treatment time. Compared with GK, multiarc VMAT improved median plan conformity (CI_VMAT = 1.14, CI_GK = 1.65; P < .001) with no significant difference in median dose falloff (P = .269), 12 Gy isodose volume (P = .500), or low isodose spill (P = .49). Multiarc VMAT plans were associated with markedly reduced treatment time. A predictive model of the 12 Gy isodose volume as a function of tumor number and volume was also developed. For multiple-target stereotactic radiosurgery, 4-arc VMAT produced clinically equivalent conformity, dose falloff, 12 Gy isodose volume, and low isodose spill, and reduced treatment time compared with GK. Because of its similar plan quality and increased delivery efficiency, single-isocenter VMAT radiosurgery may constitute an attractive alternative to multi-isocenter radiosurgery for some patients.
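
    For reference, the two conformity indices named above have simple closed forms. The short sketch below (with hypothetical volumes, not the study's planning-system output) computes both, where TV is the target volume, PIV the prescription isodose volume, and TV_PIV the portion of the target covered by the prescription isodose:

      def rtog_ci(piv, tv):
          """RTOG conformity index: prescription isodose volume / target volume."""
          return piv / tv

      def paddick_ci(tv_piv, tv, piv):
          """Paddick conformity index: (target volume covered by the prescription
          isodose)^2 / (target volume * prescription isodose volume); 1.0 is ideal."""
          return (tv_piv ** 2) / (tv * piv)

      # Hypothetical volumes in cm^3.
      print(rtog_ci(piv=2.3, tv=2.0))                    # > 1 indicates spill outside the target
      print(paddick_ci(tv_piv=1.9, tv=2.0, piv=2.3))     # closer to 1 is more conformal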

  12. Hydrodynamic Limit of Multiple SLE

    NASA Astrophysics Data System (ADS)

    Hotta, Ikkei; Katori, Makoto

    2018-04-01

    Recently del Monaco and Schleißinger addressed an interesting problem: whether one can take the limit of multiple Schramm-Loewner evolution (SLE) as the number of slits N goes to infinity. When the N slits grow simultaneously from points on the real line R and go to infinity within the upper half plane H, an ordinary differential equation describing the time evolution of the conformal map g_t(z) was derived in the N → ∞ limit, which is coupled with a complex Burgers equation in the inviscid limit. It is well known that the complex Burgers equation governs the hydrodynamic limit of the Dyson model defined on R and studied in random matrix theory, and when all particles start from the origin, the solution of this Burgers equation is given by the Stieltjes transformation of the measure that follows a time-dependent version of Wigner's semicircle law. In the present paper, we first study the hydrodynamic limit of the multiple SLE in the case that all slits start from the origin. We show that the time-dependent version of Wigner's semicircle law determines the time evolution of the SLE hull, K_t ⊂ H ∪ R, in this hydrodynamic limit. Next we consider the situation in which half of the slits start from a > 0 and the other half start from -a < 0, and determine the multiple SLE in the hydrodynamic limit. After reporting these exact solutions, we discuss the universal long-term behavior of the multiple SLE and its hull K_t in the hydrodynamic limit.
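
    For orientation, the objects referenced above can be written compactly. Under one common normalization (constants vary between references), the time-dependent semicircle density, its Stieltjes transform, and the inviscid complex Burgers equation the transform satisfies are:

      % Time-dependent Wigner semicircle law (one common normalization) and its
      % Stieltjes transform; G_t solves the inviscid complex Burgers equation.
      \rho_t(x) = \frac{1}{2\pi t}\sqrt{4t - x^2}, \qquad |x| \le 2\sqrt{t},
      \qquad
      G_t(z) = \int_{\mathbb{R}} \frac{\rho_t(x)}{z - x}\,dx
             = \frac{z - \sqrt{z^2 - 4t}}{2t},
      \qquad
      \partial_t G_t(z) + G_t(z)\,\partial_z G_t(z) = 0 .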

  13. 1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra

    NASA Technical Reports Server (NTRS)

    Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.; et al.

    2013-01-01

    We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10^-13 erg cm^-2 s^-1 (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10^-14 erg cm^-2 s^-1 (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs and we report more than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.

  14. Retrieval of Surface Ozone from UV-MFRSR Irradiances using Deep Learning

    NASA Astrophysics Data System (ADS)

    Chen, M.; Sun, Z.; Davis, J.; Zempila, M.; Liu, C.; Gao, W.

    2017-12-01

    High concentrations of surface ozone are harmful to humans and plants. The USDA UV-B Monitoring and Research Program (UVMRP) uses the ultraviolet (UV) version of the Multi-Filter Rotating Shadowband Radiometer (UV-MFRSR) to measure direct, diffuse, and total irradiances every three minutes at seven UV channels (i.e., the 300, 305, 311, 317, 325, 332, and 368 nm channels, with 2 nm full width at half maximum). Based on the wavelength dependence of aerosol optical depths, a substantial literature has explored retrieval methods for total column ozone from UV-MFRSR measurements; however, few studies have explored the retrieval of surface ozone. Total column ozone is the integral over height of the product of the ozone concentration (varying with height and time) and the absorption cross section (varying with wavelength and temperature). Because of the distinctive values of the ozone cross section in the UV region, the irradiances at the seven UV channels have the potential to resolve the ozone concentration at multiple vertical layers. If the UV irradiances at multiple time points are considered together, the uncertainty and the vertical resolution of the ozone concentrations can be further improved. In this study, the surface ozone amounts at the UVMRP station located at Billings, Oklahoma, are estimated from adjacent (i.e., within 200 miles) US Environmental Protection Agency (EPA) surface ozone observations using a spatial analysis technique. Then the direct-normal UVMRP irradiances at one or more time points (as inputs) and the corresponding EPA-derived surface ozone estimates (as outputs) are fed into a pre-trained dense deep neural network (DNN) to explore the hidden non-linear relationship between them. This process could improve our understanding of their physical/mathematical relationship. Finally, the optimized DNN is tested on the reserved 5% of the dataset, which is not used during training, to verify the relationship.
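
    As a rough sketch of the regression step described above (not the UVMRP pipeline; the synthetic data, network size, and split below are placeholders), a small dense network can be fit to stacked multi-time-point channel irradiances with scikit-learn:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      # Hypothetical training table: each row stacks the 7 UV-MFRSR channel
      # irradiances at k consecutive 3-minute time points (k = 3 -> 21 inputs);
      # the target is the EPA-interpolated surface ozone (ppb) at the station.
      rng = np.random.default_rng(0)
      X = rng.random((1000, 21))
      y = 30.0 + 20.0 * X[:, 0] + rng.normal(0.0, 2.0, 1000)   # synthetic stand-in

      # Hold out 5% for final verification, mirroring the split described above.
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)

      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      model.fit(X_train, y_train)
      print("held-out R^2:", model.score(X_test, y_test))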

  15. Rheoencephalographic (REG) Assessment of Head and Neck Cooling for use with Multiple Sclerosis Patients

    NASA Technical Reports Server (NTRS)

    Montogomery, Leslie D.; Ku, Yu-Tsuan E.; Webbon, Bruce W. (Technical Monitor)

    1995-01-01

    We have prepared a computer program (RHEOSYS: RHEOencephalographic impedance trace scanning SyStem) that can be used to automate the analysis of segmental impedance blood flow waveforms. This program was developed to assist in the post-test analysis of recorded impedance traces from multiple segments of the body. It incorporates many of the blood flow, segmental volume, and vascular state indices reported in the world literature. As currently programmed, seven points are selected from each blood flow pulse and its associated ECG waveform: 1. peak of the first ECG QRS complex, 2. start of the systolic slope on the blood flow trace, 3. maximum amplitude of the impedance pulse, 4. position of the dicrotic notch, 5. maximum amplitude of the postdicrotic segment, 6. peak of the second ECG QRS complex, and 7. start of the next blood flow pulse. These points are used to calculate various geometric, area, and time-related values associated with the impedance pulse morphology. RHEOSYS then calculates a series of 34 impedance and cardiac cycle parameters, which include pulse amplitudes; areas; pulse propagation times; cardiac cycle times; and various measures of vascular tone, contractility, and pulse volume. We used this program to calculate the scalp and intracranial blood flow responses to head and neck cooling as it may be applied to lower the body temperatures of multiple sclerosis patients. Twelve women and twelve men were tested using a commercially available head and neck cooling system operated at its maximum cooling capacity for a period of 30 minutes. Head and neck cooling produced a transient change in scalp blood flow and a significant (P<0.05) decrease of approximately 30% in intracranial blood flow. Results of this experiment illustrate how REG and RHEOSYS can be used in biomedical applications.
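
    To make the landmark-based computation concrete, the sketch below (not the RHEOSYS code itself; the landmark values and index names are hypothetical) derives a few geometric and time-related indices of the kind listed above from the seven points:

      # Hypothetical landmark times (s) and impedance amplitudes (ohm) for one pulse,
      # keyed by the seven points described above.
      pulse = {
          "qrs1_t": 0.00,                        # 1. peak of first QRS complex
          "foot_t": 0.12,                        # 2. start of systolic upslope
          "peak_t": 0.28, "peak_amp": 0.085,     # 3. maximum of the impedance pulse
          "notch_t": 0.46, "notch_amp": 0.052,   # 4. dicrotic notch
          "qrs2_t": 0.86,                        # 6. peak of second QRS complex
      }

      def pulse_metrics(p):
          """A few geometric/time-related indices of the kind RHEOSYS reports."""
          return {
              "pulse_amplitude": p["peak_amp"],                  # systolic wave height
              "crest_time": p["peak_t"] - p["foot_t"],           # upslope duration
              "propagation_time": p["foot_t"] - p["qrs1_t"],     # QRS peak to pulse foot
              "cardiac_cycle_time": p["qrs2_t"] - p["qrs1_t"],   # R-R interval
              "dicrotic_ratio": p["notch_amp"] / p["peak_amp"],  # crude vascular-tone proxy
          }

      print(pulse_metrics(pulse))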

  16. Montblanc1: GPU accelerated radio interferometer measurement equations in support of Bayesian inference for radio observations

    NASA Astrophysics Data System (ADS)

    Perkins, S. J.; Marais, P. C.; Zwart, J. T. L.; Natarajan, I.; Tasse, C.; Smirnov, O.

    2015-09-01

    We present Montblanc, a GPU implementation of the Radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. χ2 values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model. As most of the elements of the RIME and χ2 calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple χ2 values. Modified model parameters are transferred to the GPU between each iteration. We implemented Montblanc as a Python package based upon NVIDIA's CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc's RIME implementation is performant: on an NVIDIA K40, it is approximately 250 times faster than MEQTREES on a dual hexacore Intel E5-2620v2 CPU. Compared to the OSKAR simulator's GPU-implemented RIME components, it is 7.7 and 12 times faster on the same K40 for single- and double-precision floating point, respectively. However, OSKAR's RIME implementation is more general than Montblanc's BIRO-tailored RIME. Theoretical analysis of Montblanc's dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in the L1 and L2 caches.
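
    The likelihood term driving the sampling is a weighted χ2 over complex visibilities. A minimal numpy sketch (not Montblanc's CUDA kernels; the data below are synthetic) is:

      import numpy as np

      def chi_squared(v_obs, v_model, weight):
          """Weighted chi^2 between observed and model complex visibilities;
          weight is typically 1/sigma^2 per visibility sample."""
          residual = v_obs - v_model
          return np.sum(weight * (residual.real**2 + residual.imag**2))

      # Hypothetical data: a handful of complex visibilities.
      rng = np.random.default_rng(1)
      v_obs = rng.normal(size=8) + 1j * rng.normal(size=8)
      v_model = v_obs + 0.1 * (rng.normal(size=8) + 1j * rng.normal(size=8))
      print(chi_squared(v_obs, v_model, weight=np.full(8, 4.0)))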

  17. Effectiveness of rehabilitation in multiple sclerosis relapse on fatigue, self-efficacy and physical activity.

    PubMed

    Nedeljkovic, Una; Raspopovic, Emilija Dubljanin; Ilic, Nela; Vujadinovic, Sanja Tomanovic; Soldatovic, Ivan; Drulovic, Jelena

    2016-09-01

    Relapse of disease is one of the most prominent characteristics of multiple sclerosis. The effectiveness of rehabilitation programmes on fatigue, self-efficacy (SE) and physical activity (PA) has not been investigated so far in the context of relapse. The aim of our study was to examine whether a rehabilitation programme in addition to high-dose methylprednisolone (HDMP) during relapse can influence fatigue, SE and PA more than corticosteroid therapy alone. Patients were randomized to a control group receiving only HDMP and an experimental group which was additionally included in a rehabilitation programme. Outcome measures were the Fatigue Severity Scale (FSS), Multiple Sclerosis Self-Efficacy Scale (MSSES) and Godin Leisure-Time Exercise Questionnaire (GLTEQ), completed at baseline and 1 and 3 months later. There was no significant change in FSS at either time point, despite different trends between groups. The mean MSSES for function and control improved significantly in the treatment group after 1 month (807.1 ± 96.8, p = 0.005; 665.3 ± 145.1, p = 0.05) and 3 months (820 ± 83.5, p = 0.004; 720.0 ± 198.2, p = 0.016) compared with baseline values. The mean GLTEQ score was significantly higher in the treatment group than in the control group at both follow-up time points (45.7 ± 7.6, p < 0.001; 34.3 ± 22.4, p < 0.01). Rehabilitation started alongside corticosteroid treatment induced a significant improvement in PA compared with HDMP therapy alone. It also produced noticeable changes in self-efficacy, but the effect on fatigue was insufficient.

  18. Simulated annealing model of acupuncture

    NASA Astrophysics Data System (ADS)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can perturb a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) than perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, owing to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and young, but are usually mixed if the patients are old, frail and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and the patient's age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.

  19. Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.

    PubMed

    Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu

    2016-11-01

    In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
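
    For context, graph-based semi-supervised annotation methods of this kind build on the standard graph-regularized objective below (a generic formulation, not the specific OGL objective), where W is the similarity graph, L = D - W its Laplacian, y the partial tags on the labeled set, and λ a trade-off weight:

      % Generic graph-regularized (manifold) objective underlying graph-based
      % semi-supervised annotation; f are predicted labels, y the partial tags.
      \min_{f}\ \| f_{\mathcal{L}} - y_{\mathcal{L}} \|_2^2 + \lambda\, f^{\top} L f,
      \qquad
      f^{\top} L f = \tfrac{1}{2}\sum_{i,j} W_{ij}\,(f_i - f_j)^2,
      \qquad
      L = D - W,\quad D_{ii} = \sum_{j} W_{ij}.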

  20. Comparison of Multiple Molecular Dynamics Trajectories Calculated for the Drug-Resistant HIV-1 Integrase T66I/M154I Catalytic Domain

    PubMed Central

    Brigo, Alessandro; Lee, Keun Woo; Iurcu Mustata, Gabriela; Briggs, James M.

    2005-01-01

    HIV-1 integrase (IN) is an essential enzyme for the viral replication and an interesting target for the design of new pharmaceuticals for multidrug therapy of AIDS. Single and multiple mutations of IN at residues T66, S153, or M154 confer degrees of resistance to several inhibitors that prevent the enzyme from performing its normal strand transfer activity. Four different conformations of IN were chosen from a prior molecular dynamics (MD) simulation on the modeled IN T66I/M154I catalytic core domain as starting points for additional MD studies. The aim of this article is to understand the dynamic features that may play roles in the catalytic activity of the double mutant enzyme in the absence of any inhibitor. Moreover, we want to verify the influence of using different starting points on the MD trajectories and associated dynamical properties. By comparison of the trajectories obtained from these MD simulations we have demonstrated that the starting point does not affect the conformational space explored by this protein and that the time of the simulation is long enough to achieve convergence for this system. PMID:15764656

  1. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
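
    For reference, a doubly stochastic Poisson (Cox) process is defined by conditioning Poisson counts on a random intensity; in one standard form:

      % Conditional on the random intensity process \lambda(t), counts over a
      % time set A are Poisson with mean \Lambda(A).
      P\bigl(N(A) = k \mid \lambda(\cdot)\bigr) = \frac{\Lambda(A)^{k}}{k!}\, e^{-\Lambda(A)},
      \qquad
      \Lambda(A) = \int_{A} \lambda(t)\,dt .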

  2. The role of point-of-care assessment of platelet function in predicting postoperative bleeding and transfusion requirements after coronary artery bypass grafting.

    PubMed

    Mishra, Pankaj Kumar; Thekkudan, Joyce; Sahajanandan, Raj; Gravenor, Mike; Lakshmanan, Suresh; Fayaz, Khazi Mohammed; Luckraz, Heyman

    2015-01-01

    Platelet function assessment after cardiac surgery can predict postoperative blood loss, guide transfusion requirements and discriminate the need for surgical re-exploration. We conducted this study to assess the predictive value of point-of-care platelet function testing using the Multiplate® device. Patients undergoing isolated coronary artery bypass grafting were prospectively recruited (n = 84). Group A (n = 42) patients were on anti-platelet therapy until surgery; patients in Group B (n = 42) stopped anti-platelet treatment at least 5 days preoperatively. Multiplate® and thromboelastography (TEG) tests were performed in the perioperative period. The primary end-point was excessive bleeding (>2.5 ml/kg/h) within the first 3 h postoperatively. Secondary end-points included transfusion requirements, re-exploration rates, and intensive care unit and in-hospital stays. Patients in Group A had excessive bleeding (59% vs. 33%, P = 0.02), higher re-exploration rates (14% vs. 0%, P < 0.01) and higher rates of blood (41% vs. 14%, P < 0.01) and platelet (14% vs. 2%, P = 0.05) transfusions. On multivariate analysis, preoperative platelet function testing was the most significant predictor of excessive bleeding (odds ratio [OR]: 2.3, P = 0.08), need for blood (OR: 5.5, P < 0.01) and platelet transfusion (OR: 15.1, P < 0.01). The postoperative "ASPI test" best predicted the need for transfusion (sensitivity 0.86) and excessive blood loss (sensitivity 0.81). TEG results did not correlate well with any of these outcome measures. Peri-operative platelet function assessment with Multiplate® was the strongest predictor of bleeding and transfusion requirements in patients on anti-platelet therapy until the time of surgery.

  3. The UK Earth System Models Marine Biogeochemical Evaluation Toolkit, BGC-val

    NASA Astrophysics Data System (ADS)

    de Mora, Lee

    2017-04-01

    The Biogeochemical Validation toolkit, BGC-val, is a model- and grid-independent Python-based marine model evaluation framework that automates much of the validation of the marine component of an Earth System Model. BGC-val was initially developed as a flexible and extensible system to evaluate the spin-up of the marine UK Earth System Model (UKESM). However, its grid independence and flexibility mean that it is straightforward to adapt the BGC-val framework to evaluate other marine models. In addition to the marine component of the UKESM, this toolkit has been adapted to compare multiple models, including models from the CMIP5 and iMarNet inter-comparison projects. The BGC-val toolkit produces multiple levels of analysis, which are presented in an easy-to-use interactive HTML5 document. Level 1 contains time-series analyses showing the development over time of many important biogeochemical and physical ocean metrics, such as global primary production or the Drake Passage current. The second level of BGC-val is an in-depth spatial analysis of a single point in time: a series of point-to-point comparisons of model and data in various regions, such as a comparison of surface nitrate in the model against data from the World Ocean Atlas. The third-level analyses are specialised ad-hoc packages that go in depth on a specific question, such as the development of oxygen minimum zones in the equatorial Pacific. In addition to the three levels, the HTML5 document opens with a Level 0 table summarising the status of the model run. The beta version of this toolkit is available via the Plymouth Marine Laboratory GitLab server and uses the BSD 3-clause license.

  4. Effects of corn oil administered orally on conspicuity of ultrasonographic small intestinal lesions in dogs with lymphangiectasia.

    PubMed

    Pollard, Rachel E; Johnson, Eric G; Pesavento, Patricia A; Baker, Tomas W; Cannon, Allison B; Kass, Philip H; Marks, Stanley L

    2013-01-01

    Lymphangiectasia is one of the causes of protein-losing enteropathy in dogs, and characteristic ultrasonographic small intestinal lesions have been previously described. The purpose of this study was to determine whether corn oil administered orally (COAO) would result in increased conspicuity of these characteristic small intestinal ultrasonographic lesions in dogs with lymphangiectasia. Affected dogs were included if they underwent COAO and had a surgical full-thickness intestinal biopsy diagnosis of lymphangiectasia. Control dogs had normal clinical examination and standard laboratory test findings. Ultrasound images of the duodenum, jejunum, and ileum were obtained prior to and 30, 60, 90, and 120 min after COAO for all dogs. Parameters recorded for each ultrasound study were intestinal wall thickness, mucosal echogenicity, and the presence or absence of hyperechoic mucosal striations (HMS) and a parallel hyperechoic mucosal line (PHML). Nine affected and five control dogs were included in the study. Seven of the nine dogs with lymphangiectasia had HMS prior to COAO. Jejunal HMS were significantly associated with lymphangiectasia at multiple time points (P < 0.05) and were best identified in dogs with lymphangiectasia 60 or 90 min after COAO. Increased mucosal echogenicity was observed in all dogs at multiple time points after COAO. A PHML was present in the jejunum in 4/5 healthy dogs and 6/9 dogs with lymphangiectasia at one or more time points after COAO. Findings indicated that COAO improves conspicuity of characteristic ultrasonographic lesions in dogs with lymphangiectasia; however, some of these lesions may also be present in healthy dogs that recently received a fatty meal. © 2013 Veterinary Radiology & Ultrasound.

  5. Racial and ethnic variations in phthalate metabolite concentration changes across full-term pregnancies.

    PubMed

    James-Todd, Tamarra M; Meeker, John D; Huang, Tianyi; Hauser, Russ; Seely, Ellen W; Ferguson, Kelly K; Rich-Edwards, Janet W; McElrath, Thomas F

    2017-03-01

    Higher concentrations of certain phthalate metabolites are associated with adverse reproductive and pregnancy outcomes, as well as poor infant/child health outcomes. In non-pregnant populations, phthalate metabolite concentrations vary by race/ethnicity. Few studies have documented racial/ethnic differences between phthalate metabolite concentrations at multiple time points across the full-course of pregnancy. The objective of the study was to characterize the change in phthalate metabolite concentrations by race/ethnicity across multiple pregnancy time points. Women were participants in a prospectively collected pregnancy cohort who delivered at term (≥37 weeks) and had available urinary phthalate metabolite concentrations for ≥3 time points across full-term pregnancies (n=350 women). We assessed urinary concentrations of eight phthalate metabolites that were log-transformed and specific gravity-adjusted. We evaluated the potential racial/ethnic differences in phthalate metabolite concentrations at baseline (median 10 weeks gestation) using ANOVA and across pregnancy using linear mixed models to calculate the percent change and 95% confidence intervals adjusted for sociodemographic and lifestyle factors. Almost 30% of the population were non-Hispanic black or Hispanic. With the exception of mono-(3-carboxypropyl) (MCPP) and di-ethylhexyl phthalate (DEHP) metabolites, baseline levels of phthalate metabolites were significantly higher in non-whites (P<0.05). When evaluating patterns by race/ethnicity, mono-ethyl phthalate (MEP) and MCPP had significant percent changes across pregnancy. MEP was higher in Hispanics at baseline and decreased in mid-pregnancy but increased in late pregnancy for non-Hispanic blacks. MCPP was substantially higher in non-Hispanic blacks at baseline but decreased later in pregnancy. Across pregnancy, non-Hispanic black and Hispanic women had higher concentrations of certain phthalate metabolites. These differences may have implications for racial/ethnic differences in adverse pregnancy and child health outcomes.

  6. A novel protocol for dispatcher assisted CPR improves CPR quality and motivation among rescuers-A randomized controlled simulation study.

    PubMed

    Rasmussen, Stinne Eika; Nebsbjerg, Mette Amalie; Krogh, Lise Qvirin; Bjørnshave, Katrine; Krogh, Kristian; Povlsen, Jonas Agerlund; Riddervold, Ingunn Skogstad; Grøfte, Thorbjørn; Kirkegaard, Hans; Løfgren, Bo

    2017-01-01

    Emergency dispatchers use protocols to instruct bystanders in cardiopulmonary resuscitation (CPR). Studies changing one element in the dispatcher's protocol report improved CPR quality. Whether several changes interact is unknown, and the effect of combining multiple changes previously reported to improve CPR quality into one protocol remains to be investigated. We hypothesize that a novel dispatch protocol, combining multiple beneficial elements, improves CPR quality compared with a standard protocol. A novel dispatch protocol was designed including wording on chest compressions, use of a metronome, regular encouragements and a 10-s rest each minute. In a simulated cardiac arrest scenario, laypersons were randomized to perform single-rescuer CPR guided with the novel or the standard protocol. The primary outcome was a composite endpoint of time to first compression, hand position, compression depth and rate, and hands-off time (maximum score: 22 points). Afterwards participants answered a questionnaire evaluating the dispatcher assistance. The novel protocol (n=61) improved the CPR quality score compared with the standard protocol (n=64) (mean (SD): 18.6 (1.4) points vs. 17.5 (1.7) points, p<0.001). The novel protocol resulted in deeper chest compressions (mean (SD): 58 (12) mm vs. 52 (13) mm, p=0.02) and an improved rate of correct hand position (61% vs. 36%, p=0.01) compared with the standard protocol. In both protocols, hands-off time was short. The novel protocol improved motivation among rescuers compared with the standard protocol (p=0.002). Participants guided with a standard dispatch protocol performed high-quality CPR. A novel bundle-of-care protocol improved CPR quality score and motivation among rescuers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Long-term forecasting of internet backbone traffic.

    PubMed

    Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe

    2005-09-01

    We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
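
    A compressed sketch of the two-component idea (not the authors' code; the synthetic series, wavelet choice, decomposition level, and ARIMA order below are placeholders) is:

      import numpy as np
      import pywt
      from statsmodels.tsa.arima.model import ARIMA

      # Hypothetical SNMP aggregate demand, one sample per 90 minutes (16/day).
      rng = np.random.default_rng(2)
      t = np.arange(16 * 7 * 52)                       # one year of samples
      traffic = 100 + 0.02 * t + 10 * np.sin(2 * np.pi * t / 8) + rng.normal(0, 3, t.size)

      # Wavelet MRA: keep only the coarsest approximation as the long-term trend.
      coeffs = pywt.wavedec(traffic, "db4", level=6)
      trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
      trend = pywt.waverec(trend_coeffs, "db4")[: traffic.size]

      # Weekly means of the trend, then a low-order ARIMA forecast 26 weeks ahead.
      weekly_trend = trend.reshape(-1, 16 * 7).mean(axis=1)
      forecast = ARIMA(weekly_trend, order=(1, 1, 0)).fit().forecast(steps=26)
      print(forecast[:5])

    In the setup described above, the 12-h fluctuation component would be modeled and forecast in the same way and added to the trend forecast.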

  8. Seismicity of the Bering Glacier Region: Inferences from Relocations Using Data from STEEP

    NASA Astrophysics Data System (ADS)

    Panessa, A. L.; Pavlis, G. L.; Hansen, R. A.; Ruppert, N.

    2008-12-01

    We relocated earthquakes recorded from 1990 to 2007 in the area of the Bering Glacier in southeastern Alaska to test a hypothesis that faults in this area are linked to glaciers. We used waveform correlation to improve arrival time measurements for data from all broadband channels including all the data from the STEEP experiment. We used a novel form of correlation based on interactive array processing of common receiver gathers linked to a three-dimensional grid of control points. This procedure produced 8556 gathers that we processed interactively to produce improved arrival time estimates. The interactive procedure allowed us to select which events in each gather were sufficiently similar to warrant correlation. Redundancy in the result was resolved in a secondary correlation that aligned event stacks of the same station-event pair associated with multiple control points. This procedure yielded only 2240 waveforms that correlated and modified only a total of 524 arrivals in a total database of 12263 arrivals. The correlation procedure changed arrival times on 145 of 509 events in this database. Events with arrivals constrained by correlation were not clustered but were randomly distributed throughout the study area. We used a version of the Progressive Multiple Event Location (PMEL) that analyzed data at each control point to invert for relative locations and a set of path anomalies for each control point. We applied the PMEL procedure with different velocity models and constraints and compared the results to a HypoDD solution produced from the original arrival time data. The relocations are all significant improvements from the standard single-event, catalog locations. The relocations suggest the seismicity in this region is mostly linked to fold and thrust deformation in the Yakatat block. There is a suggestion of a north-dipping trend to much of the seismicity, but the dominant trend is a fairly diffuse cloud of events largely confined to the Yakatat block south of the Bagley Icefield. This is consistent with the recently published tectonic model by Berger et al. (2008).
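
    The core operation behind such correlation-based arrival-time refinement can be illustrated compactly (a generic sketch, not the STEEP/PMEL processing chain; the waveforms and sampling rate below are synthetic):

      import numpy as np

      def cc_delay_seconds(w1, w2, dt):
          """Delay (s) of w2 relative to w1, from the peak of the full
          cross-correlation of the normalized traces; positive = w2 arrives later."""
          w1 = (w1 - w1.mean()) / w1.std()
          w2 = (w2 - w2.mean()) / w2.std()
          cc = np.correlate(w1, w2, mode="full")
          lag_samples = np.argmax(cc) - (len(w2) - 1)
          return -lag_samples * dt

      # Hypothetical 50 Hz waveforms: w2 is w1 delayed by 4 samples (0.08 s) plus noise.
      rng = np.random.default_rng(3)
      n = np.arange(200)
      w1 = np.sin(2 * np.pi * 2.0 * n / 50.0) * np.exp(-n / 60.0)
      w2 = np.roll(w1, 4) + 0.05 * rng.normal(size=n.size)
      print(cc_delay_seconds(w1, w2, dt=1 / 50.0))   # approximately +0.08 s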

  9. Objective evaluation of female feet and leg joint conformation at time of selection and post first parity in swine.

    PubMed

    Stock, J D; Calderón Díaz, J A; Rothschild, M F; Mote, B E; Stalder, K J

    2018-06-09

    Feet and legs of replacement females were objectively evaluated at selection, i.e. approximately 150 days of age (n=319) and post first parity, i.e. any time after weaning of first litter and before 2nd parturition (n=277) to 1) compare feet and leg joint angle ranges between selection and post first parity; 2) identify feet and leg joint angle differences between selection and first three weeks of second gestation; 3) identify feet and leg joint angle differences between farms and gestation days during second gestation; and 4) obtain genetic variance components for conformation angles for the two time points measured. Angles for carpal joint (knee), metacarpophalangeal joint (front pastern), metatarsophalangeal joint (rear pastern), tarsal joint (hock), and rear stance were measured using image analysis software. Between selection and post first parity significant differences were observed for all joints measured (P < 0.05). Knee, front and rear pastern angles were less (more flexion), and hock angles were greater (less flexion) as age progressed (P < 0.05), while the rear stance pattern was less (feet further under center) at selection than post first parity (only including measures during first three weeks of second gestation). Only using post first parity leg conformation information, farm was a significant source of variation for front and rear pasterns and rear stance angle measurements (P < 0.05). Knee angle was less (more flexion) (P < 0.05) as gestation age progressed. Heritability estimates were low to moderate (0.04 - 0.35) for all traits measured across time points. Genetic correlations between the same joints at different time points were high (> 0.8) between the front leg joints and low (<0.2) between the rear leg joints. High genetic correlations between time points indicate that the trait can be considered the same at either time point, and low genetic correlations indicate that the trait at different time points should be considered as two separate traits. Minimal change in the front leg suggests that front leg conformation traits remain consistent between selection and post first parity, while larger changes in the rear leg indicate that rear leg conformation traits should be evaluated at multiple time periods.

  10. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods for analyzing the time trends in tropospheric ozone in eastern US. PMID:25642138
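
    As a point of comparison for the procedures described above, the classical Benjamini-Hochberg step-up rule is a conventional FDR baseline (this is not the paper's oracle or data-driven spatial procedure); a compact sketch:

      import numpy as np

      def benjamini_hochberg(pvals, alpha=0.05):
          """Boolean rejections controlling FDR at level alpha with the classical
          BH step-up rule (independent or PRDS p-values assumed)."""
          p = np.asarray(pvals)
          m = p.size
          order = np.argsort(p)
          thresholds = alpha * np.arange(1, m + 1) / m
          below = p[order] <= thresholds
          reject = np.zeros(m, dtype=bool)
          if below.any():
              k = np.max(np.nonzero(below)[0])      # largest i with p_(i) <= i*alpha/m
              reject[order[: k + 1]] = True
          return reject

      print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))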

  11. Divergence times in the termite genus Macrotermes (Isoptera: Termitidae).

    PubMed

    Brandl, R; Hyodo, F; Korff-Schmising, M von; Maekawa, K; Miura, T; Takematsu, Y; Matsumoto, T; Abe, T; Bagine, R; Kaib, M

    2007-10-01

    The evolution of fungus-growing termites is thought to have started in the African rain forests, with multiple invasions of semi-arid habitats as well as multiple invasions of the Oriental region. We used sequences of the mitochondrial COII gene and Bayesian dating to investigate the time frame of the evolution of Macrotermes, an important genus of fungus-growing termites. We found that the genus Macrotermes consists of at least 6 distantly related clades. Furthermore, the COII sequences suggested some cryptic diversity within the analysed African Macrotermes species. The dates calculated with the COII data, using a fossilized termite mound to calibrate the clock, were in good agreement with dates calculated with COI sequences using the split between Locusta and Chortippus as the calibration point, which supports the consistency of the calibration points. The clades from the Oriental region dated back to the early Tertiary. These estimates of divergence times suggested that Macrotermes invaded Asia during periods with humid climates. For Africa, many speciation events predated the Pleistocene and fall in the range of 6-23 million years ago. These estimates suggest that savannah-adapted African clades radiated with the spread of the semi-arid ecosystems during the Miocene. Apparently, events during the Pleistocene were of little importance for speciation within the genus Macrotermes. However, further investigations are necessary to increase the number of taxa for phylogenetic analysis.

  12. Pressure Ratio to Thermal Environments

    NASA Technical Reports Server (NTRS)

    Lopez, Pedro; Wang, Winston

    2012-01-01

    The pressure ratio to thermal environments (PRatTlE.pl) program is a Perl code that estimates heating at requested body-point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressures at the reference point and the requested point, taken from CFD (computational fluid dynamics) solutions. This innovation provides pressure-ratio-based thermal environments in an automated and traceable manner. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE can calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
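
    The underlying scaling relation is a one-liner; the sketch below expresses one plausible form of it (the actual tool is a Perl command-line code, and the numbers here are made up):

      def scaled_heating(q_ref, p_ref, p_bp):
          """Pressure-ratio scaling: heating at a requested body point is the
          reference-point heating times the ratio of local CFD pressures,
          q_bp = q_ref * (p_bp / p_ref)."""
          return q_ref * (p_bp / p_ref)

      # Hypothetical values: reference heating rate (W/cm^2) and CFD surface
      # pressures (Pa) at the reference point and at a requested body point.
      print(scaled_heating(q_ref=12.0, p_ref=4800.0, p_bp=3600.0))   # -> 9.0 W/cm^2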

  13. Somatosensory impairment and its association with balance limitation in people with multiple sclerosis.

    PubMed

    Jamali, Akram; Sadeghi-Demneh, Ebrahim; Fereshtenajad, Niloufar; Hillier, Susan

    2017-09-01

    Somatosensory impairments are common in multiple sclerosis. However, little data are available to characterize the nature and frequency of these problems in people with multiple sclerosis. To investigate the frequency of somatosensory impairments and identify any association with balance limitations in people with multiple sclerosis. The design was a prospective cross-sectional study, involving 82 people with multiple sclerosis and 30 healthy controls. Tactile and proprioceptive sensory acuity were measured using the Rivermead Assessment of Somatosensory Performance. Vibration duration was assessed using a tuning fork. Duration for the Timed Up and Go Test and reaching distance of the Functional Reach Test were measured to assess balance limitations. The normative range of sensory modalities was defined using cut-off points in the healthy participants. The multivariate linear regression was used to identify the significant predictors of balance in people with multiple sclerosis. Proprioceptive impairments (66.7%) were more common than tactile (60.8%) and vibration impairments (44.9%). Somatosensory impairments were more frequent in the lower limb (78.2%) than the upper limb (64.1%). All sensory modalities were significantly associated with the Timed Up and Go and Functional Reach tests (p<0.05). The Timed Up and Go test was independently predicted by the severity of the neurological lesion, Body Mass Index, ataxia, and tactile sensation (R2=0.58), whereas the Functional Reach test was predicted by the severity of the neurological lesion, lower limb strength, and vibration sense (R2=0.49). Somatosensory impairments are very common in people with multiple sclerosis. These impairments are independent predictors of balance limitation. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Multiple excitation nano-spot generation and confocal detection for far-field microscopy.

    PubMed

    Mondal, Partha Pratim

    2010-03-01

    An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.

  15. Multiple excitation nano-spot generation and confocal detection for far-field microscopy

    NASA Astrophysics Data System (ADS)

    Mondal, Partha Pratim

    2010-03-01

    An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.

  16. Bridges between multiple-point geostatistics and texture synthesis: Review and guidelines for future research

    NASA Astrophysics Data System (ADS)

    Mariethoz, Gregoire; Lefebvre, Sylvain

    2014-05-01

    Multiple-Point Simulations (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similarly to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. Both domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historic development and share similar concepts. These disciplines have however remained separated, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms present drastically increased computational efficiency, patterns reproduction and user control. At the same time, MPS developed ways to condition models to spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.

  17. A voxelwise approach to determine consensus regions-of-interest for the study of brain network plasticity.

    PubMed

    Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G

    2015-01-01

    Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.

  18. Evaluating health information technology: provider satisfaction with an HIV-specific, electronic clinical management and reporting system.

    PubMed

    Magnus, Manya; Herwehe, Jane; Andrews, Laura; Gibson, Laura; Daigrepont, Nathan; De Leon, Jordana M; Hyslop, Newton E; Styron, Steven; Wilcox, Ronald; Kaiser, Michael; Butler, Michael K

    2009-02-01

    Health information technology (HIT) offers the potential to improve care for persons living with HIV. Provider satisfaction with HIT is essential to realize benefits, yet its evaluation presents challenges. An HIV-specific, electronic clinical management and reporting system was implemented in Louisiana's eight HIV clinics, serving over 7500. A serial cross-sectional survey was administered at three points between April 2002 and July 2005; qualitative methods were used to augment quantitative. Multivariable methods were used to characterize provider satisfaction. The majority of the sample (n = 196; T1 = 105; T2 = 46; T3 = 45) was female (80.0%), between ages of 25 and 50 years (68.3%), frequent providers at that clinic (53.7% more than 4 days per week), and had been at the same clinic for a year or more (85.0%). Improvements in satisfaction were observed in patient tracking (p < 0.05), distribution of educational materials (p < 0.04), and belief that electronic systems improve care (p < 0.05). Provider self-reports of time to complete critical functions decreased for all tasks, two significantly so. Time (in minutes) to find current CD4 count decreased at each time point (mean 3.9 [standard deviation (SD) 5.8], 2.9 [2.3], 2.1 [2.6], p > 0.05), current viral load decreased at each time point (mean 4.0 [SD 5.6], 2.9 [2.5], 1.8 [2.6], p = 0.08), current antiretroviral status decreased at each time point (mean 3.9 [SD 4.7], 2.9 [3.7], 1.5 [1.1], p < 0.04), and history of antiretroviral use decreased at each time point (mean 15.1 [SD 21.9], 6.0 [5.4], 5.4 [7.2], p < 0.04). Time savings were realized, averaging 16.1 minutes per visit (p < 0.04). Providers were satisfied with HIT in multiple domains, and significant time savings were realized.

  19. The Relationship between Baseline Drinking Status, Peer Motivational Interviewing Microskills, and Drinking Outcomes in a Brief Alcohol Intervention for Matriculating College Students: A Replication

    ERIC Educational Resources Information Center

    Tollison, Sean J.; Mastroleo, Nadine R.; Mallett, Kimberly A.; Witkiewitz, Katie; Lee, Christine M.; Ray, Anne E.; Larimer, Mary E.

    2013-01-01

    The purpose of this study was to replicate and extend previous findings (Tollison et al., 2008) on the association between peer facilitator adherence to motivational interviewing (MI) microskills and college student drinking behavior. This study used a larger sample size, multiple follow-up time-points, and latent variable analyses allowing for…

  20. Commentary: Working Memory Training and ADHD--Where Does Its Potential Lie? Reflections on Chacko et al. (2014)

    ERIC Educational Resources Information Center

    Gathercole, Susan E.

    2014-01-01

    Chacko et al.'s investigation of the clinical utility of WM training to alleviate key symptoms of ADHD is timely and substantial, and marks a significant point in cognitive training research. Cogmed Working Memory Training (CWMT) involves intensive practice on multiple memory span tasks that increase in difficulty as performance improves with…

  1. A Comparison of Three IRT Approaches to Examinee Ability Change Modeling in a Single-Group Anchor Test Design

    ERIC Educational Resources Information Center

    Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim

    2014-01-01

    Typically a longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses and tests at multiple time points are constructed with unique…

  2. Electronic Imaging: Rochester Imaging Consortium, Abstracts of Research Topics Reported at the Annual Meeting of the Optical Society of America Held in San Jose, California on 3-8 November 1991

    DTIC Science & Technology

    1991-11-01

    [Scanned report-documentation-page fragment; recoverable abstract titles include "Image Deblurring for Multiple-Point Impulse Responses" (Bryan J. Stossel and Nicholas George) and "Backscatter from a Tilted Rough Disc" (Donald J. Schertler and Nicholas George).]

  3. The Buffering Effect of Mindfulness on Abusive Supervision and Creative Performance: A Social Cognitive Framework.

    PubMed

    Zheng, Xiaoming; Liu, Xin

    2017-01-01

    Our research draws upon social cognitive theory and incorporates a regulatory approach to investigate why and when abusive supervision influences employee creative performance. The analyses of data from multiple time points and multiple sources reveal that abusive supervision hampers employee self-efficacy at work, which in turn impairs employee creative performance. Further, employee mindfulness buffers the negative effects of abusive supervision on employee self-efficacy at work as well as the indirect effects of abusive supervision on employee creative performance. Our findings have implications for both theory and practice. Limitations and directions for future research are also discussed.

  4. The Buffering Effect of Mindfulness on Abusive Supervision and Creative Performance: A Social Cognitive Framework

    PubMed Central

    Zheng, Xiaoming; Liu, Xin

    2017-01-01

    Our research draws upon social cognitive theory and incorporates a regulatory approach to investigate why and when abusive supervision influences employee creative performance. The analyses of data from multiple time points and multiple sources reveal that abusive supervision hampers employee self-efficacy at work, which in turn impairs employee creative performance. Further, employee mindfulness buffers the negative effects of abusive supervision on employee self-efficacy at work as well as the indirect effects of abusive supervision on employee creative performance. Our findings have implications for both theory and practice. Limitations and directions for future research are also discussed. PMID:28955285

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ragan, Eric

    Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.

  6. Assisting People with Multiple Disabilities and Minimal Motor Behavior to Improve Computer Pointing Efficiency through a Mouse Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chang, Man-Ling; Shih, Ching-Tien

    2009-01-01

    This study evaluated whether two people with multiple disabilities and minimal motor behavior would be able to improve their pointing performance using finger poke ability with a mouse wheel through a Dynamic Pointing Assistive Program (DPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, changes a…

  7. Assisting People with Multiple Disabilities Improve Their Computer-Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chiu, Sheng-Kai; Chu, Chiung-Ling; Shih, Ching-Tien; Liao, Yung-Kun; Lin, Chia-Chen

    2010-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance using hand swing with a standard mouse through an Extended Dynamic Pointing Assistive Program (EDPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and changes a mouse into a precise…

  8. Trial of Minocycline in a Clinically Isolated Syndrome of Multiple Sclerosis.

    PubMed

    Metz, Luanne M; Li, David K B; Traboulsee, Anthony L; Duquette, Pierre; Eliasziw, Misha; Cerchiaro, Graziela; Greenfield, Jamie; Riddehough, Andrew; Yeung, Michael; Kremenchutzky, Marcelo; Vorobeychik, Galina; Freedman, Mark S; Bhan, Virender; Blevins, Gregg; Marriott, James J; Grand'Maison, Francois; Lee, Liesly; Thibault, Manon; Hill, Michael D; Yong, V Wee

    2017-06-01

    On the basis of encouraging preliminary results, we conducted a randomized, controlled trial to determine whether minocycline reduces the risk of conversion from a first demyelinating event (also known as a clinically isolated syndrome) to multiple sclerosis. During the period from January 2009 through July 2013, we randomly assigned participants who had had their first demyelinating symptoms within the previous 180 days to receive either 100 mg of minocycline, administered orally twice daily, or placebo. Administration of minocycline or placebo was continued until a diagnosis of multiple sclerosis was established or until 24 months after randomization, whichever came first. The primary outcome was conversion to multiple sclerosis (diagnosed on the basis of the 2005 McDonald criteria) within 6 months after randomization. Secondary outcomes included conversion to multiple sclerosis within 24 months after randomization and changes on magnetic resonance imaging (MRI) at 6 months and 24 months (change in lesion volume on T2-weighted MRI, cumulative number of new lesions enhanced on T1-weighted MRI ["enhancing lesions"], and cumulative combined number of unique lesions [new enhancing lesions on T1-weighted MRI plus new and newly enlarged lesions on T2-weighted MRI]). A total of 142 eligible participants underwent randomization at 12 Canadian multiple sclerosis clinics; 72 participants were assigned to the minocycline group and 70 to the placebo group. The mean age of the participants was 35.8 years, and 68.3% were women. The unadjusted risk of conversion to multiple sclerosis within 6 months after randomization was 61.0% in the placebo group and 33.4% in the minocycline group, a difference of 27.6 percentage points (95% confidence interval [CI], 11.4 to 43.9; P=0.001). After adjustment for the number of enhancing lesions at baseline, the difference in the risk of conversion to multiple sclerosis within 6 months after randomization was 18.5 percentage points (95% CI, 3.7 to 33.3; P=0.01); the unadjusted risk difference was not significant at the 24-month secondary outcome time point (P=0.06). All secondary MRI outcomes favored minocycline over placebo at 6 months but not at 24 months. Trial withdrawals and adverse events of rash, dizziness, and dental discoloration were more frequent among participants who received minocycline than among those who received placebo. The risk of conversion from a clinically isolated syndrome to multiple sclerosis was significantly lower with minocycline than with placebo over 6 months but not over 24 months. (Funded by the Multiple Sclerosis Society of Canada; ClinicalTrials.gov number, NCT00666887.).

  9. Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    NASA Technical Reports Server (NTRS)

    Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.

    2017-01-01

    Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters) and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
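
    The fitness function mentioned above is based on a modified Hausdorff distance between extracted feature sets. The exact variant used by the authors is not reproduced here; a minimal sketch of the widely used modified Hausdorff distance (the larger of the two directed average point-to-set distances) between two sets of detected crater centres, with illustrative synthetic coordinates, is:

      import numpy as np

      def modified_hausdorff(A, B):
          """Modified Hausdorff distance between point sets A (N,2) and B (M,2).

          Averages the point-to-set distances in each direction and returns the
          larger of the two averages, which is less outlier-sensitive than the
          classical (max-min) Hausdorff distance.
          """
          A = np.asarray(A, dtype=float)
          B = np.asarray(B, dtype=float)
          d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
          d_ab = d.min(axis=1).mean()  # average distance from each point of A to B
          d_ba = d.min(axis=0).mean()  # average distance from each point of B to A
          return max(d_ab, d_ba)

      # Synthetic example: crater centres detected in two images to be registered.
      craters_img1 = np.array([[10.0, 12.0], [40.5, 33.2], [75.1, 20.4]])
      craters_img2 = np.array([[11.0, 12.5], [41.0, 32.8], [74.0, 21.0], [90.0, 5.0]])
      print(modified_hausdorff(craters_img1, craters_img2))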

  10. Theodore E. Woodward Award: Spare Me the Powerpoint and Bring Back the Medical Textbook

    PubMed Central

    Southwick, Frederick S.

    2007-01-01

    A tutorial for 4th year medical students revealed absent long-term retention of microbiology and infectious disease facts taught during the 2nd year. Students were suffering from the Zeigarnik effect, the loss of memory after completion of a task. PowerPoint lectures and PowerPoint notes combined with multiple-choice questions may have encouraged this outcome; this teaching format was also associated with minimal use of the course textbook. During the subsequent year, active learning techniques, Just-in-Time Teaching (JiTT) and Peer Instruction (PI) were used, and instructors specifically taught from the textbook. Essays and short answer questions were combined with multiple-choice questions to encourage understanding and recall. Performance on the National Board Shelf exam improved from the 59th percentile (2002–2004) to the 83rd percentile (2005), and textbook use increased from 1.6% to 79%. This experience demonstrates that strategies incorporating active learning and textbook use correlate with striking improvement in medical student performance. PMID:18528495

  11. Coevolution of CRISPR bacteria and phage in 2 dimensions

    NASA Astrophysics Data System (ADS)

    Han, Pu; Deem, Michael

    2014-03-01

    CRISPR (clustered regularly interspaced short palindromic repeats) is a newly discovered adaptive, heritable immune system of prokaryotes. It can prevent infection of prokaryotes by phage. Most bacteria and almost all archaea have CRISPR. The CRISPR system incorporates short nucleotide sequences from viruses. These incorporated sequences provide a historical record of the host and predator coevolution. We simulate the coevolution of bacteria and phage in 2 dimensions. Each phage has multiple proto-spacers that the bacteria can incorporate. Each bacterium can store multiple spacers in its CRISPR. Phages can escape recognition by the CRISPR system via point mutation or recombination. We will discuss the different evolutionary consequences of point mutation or recombination on the coevolution of bacteria and phage. We will also discuss an intriguing "dynamic phase transition" in the number of phage as a function of time and mutation rate. We will show that due to the arms race between phages and bacteria, the frequency of spacers and proto-spacers in a population can oscillate quite rapidly.

  12. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data are acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject, and environmental sensor data or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
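
    The adaptive combination step described above weights each overlapping measurement by its estimated reliability. The device's actual reliability model is not given in the abstract; a minimal sketch of such a confidence-weighted fusion for the temperature of a single surface point, with an assumed weighting by incidence angle and range, is:

      import numpy as np

      def fuse_temperatures(temps, angles_deg, distances_m):
          """Confidence-weighted fusion of repeated thermal readings of one point.

          Assumed reliability model (illustrative only): readings taken closer to
          normal incidence and at shorter range receive larger weights.
          """
          temps = np.asarray(temps, dtype=float)
          weights = np.cos(np.radians(angles_deg)) / np.asarray(distances_m, float) ** 2
          return float(np.sum(weights * temps) / np.sum(weights))

      # Three observations of the same skin patch from different viewpoints.
      print(fuse_temperatures([36.4, 36.9, 36.6], [10, 45, 30], [0.8, 1.5, 1.0]))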

  13. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    PubMed Central

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104

  14. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
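
    The guided dynamic programming step in both versions of this work searches for a low-cost separation boundary between automatically selected start and end points. The guidance terms and the exact cost definition of the published scheme are not reproduced here; as a rough sketch of the underlying idea, a classical row-by-row dynamic programming search for a minimum-cost vertical path through a cost image (for example, one derived from CT intensities in the connection region) looks like this:

      import numpy as np

      def min_cost_vertical_path(cost):
          """Minimum-cost top-to-bottom path through a 2-D cost image.

          The path picks one column per row and may move at most one column left
          or right between consecutive rows. Returns one column index per row.
          """
          rows, cols = cost.shape
          acc = cost.astype(float).copy()
          back = np.zeros((rows, cols), dtype=int)
          for r in range(1, rows):
              for c in range(cols):
                  lo, hi = max(0, c - 1), min(cols, c + 2)
                  prev = int(np.argmin(acc[r - 1, lo:hi])) + lo
                  back[r, c] = prev
                  acc[r, c] += acc[r - 1, prev]
          path = [int(np.argmin(acc[-1]))]          # best end column
          for r in range(rows - 1, 0, -1):          # trace the path back to the top
              path.append(int(back[r, path[-1]]))
          return path[::-1]

      # Toy cost image with a low-cost band down column 3 (the "separation" line).
      cost = np.ones((6, 7))
      cost[:, 3] = 0.1
      print(min_cost_vertical_path(cost))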

  15. Comparing Matchplay Characteristics and Physical Demands of Junior and Professional Tennis Athletes in the Era of Big Data.

    PubMed

    Kovalchik, Stephanie A; Reid, Machar

    2017-12-01

    Differences in the competitive performance characteristics of junior and professional tennis players are not well understood. The present study provides a comprehensive comparative analysis of junior and professional matchplay. The study utilized multiple large-scale datasets covering match, point, and shot outcomes over multiple years of competition. Regression analysis was used to identify differences between junior and professional matchplay. Top professional men and women were found to play significantly more matches, sets, and games compared to junior players of an equivalent ranking. Professional players had a greater serve advantage, men winning 4 and women winning 2 additional percentage points on serve compared to juniors. Clutch ability in break point conversion was 6 to 8 percentage points greater for junior players. In general, shots were more powerful and more accurate at the professional level with the largest differences observed for male players on serve. Serving to the center of the court was more than two times more common for junior players on first serve. While male professionals performed 50% more total work in a Grand Slam match than juniors, junior girls performed 50% more work than professional women. Understanding how competitiveness, play demands, and the physical characteristics of shots differ between junior and professional tennis players can help set realistic expectations and developmentally appropriate training for transitioning players.

  16. Hemispherical breathing mode speaker using a dielectric elastomer actuator.

    PubMed

    Hosoya, Naoki; Baba, Shun; Maeda, Shingo

    2015-10-01

    Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.

  17. Safe driving and executive functions in healthy middle-aged drivers.

    PubMed

    León-Domínguez, Umberto; Solís-Marcos, Ignacio; Barrio-Álvarez, Elena; Barroso Y Martín, Juan Manuel; León-Carrión, José

    2017-01-01

    The introduction of the point system driver's license in several European countries could offer a valid framework for evaluating driving skills. This is the first study to use this framework to assess the functional integrity of executive functions in middle-aged drivers with full points, partial points or no points on their driver's license (N = 270). The purpose of this study is to find differences in executive functions that could be determinants in safe driving. Cognitive tests were used to assess attention processes, processing speed, planning, cognitive flexibility, and inhibitory control. Analyses of covariance (ANCOVAs) were used for group comparisons while adjusting for education level. The Bonferroni method was used to correct for multiple comparisons. Overall, drivers with full points on their license showed better scores than the other two groups. In particular, significant differences were found in reaction times on Simple and Conditioned Attention tasks (both p-values < 0.001) and in number of type-III errors on the Tower of Hanoi task (p = 0.026). Differences in reaction time on attention tasks could serve as neuropsychological markers for safe driving. Further analysis should be conducted in order to determine the behavioral impact of impaired executive functioning on driving ability.
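
    The Bonferroni method used for the multiple comparisons above simply divides the significance level by the number of tests (equivalently, it multiplies each p-value by that number and caps the result at 1). A minimal sketch with illustrative p-values:

      def bonferroni(p_values, alpha=0.05):
          """Return Bonferroni-adjusted p-values and the adjusted alpha threshold."""
          m = len(p_values)
          adjusted = [min(1.0, p * m) for p in p_values]
          return adjusted, alpha / m

      # Three hypothetical group comparisons; a raw p-value is significant if it
      # falls below alpha / m (equivalently, if its adjusted value is below alpha).
      adjusted, threshold = bonferroni([0.004, 0.026, 0.20])
      print(adjusted, threshold)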

  18. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
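
    The colour scheme described above maps echo amplitude to the Red channel, echo width to Green, and normalized height above the DTM to Blue, after scaling each attribute into the 0-255 channel range. The OPALS batch script itself is not reproduced here; a generic sketch of the same mapping (the scaling ranges below are illustrative placeholders, not the authors' parameters) is:

      import numpy as np

      def scale_to_byte(values, vmin, vmax):
          """Linearly scale an attribute to 0-255, clipping values outside [vmin, vmax]."""
          v = np.clip((np.asarray(values, float) - vmin) / (vmax - vmin), 0.0, 1.0)
          return (v * 255).astype(np.uint8)

      def attributes_to_rgb(amplitude, echo_width, height_above_dtm):
          """Pack three per-point attributes into per-point RGB colours."""
          r = scale_to_byte(amplitude, 0.0, 300.0)         # assumed amplitude range
          g = scale_to_byte(echo_width, 2.0, 8.0)          # assumed echo width range (ns)
          b = scale_to_byte(height_above_dtm, 0.0, 30.0)   # assumed height range (m)
          return np.stack([r, g, b], axis=1)

      # Three example points: bare ground, low vegetation, and a tall tree.
      print(attributes_to_rgb([250.0, 180.0, 60.0], [3.0, 4.5, 6.5], [0.1, 0.8, 22.0]))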

  19. Synthetic observations of protostellar multiple systems

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2018-04-01

    Observations of protostars are often compared with synthetic observations of models in order to infer the underlying physical properties of the protostars. The majority of these models have a single protostar, attended by a disc and an envelope. However, observational and numerical evidence suggests that a large fraction of protostars form as multiple systems. This means that fitting models of single protostars to observations may be inappropriate. We produce synthetic observations of protostellar multiple systems undergoing realistic, non-continuous accretion. These systems consist of multiple protostars with episodic luminosities, embedded self-consistently in discs and envelopes. We model the gas dynamics of these systems using smoothed particle hydrodynamics and we generate synthetic observations by post-processing the snapshots using the SPAMCART Monte Carlo radiative transfer code. We present simulation results of three model protostellar multiple systems. For each of these, we generate 4 × 10^4 synthetic spectra at different points in time and from different viewing angles. We propose a Bayesian method, using similar calculations to those presented here, but in greater numbers, to infer the physical properties of protostellar multiple systems from observations.

  20. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1987-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  1. A multiple-time-scale turbulence model based on variable partitioning of the turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  2. A Paper-Based Device for Performing Loop-Mediated Isothermal Amplification with Real-Time Simultaneous Detection of Multiple DNA Targets.

    PubMed

    Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon

    2017-01-01

    Paper-based diagnostic devices have many advantages as one of multiple diagnostic test platforms for point-of-care (POC) testing because they offer simplicity, portability, and cost-effectiveness. However, despite the high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT on a paper platform has not progressed as much as other assay types, because nucleic acid amplification reactions require specific conditions such as pH, buffer components, and temperature, and are subject to inhibition arising from the technical differences of paper-based devices. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised a simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. This proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10^2-10^5 copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices to a simple and affordable paper system approach with great potential for realizing a paper-based NAT system for POC testing.

  3. A Paper-Based Device for Performing Loop-Mediated Isothermal Amplification with Real-Time Simultaneous Detection of Multiple DNA Targets

    PubMed Central

    Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon

    2017-01-01

    Paper-based diagnostic devices have many advantages as one of multiple diagnostic test platforms for point-of-care (POC) testing because they offer simplicity, portability, and cost-effectiveness. However, despite the high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT on a paper platform has not progressed as much as other assay types, because nucleic acid amplification reactions require specific conditions such as pH, buffer components, and temperature, and are subject to inhibition arising from the technical differences of paper-based devices. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised a simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. This proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10^2-10^5 copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices to a simple and affordable paper system approach with great potential for realizing a paper-based NAT system for POC testing. PMID:28740546

  4. Time-dependent summary receiver operating characteristics for meta-analysis of prognostic studies.

    PubMed

    Hattori, Satoshi; Zhou, Xiao-Hua

    2016-11-20

    Prognostic studies are widely conducted to examine whether biomarkers are associated with patients' prognoses and play important roles in medical decisions. Because findings from one prognostic study may be very limited, meta-analyses may be useful to obtain sound evidence. However, prognostic studies are often analyzed by relying on a study-specific cut-off value, which can lead to difficulty in applying the standard meta-analysis techniques. In this paper, we propose two methods to estimate a time-dependent version of the summary receiver operating characteristics curve for meta-analyses of prognostic studies with a right-censored time-to-event outcome. We introduce a bivariate normal model for the pair of time-dependent sensitivity and specificity and propose a method to form inferences based on summary statistics reported in published papers. This method provides a valid inference asymptotically. In addition, we consider a bivariate binomial model. To draw inferences from this bivariate binomial model, we introduce a multiple imputation method. The multiple imputation is found to be approximately proper multiple imputation, and thus the standard Rubin's variance formula is justified from a Bayesian viewpoint. Our simulation study and application to a real dataset revealed that both methods work well with a moderate or large number of studies and that the bivariate binomial model coupled with the multiple imputation outperforms the bivariate normal model with a small number of studies. Copyright © 2016 John Wiley & Sons, Ltd.

  5. THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES

    PubMed Central

    Song, Chi; Min, Xiaoyi; Zhang, Heping

    2016-01-01

    The chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed, using scan statistics that are computationally intensive and designed to detect either common or rare change-points. In this paper, we propose a novel multiple sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
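
    SaRa screens each position with a local diagnostic statistic computed from fixed-width windows on either side and then ranks the local maxima. The thresholding and multi-sample combination steps of the published algorithm are not reproduced here; a rough single-sample sketch of the screening stage, with an illustrative window size, is:

      import numpy as np

      def local_diagnostic(x, h):
          """|mean of the next h points minus mean of the previous h points| at each index."""
          x = np.asarray(x, dtype=float)
          D = np.zeros(len(x))
          for i in range(h, len(x) - h):
              D[i] = abs(x[i:i + h].mean() - x[i - h:i].mean())
          return D

      def screen_change_points(x, h=10, top_k=5):
          """Rank local maxima of the diagnostic statistic as candidate change-points."""
          D = local_diagnostic(x, h)
          candidates = [i for i in range(1, len(D) - 1)
                        if D[i] > 0 and D[i] >= D[i - 1] and D[i] >= D[i + 1]]
          return sorted(candidates, key=lambda i: D[i], reverse=True)[:top_k]

      # Toy intensity sequence with mean shifts near positions 100 and 180.
      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 80), rng.normal(0, 1, 120)])
      print(screen_change_points(x, h=15))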

  6. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed on the same content using a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, and MPEG-AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of the results are presented as a predictor of the performance of video compression schemes.

  7. Improving Planck calibration by including frequency-dependent relativistic corrections

    NASA Astrophysics Data System (ADS)

    Quartin, Miguel; Notari, Alessio

    2015-09-01

    The Planck satellite detectors are calibrated in the 2015 release using the "orbital dipole", which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect also has relativistic time-dependent corrections of relative magnitude 10^-3, due to coupling with the "solar dipole" (the motion of the Sun compared to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since currently Planck calibration errors are dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  8. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with multiplication by the inverse. Computing the inverse exactly would require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
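
    The core trick of replacing the division in the projection with multiplication by an approximated reciprocal can be illustrated in a few lines. The bit widths and the particular linear and quadratic seeds used in the thesis are not reproduced here; the sketch below uses a generic Q16.16 fixed-point format and a Newton-Raphson refinement of the reciprocal, assuming the divisor has been normalized into [0.5, 1.0):

      FRAC_BITS = 16
      ONE = 1 << FRAC_BITS                      # Q16.16 fixed-point representation of 1.0

      def to_fixed(x: float) -> int:
          return int(round(x * ONE))

      def to_float(x: int) -> float:
          return x / ONE

      def fixed_mul(a: int, b: int) -> int:
          return (a * b) >> FRAC_BITS

      def fixed_reciprocal(x: int, iterations: int = 3) -> int:
          """Reciprocal of a Q16.16 value assumed to lie in [0.5, 1.0), with no division.

          Seeds with the linear approximation 48/17 - (32/17)*x and refines it with
          Newton-Raphson iterations r <- r * (2 - x*r).
          """
          r = to_fixed(48.0 / 17.0) - fixed_mul(to_fixed(32.0 / 17.0), x)
          for _ in range(iterations):
              r = fixed_mul(r, 2 * ONE - fixed_mul(x, r))
          return r

      # Replace y / x by y * (1/x): here 3.2 / 0.73, computed entirely with integers.
      y, x = to_fixed(3.2), to_fixed(0.73)
      print(to_float(fixed_mul(y, fixed_reciprocal(x))), 3.2 / 0.73)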

  9. Reentrainment of the circadian pacemaker during jet lag: East-west asymmetry and the effects of north-south travel.

    PubMed

    Diekman, Casey O; Bose, Amitabha

    2018-01-21

    The normal alignment of circadian rhythms with the 24-h light-dark cycle is disrupted after rapid travel between home and destination time zones, leading to sleep problems, indigestion, and other symptoms collectively known as jet lag. Using mathematical and computational analysis, we study the process of reentrainment to the light-dark cycle of the destination time zone in a model of the human circadian pacemaker. We calculate the reentrainment time for travel between any two points on the globe at any time of the day and year. We construct one-dimensional entrainment maps to explain several properties of jet lag, such as why most people experience worse jet lag after traveling east than west. We show that this east-west asymmetry depends on the endogenous period of the traveler's circadian clock as well as daylength. Thus the critical factor is not simply whether the endogenous period is greater than or less than 24 h as is commonly assumed. We show that the unstable fixed point of an entrainment map determines whether a traveler reentrains through phase advances or phase delays, providing an understanding of the threshold that separates orthodromic and antidromic modes of reentrainment. Contrary to the conventional wisdom that jet lag only occurs after east-west travel across multiple time zones, we predict that the change in daylength encountered during north-south travel can cause jet lag even when no time zones are crossed. Our techniques could be used to provide advice to travelers on how to minimize jet lag on trips involving multiple destinations and a combination of transmeridian and translatitudinal travel. Copyright © 2017 Elsevier Ltd. All rights reserved.
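
    The entrainment maps in this work are derived from a detailed circadian pacemaker model; as a much simpler, purely illustrative stand-in (not the authors' model), one can iterate a one-dimensional phase map and count how many days the remaining misalignment takes to shrink below a tolerance. The drift and correction parameters below are assumptions chosen only to show the eastward/westward asymmetry and the wrap-around that produces antidromic reentrainment:

      def reentrainment_days(shift_h, tau=24.2, k_advance=0.25, k_delay=0.35,
                             tol=1.0, max_days=60):
          """Days until a toy 1-D phase map realigns after a time-zone shift.

          phi is the remaining shift (hours) the internal clock must make; phi > 0
          means an advance is needed (eastward travel). Each day the clock drifts
          by (tau - 24) h and light corrects a fraction of the remaining shift.
          """
          phi = float(shift_h)
          for day in range(max_days):
              if abs(phi) < tol:
                  return day
              k = k_advance if phi > 0 else k_delay
              phi = phi + (tau - 24.0) - k * phi
              phi = (phi + 12.0) % 24.0 - 12.0   # wrap so large shifts may re-entrain the other way
          return max_days

      # Eastward versus westward travel across 8 time zones.
      print(reentrainment_days(+8), reentrainment_days(-8))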

  10. Smoked cannabis for spasticity in multiple sclerosis: a randomized, placebo-controlled trial.

    PubMed

    Corey-Bloom, Jody; Wolfson, Tanya; Gamst, Anthony; Jin, Shelia; Marcotte, Thomas D; Bentley, Heather; Gouaux, Ben

    2012-07-10

    Spasticity is a common and poorly controlled symptom of multiple sclerosis. Our objective was to determine the short-term effect of smoked cannabis on this symptom. We conducted a placebo-controlled, crossover trial involving adult patients with multiple sclerosis and spasticity. We recruited participants from a regional clinic or by referral from specialists. We randomly assigned participants to either the intervention (smoked cannabis, once daily for three days) or control (identical placebo cigarettes, once daily for three days). Each participant was assessed daily before and after treatment. After a washout interval of 11 days, participants crossed over to the opposite group. Our primary outcome was change in spasticity as measured by patient score on the modified Ashworth scale. Our secondary outcomes included patients' perception of pain (as measured using a visual analogue scale), a timed walk and changes in cognitive function (as measured by patient performance on the Paced Auditory Serial Addition Test), in addition to ratings of fatigue. Thirty-seven participants were randomized at the start of the study, 30 of whom completed the trial. Treatment with smoked cannabis resulted in a reduction in patient scores on the modified Ashworth scale by an average of 2.74 points more than placebo (p < 0.0001). In addition, treatment reduced pain scores on a visual analogue scale by an average of 5.28 points more than placebo (p = 0.008). Scores for the timed walk did not differ significantly between treatment and placebo (p = 0.2). Scores on the Paced Auditory Serial Addition Test decreased by 8.67 points more with treatment than with placebo (p = 0.003). No serious adverse events occurred during the trial. Smoked cannabis was superior to placebo in symptom and pain reduction in participants with treatment-resistant spasticity. Future studies should examine whether different doses can result in similar beneficial effects with less cognitive impact.

  11. Smoked cannabis for spasticity in multiple sclerosis: a randomized, placebo-controlled trial

    PubMed Central

    Corey-Bloom, Jody; Wolfson, Tanya; Gamst, Anthony; Jin, Shelia; Marcotte, Thomas D.; Bentley, Heather; Gouaux, Ben

    2012-01-01

    Background: Spasticity is a common and poorly controlled symptom of multiple sclerosis. Our objective was to determine the short-term effect of smoked cannabis on this symptom. Methods: We conducted a placebo-controlled, crossover trial involving adult patients with multiple sclerosis and spasticity. We recruited participants from a regional clinic or by referral from specialists. We randomly assigned participants to either the intervention (smoked cannabis, once daily for three days) or control (identical placebo cigarettes, once daily for three days). Each participant was assessed daily before and after treatment. After a washout interval of 11 days, participants crossed over to the opposite group. Our primary outcome was change in spasticity as measured by patient score on the modified Ashworth scale. Our secondary outcomes included patients’ perception of pain (as measured using a visual analogue scale), a timed walk and changes in cognitive function (as measured by patient performance on the Paced Auditory Serial Addition Test), in addition to ratings of fatigue. Results: Thirty-seven participants were randomized at the start of the study, 30 of whom completed the trial. Treatment with smoked cannabis resulted in a reduction in patient scores on the modified Ashworth scale by an average of 2.74 points more than placebo (p < 0.0001). In addition, treatment reduced pain scores on a visual analogue scale by an average of 5.28 points more than placebo (p = 0.008). Scores for the timed walk did not differ significantly between treatment and placebo (p = 0.2). Scores on the Paced Auditory Serial Addition Test decreased by 8.67 points more with treatment than with placebo (p = 0.003). No serious adverse events occurred during the trial. Interpretation: Smoked cannabis was superior to placebo in symptom and pain reduction in participants with treatment-resistant spasticity. Future studies should examine whether different doses can result in similar beneficial effects with less cognitive impact. PMID:22586334

  12. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete-time analogue of the KPP equation, thus translating Bramson's work on the KPP equation into the discrete-time setting. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.
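
    For reference, the Fisher-KPP equation referred to above, in one standard normalization (conventions differ; Bramson's analysis, for instance, carries a factor 1/2 on the diffusion term), reads

      \[
        \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u(1 - u), \qquad 0 \le u \le 1,
      \]

    and a discrete-time analogue replaces the time derivative by a one-step recursion u_{n+1} = Q[u_n] for a suitable operator Q; the specific recursion analysed in the paper is not reproduced here.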

  13. Terrain classification in navigation of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1991-03-01

    In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piece-wise modified to accommodate known and inferred moving obstacles, and includes attention to time-varying multiple subgoals which may influence a section of path at a time after the robot has begun traversing that planned path.

  14. Atom optics in the time domain

    NASA Astrophysics Data System (ADS)

    Arndt, M.; Szriftgiser, P.; Dalibard, J.; Steane, A. M.

    1996-05-01

    Atom-optics experiments are presented using a time-modulated evanescent light wave as an atomic mirror in the trampoline configuration, i.e., perpendicular to the direction of the atomic free fall. This modulated mirror is used to accelerate cesium atoms, to focus their trajectories, and to apply a "multiple lens" to separately focus different velocity classes of atoms originating from a point source. We form images of a simple two-slit object to show the resolution of the device. The experiments are modelled by a general treatment analogous to classical ray optics.

  15. Competitive Reporter Monitored Amplification (CMA) - Quantification of Molecular Targets by Real Time Monitoring of Competitive Reporter Hybridization

    PubMed Central

    Ullrich, Thomas; Ermantraut, Eugen; Schulz, Torsten; Steinmetzer, Katrin

    2012-01-01

    Background: State of the art molecular diagnostic tests are based on the sensitive detection and quantification of nucleic acids. However, currently established diagnostic tests are characterized by elaborate and expensive technical solutions hindering the development of simple, affordable and compact point-of-care molecular tests. Methodology and Principal Findings: The described competitive reporter monitored amplification allows the simultaneous amplification and quantification of multiple nucleic acid targets by polymerase chain reaction. Target quantification is accomplished by real-time detection of amplified nucleic acids utilizing a capture probe array and specific reporter probes. The reporter probes are fluorescently labeled oligonucleotides that are complementary to the respective capture probes on the array and to the respective sites of the target nucleic acids in solution. Capture probes and amplified target compete for reporter probes. Increasing amplicon concentration leads to decreased fluorescence signal at the respective capture probe position on the array which is measured after each cycle of amplification. In order to observe reporter probe hybridization in real-time without any additional washing steps, we have developed a mechanical fluorescence background displacement technique. Conclusions and Significance: The system presented in this paper enables simultaneous detection and quantification of multiple targets. Moreover, the presented fluorescence background displacement technique provides a generic solution for real time monitoring of binding events of fluorescently labelled ligands to surface immobilized probes. With the model assay for the detection of human immunodeficiency virus type 1 and 2 (HIV 1/2), we have been able to observe the amplification kinetics of five targets simultaneously and accommodate two additional hybridization controls with a simple instrument set-up. The ability to accommodate multiple controls and targets into a single assay and to perform the assay on simple and robust instrumentation is a prerequisite for the development of novel molecular point of care tests. PMID:22539973

  16. Gene selection with multiple ordering criteria.

    PubMed

    Chen, James J; Tsai, Chen-An; Tzeng, Shengli; Chen, Chun-Houh

    2007-03-05

    A microarray study may select different differentially expressed gene sets because of different selection criteria. For example, the fold-change and p-value are two commonly known criteria to select differentially expressed genes under two experimental conditions. These two selection criteria often result in incompatible selected gene sets. Also, in a two-factor, say, treatment by time experiment, the investigator may be interested in one gene list that responds to both treatment and time effects. We propose three layer ranking algorithms, point-admissible, line-admissible (convex), and Pareto, to provide a preference gene list from multiple gene lists generated by different ranking criteria. Using the public colon data as an example, the layer ranking algorithms are applied to the three univariate ranking criteria, fold-change, p-value, and frequency of selections by the SVM-RFE classifier. A simulation experiment shows that for experiments with small or moderate sample sizes (less than 20 per group) and detecting a 4-fold change or less, the two-dimensional (p-value and fold-change) convex layer ranking selects differentially expressed genes with generally lower FDR and higher power than the standard p-value ranking. Three applications are presented. The first application illustrates a use of the layer rankings to potentially improve predictive accuracy. The second application illustrates an application to a two-factor experiment involving two dose levels and two time points. The layer rankings are applied to selecting differentially expressed genes relating to the dose and time effects. In the third application, the layer rankings are applied to a benchmark data set consisting of three dilution concentrations to provide a ranking system from a long list of differentially expressed genes generated from the three dilution concentrations. The layer ranking algorithms are useful to help investigators in selecting the most promising genes from multiple gene lists generated by different filter, normalization, or analysis methods for various objectives.
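
    Of the three layer-ranking algorithms, the Pareto version is the simplest to sketch: genes are repeatedly peeled off in non-dominated layers with respect to the ordering criteria, here taking larger absolute fold-change and smaller p-value as jointly preferred. The code below is a generic illustration with made-up values, not the authors' implementation:

      def pareto_layers(genes):
          """Peel genes into Pareto layers using (|fold-change| high, p-value low).

          genes: list of (name, abs_fold_change, p_value) tuples.
          Returns a list of layers; layer 0 is the non-dominated front.
          """
          remaining = list(genes)
          layers = []
          while remaining:
              front = [g for g in remaining
                       if not any((h[1] >= g[1] and h[2] <= g[2]) and
                                  (h[1] > g[1] or h[2] < g[2])
                                  for h in remaining)]
              layers.append(front)
              remaining = [g for g in remaining if g not in front]
          return layers

      genes = [("A", 3.1, 0.001), ("B", 2.0, 0.0005), ("C", 2.5, 0.010), ("D", 1.2, 0.040)]
      for rank, layer in enumerate(pareto_layers(genes)):
          print(rank, [name for name, _, _ in layer])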

  17. Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation

    NASA Astrophysics Data System (ADS)

    Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.

    1998-01-01

    A method for the fully computerized determination and optimization of positions of target points and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial-and-error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of the target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time, dose peaks inside the target volume are minimized and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization, a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.

  18. Comparison of Interferometric Time-Series Analysis Techniques with Implications for Future Mission Design

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.

    2006-12-01

    Principal contributors to the noise in differential SAR interferograms are temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 the critical baseline have substantial geometrical decorrelation for distributed targets. Short baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit geometrical decorrelation associated with large baselines. In this approach, interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase/pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-Band) substantially improve the ability to unwrap the phase correctly by directly reducing both interferometric phase amplitude and temporal decorrelation.
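
    In the short-baseline approach mentioned above, each unwrapped interferogram constrains the sum of incremental displacements between its two acquisition dates, and the whole set of pairs is inverted by least squares for the displacement history of a pixel. A rough sketch of that inversion, assuming the phase has already been unwrapped and converted to displacement, with synthetic acquisition indices and values:

      import numpy as np

      def invert_short_baseline(pairs, displacements, n_scenes):
          """Least-squares inversion of short-baseline pairs for a displacement time series.

          pairs: list of (i, j) scene indices, i < j, one per interferogram.
          displacements: measured displacement of each interferogram (same units as output).
          Returns cumulative displacement at each scene, referenced to scene 0.
          """
          A = np.zeros((len(pairs), n_scenes - 1))
          for row, (i, j) in enumerate(pairs):
              A[row, i:j] = 1.0        # each pair sums the increments between scenes i and j
          increments, *_ = np.linalg.lstsq(A, np.asarray(displacements, float), rcond=None)
          return np.concatenate([[0.0], np.cumsum(increments)])

      # Five acquisitions linked by six short-baseline interferograms (synthetic, mm).
      pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (2, 4)]
      displacements = [-3.1, -2.9, -3.2, -3.0, -6.0, -6.1]
      print(invert_short_baseline(pairs, displacements, n_scenes=5))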

  19. D-dimer levels over time and the risk of recurrent venous thromboembolism: an update of the Vienna prediction model.

    PubMed

    Eichinger, Sabine; Heinze, Georg; Kyrle, Paul A

    2014-01-02

    Patients with unprovoked venous thromboembolism (VTE) can be stratified according to their recurrence risk based on their sex, the VTE location, and D-dimer measured 3 weeks after anticoagulation by the Vienna Prediction Model. We aimed to expand the model to also assess the recurrence risk from later time points onward. Five hundred and fifty-three patients with a first VTE were followed for a median of 68 months. We excluded patients with VTE provoked by a transient risk factor or female hormone intake, with a natural inhibitor deficiency, the lupus anticoagulant, or cancer. The study end point was recurrent VTE, which occurred in 150 patients. D-dimer levels did not substantially increase over time. Subdistribution hazard ratios (95% confidence intervals) dynamically changed from 2.43 (1.57 to 3.77) at 3 weeks to 2.27 (1.48 to 3.48), 1.98 (1.30 to 3.02), and 1.73 (1.11 to 2.69) at 3, 9, and 15 months in men versus women, from 1.84 (1.00 to 3.43) to 1.68 (0.91 to 3.10), 1.49 (0.79 to 2.81), and 1.44 (0.76 to 2.72) in patients with proximal deep vein thrombosis or pulmonary embolism compared with calf vein thrombosis, and from 1.30 (1.07 to 1.58) to 1.27 (1.06 to 1.51), 1.20 (1.02 to 1.41), and 1.13 (0.95 to 1.36) per doubling of D-dimer. Using a dynamic landmark competing risks regression approach, we generated nomograms and a web-based calculator to calculate risk scores and recurrence rates at multiple time points after anticoagulation. Risk of recurrent VTE after discontinuation of anticoagulation can be predicted from multiple random time points by integrating the patient's sex, location of first VTE, and serial D-dimer measurements.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rycroft, Chris H.; Bazant, Martin Z.

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  1. Asymmetric collapse by dissolution or melting in a uniform flow

    PubMed Central

    Bazant, Martin Z.

    2016-01-01

    An advection–diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton–Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). The model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems. PMID:26997890
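
    The collapse point is located as a root of a non-analytic function of a complex variable using a generalized Newton-Raphson iteration; because the function is not analytic, the iteration treats the real and imaginary parts as a two-dimensional system. The actual function built from the flow velocity and Laurent coefficients is not reproduced here; a generic sketch with a finite-difference Jacobian and a placeholder non-analytic test function is:

      import numpy as np

      def newton_2d(f, z0, tol=1e-10, max_iter=50, h=1e-7):
          """Generalized Newton-Raphson for a (possibly non-analytic) complex-valued map.

          Treats f: C -> C as a map from R^2 to R^2 and uses a finite-difference
          Jacobian, so analyticity of f is not required.
          """
          z = complex(z0)
          for _ in range(max_iter):
              fz = f(z)
              if abs(fz) < tol:
                  break
              fx = (f(z + h) - fz) / h            # d f / d x
              fy = (f(z + 1j * h) - fz) / h       # d f / d y
              J = np.array([[fx.real, fy.real], [fx.imag, fy.imag]])
              step = np.linalg.solve(J, np.array([fz.real, fz.imag]))
              z -= complex(step[0], step[1])
          return z

      # Placeholder non-analytic test function (not the paper's collapse-point function);
      # its roots lie at z = 1 + 1j and z = -1 - 1j.
      f = lambda z: (z.real ** 2 + z.imag ** 2 - 2.0) + 1j * (z.imag - z.real)
      print(newton_2d(f, 0.8 + 0.7j))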

  2. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE PAGES

    Rycroft, Chris H.; Bazant, Martin Z.

    2016-01-06

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  3. Near Real Time Structural Health Monitoring with Multiple Sensors in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Todd, M.; Kuester, F.; Goldberg, D.; Lo, E.; Maher, R.

    2017-12-01

    A repeated near real time 3-D digital surrogate representation of critical engineered structures can be used to provide actionable data on subtle time-varying displacements in support of disaster resiliency. We describe a damage monitoring system of optimally-integrated complementary sensors, including Global Navigation Satellite Systems (GNSS), Micro-Electro-Mechanical Systems (MEMS) accelerometers coupled with the GNSS (seismogeodesy), light multi-rotor Unmanned Aerial Vehicles (UAVs) equipped with high-resolution digital cameras and GNSS/IMU, and ground-based Light Detection and Ranging (LIDAR). The seismogeodetic system provides point measurements of static and dynamic displacements and seismic velocities of the structure. The GNSS ties the UAV and LIDAR imagery to an absolute reference frame with respect to survey stations in the vicinity of the structure to isolate the building response to ground motions. The GNSS/IMU can also estimate the trajectory of the UAV with respect to the absolute reference frame. With these constraints, multiple UAVs and LIDAR images can provide 4-D displacements of thousands of points on the structure. The UAV systematically circumnavigates the target structure, collecting high-resolution image data, while the ground LIDAR scans the structure from different perspectives to create a detailed baseline 3-D reference model. UAV- and LIDAR-based imaging can subsequently be repeated after extreme events, or after long time intervals, to assess before and after conditions. The unique challenge is that disaster environments are often highly dynamic, resulting in rapidly evolving, spatio-temporal data assets with the need for near real time access to the available data and the tools to translate these data into decisions. The seismogeodetic analysis has already been demonstrated in the NASA AIST Managed Cloud Environment (AMCE) designed to manage large NASA Earth Observation data projects on Amazon Web Services (AWS). The Cloud provides distinct advantages in terms of extensive storage and computing resources required for processing UAV and LIDAR imagery. Furthermore, it avoids single points of failure and allows for remote operations during emergencies, when near real time access to structures may be limited.

  4. A simple and fast representation space for classifying complex time series

    NASA Astrophysics Data System (ADS)

    Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.

    2017-03-01

    In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series generated by stationary and non-stationary processes with long-range dependence. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential for reaching higher discriminative power between physiological time series in health and disease.
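
    Both coordinates of this representation space are simple to compute. The sketch below is a minimal implementation assuming the standard definitions (number of strict local extrema, and mean squared successive difference normalized by twice the variance); the exact normalizations used by the authors may differ.

```python
import numpy as np

def abbe_value(x):
    """Abbe value: mean squared successive difference normalized by twice the
    variance (the paper's normalization may differ)."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.diff(x) ** 2) / (2.0 * np.sum((x - x.mean()) ** 2))

def turning_points(x):
    """Number of interior points that are strict local maxima or minima."""
    x = np.asarray(x, dtype=float)
    left, mid, right = x[:-2], x[1:-1], x[2:]
    return int(np.sum(((mid > left) & (mid > right)) | ((mid < left) & (mid < right))))

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(1000)
random_walk = np.cumsum(white_noise)           # non-stationary comparison series
for name, series in [("white noise", white_noise), ("random walk", random_walk)]:
    print(name, turning_points(series), round(abbe_value(series), 3))
```

    White noise and a random walk, for example, land in clearly separated regions of the (turning points, Abbe value) plane, which is the kind of separation the abstract refers to.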

  5. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point-within-transect and park-level effect. Our results suggest that this model can provide insight into the detection process during avian surveys and reduce bias in estimates of relative abundance but is best applied to surveys of species with greater availability (e.g., breeding songbirds).
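
    The combined model factors overall detection probability into availability and perceptibility. The snippet below is a simplified, hedged numerical illustration of that factorization, assuming a half-normal distance function for perceptibility on a point count with truncation radius W and a constant per-interval availability probability for the time-removal part; it is not the hierarchical Bayesian model of the paper, and the parameter values are hypothetical.

```python
import numpy as np

def perceptibility_half_normal(sigma, w):
    """Average probability of detecting an available bird within radius w of a
    point, under a half-normal distance function g(r) = exp(-r^2 / (2 sigma^2))."""
    return (2.0 * sigma**2 / w**2) * (1.0 - np.exp(-w**2 / (2.0 * sigma**2)))

def availability_time_removal(phi, n_intervals):
    """Probability a bird is available (e.g., sings) at least once during
    n_intervals equal count intervals, given per-interval probability phi."""
    return 1.0 - (1.0 - phi) ** n_intervals

sigma, w = 80.0, 150.0   # hypothetical detection scale and truncation radius (m)
phi, k = 0.3, 3          # hypothetical per-interval singing probability, 3 intervals
p_total = availability_time_removal(phi, k) * perceptibility_half_normal(sigma, w)
print(round(p_total, 3))  # overall per-individual detection probability
```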

  6. Laser guide star pointing camera for ESO LGS Facilities

    NASA Astrophysics Data System (ADS)

    Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.

    2014-08-01

    Every observatory using LGS-AO routinely experiences the long time needed to bring the laser guide star into the wavefront sensor field of view and acquire it. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer in the case of multiple-LGS systems. In this framework, optimizing the absolute pointing accuracy of LGS systems is relevant for boosting the time efficiency of both science and technical observations. In this paper we show the rationale, the design, and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS while the VLT performs the initial active optics cycles to adjust its own optics on a natural star target after a preset. The LPC minimizes the accuracy required of LGS pointing model calibrations while allowing sub-arcsecond LGS absolute pointing accuracy to be reached. This considerably reduces the LGS acquisition time and observation overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture Maksutov telescope mounted on the top ring of the VLT UT4, running Linux and acting as a server for the 4LGSF client. The smart camera can recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a byproduct, once calibrated, the LPC can calculate upon request, for each LGS, its return flux, its FWHM, and the uplink beam scattering levels.

  7. Project VALOR: Trajectories of Change in PTSD in Combat-Exposed Veterans

    DTIC Science & Technology

    2014-10-01

    Boston VA Research Institute Inc., 150 South Huntington Ave, Boston, MA 02130. ... comprehensive data on PTSD symptoms and related exposures and outcomes at multiple time points in a cohort of VA users with and without PTSD provide ... The proportion of women in our sample will allow us to examine variation in the associations by gender. Subject terms: risk factors for PTSD, PTSD symptom ...

  8. Distributed Learning, Extremum Seeking, and Model-Free Optimization for the Resilient Coordination of Multi-Agent Adversarial Groups

    DTIC Science & Technology

    2016-09-07

    ... been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. ES has recently been implemented on the Mars ... high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. ... studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple ...

  9. The utility of multiple strategies for understanding complex behaviors.

    PubMed Central

    Adler, N E; Kegeles, S M; Irwin, C E

    1990-01-01

    Nickerson's critique of our brief report on changes in knowledge, attitudes and use of condoms among adolescents over a year's time mistakenly interprets the paper as examining an attitude/behavior discrepancy. A number of her criticisms follow from this mistaken interpretation. We agree with some of her general points but identify several errors in her analysis and note areas of disagreement about strategies for studying complex behaviors. PMID:2400026

  10. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  11. Multiple role occupancy in midlife: balancing work and family life in Britain.

    PubMed

    Evandrou, Maria; Glaser, Karen; Henz, Ursula

    2002-12-01

    This article investigates the extent of multiple-role occupancy among midlife individuals in Britain in cross-section and over the life course, focusing on work and family commitments. The association between demographic and social factors and multiple-role obligations is also investigated. The research is based on secondary analysis of the British Family and Working Lives Survey, which contains retrospective paid work, caregiving, and child coresidence histories. The proportion of individuals in midlife (women aged 45-59 and men aged 45-64) who have multiple roles, in terms of paid work and consistent family care, at any one point in time is low (2%). This is primarily due to the relatively small proportion (7%) of people in this age group who are caring for a dependent. Being older, unmarried, and in poor health significantly reduces the number of roles held among men and women. Although the frequency of multiple role occupancy, and intensive multiple role occupancy, is low on a cross-sectional basis, a much higher proportion of individuals have ever occupied multiple roles over their life course (14%). The findings will inform debate on how policy can best aid those endeavouring to balance paid work, family life, and caring responsibilities.

  12. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    NASA Astrophysics Data System (ADS)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables through a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose to accelerate MPS simulation using vector quantization (VQ), a method called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
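
    The first stage of MPS, before any vector quantization, is scanning the training image with a template and recording pattern occurrences in a database. A minimal, hedged sketch of that stage is shown below on a toy two-facies image; the tree-structured VQ compression and the sequential simulation itself are not shown.

```python
import numpy as np
from collections import Counter

def extract_patterns(training_image, template=(3, 3)):
    """Scan a categorical training image with a rectangular template and record
    how often each pattern occurs (the MPS pattern database)."""
    ti = np.asarray(training_image)
    th, tw = template
    counts = Counter()
    for i in range(ti.shape[0] - th + 1):
        for j in range(ti.shape[1] - tw + 1):
            counts[tuple(ti[i:i + th, j:j + tw].ravel())] += 1
    return counts

# Tiny two-facies training image (0 = background, 1 = channel), for illustration.
ti = np.array([[0, 0, 1, 1, 0],
               [0, 1, 1, 0, 0],
               [1, 1, 0, 0, 0],
               [1, 0, 0, 0, 1]])
db = extract_patterns(ti, template=(2, 2))
print(len(db), "distinct 2x2 patterns; most common:", db.most_common(1))
```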

  13. Analyzing the effect of selected control policy measures and sociodemographic factors on alcoholic beverage consumption in Europe within the AMPHORA project: statistical methods.

    PubMed

    Baccini, Michela; Carreras, Giulia

    2014-10-01

    This paper describes the methods used to investigate variations in total alcoholic beverage consumption as related to selected control intervention policies and other socioeconomic factors (unplanned factors) within 12 European countries involved in the AMPHORA project. The analysis presented several critical points: the presence of missing values, strong correlation among the unplanned factors, and long-term waves or trends in both the time series of alcohol consumption and the time series of the main explanatory variables. These difficulties were addressed by implementing a multiple imputation procedure to fill in missing values and then specifying, for each country, a multiple regression model that accounted for the time trend, the policy measures, and a limited set of unplanned factors selected in advance on the basis of sociological and statistical considerations. This approach allowed estimation of the "net" effect of the selected control policies on alcohol consumption, but not of the association between each unplanned factor and the outcome.

  14. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The trend toward a greater variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. The flexibility of this process comes at the cost of a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies. Furthermore, several variations of the tool-path strategy are analyzed. A time saving of between 40% and 60% was observed, depending on the tool path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  15. Hybrid Optimization Parallel Search PACKage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
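
    For orientation, the sketch below is a minimal serial version of a generating set (pattern) search of the kind GSS belongs to: poll a set of directions, accept improving points, and contract the step when no direction improves. It is only an illustration of the idea, not the HOPSPACK implementation, and it omits the parallel evaluation, caching, and constraint handling described above.

```python
import numpy as np

def generating_set_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Serial sketch of a generating set (pattern) search: poll the +/- coordinate
    directions, accept the first improving point, and contract the step when no
    direction improves. Not the HOPSPACK implementation."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    directions = np.vstack([np.eye(x.size), -np.eye(x.size)])
    for _ in range(max_iter):
        for d in directions:
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:
                x, fx = trial, f_trial
                break
        else:                      # no improving direction: contract the step
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Example: minimize a simple smooth function without derivatives.
quadratic = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
print(generating_set_search(quadratic, x0=[0.0, 0.0]))
```

    In HOPSPACK, the trial points produced by such a poll would be handed to the framework for parallel evaluation rather than evaluated in a loop.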

  16. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    PubMed

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  17. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  18. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

    PubMed

    Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

    2017-07-01

    The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for cardiac surgical patients during the perioperative period. A lack of a shared mental model could be one of the factors contributing to preventable errors in cardiac operating rooms.
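
    The variability classification in this study follows directly from the interquartile range thresholds quoted in the abstract. The snippet below is a small sketch of that rule applied to a hypothetical set of panelist ratings on the 100-point visual analog scale.

```python
import numpy as np

def classify_variability(ratings):
    """Classify panelist response variability by interquartile range (IQR) on a
    100-point visual analog scale, using the thresholds from the abstract:
    low (IQR <= 20), moderate (20 < IQR <= 40), high (IQR > 40)."""
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    if iqr <= 20:
        return iqr, "low"
    return iqr, "moderate" if iqr <= 40 else "high"

# Hypothetical ratings for the pause point "before surgical incision".
print(classify_variability([90, 95, 88, 92, 97, 85, 91]))
```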

  19. Development of a needle driver with multiple degrees of freedom for neonatal laparoscopic surgery.

    PubMed

    Ishimaru, Tetsuya; Takazawa, Shinya; Uchida, Hiroo; Kawashima, Hiroshi; Fujii, Masahiro; Harada, Kanako; Sugita, Naohiko; Mitsuishi, Mamoru; Iwanaka, Tadashi

    2013-07-01

    The aims of this study were to develop a thin needle driver with multiple degrees of freedom and to evaluate its efficacy in multidirectional suturing compared with a conventional needle driver. The tip (15 mm) of the novel user-friendly needle driver (3.5 mm in diameter) has three degrees of freedom for grasping, rotation, and deflection. Six pediatric surgeons performed two kinds of suturing tasks in a dry box: three stitches in continuous suturing that were perpendicular or parallel to the insertion direction of the instrument, first using the novel instrument, then using a conventional instrument, and finally using the novel instrument again. The accuracy of insertion and exit compared with the target points and the procedure time were measured. In the conventional and novel procedures the mean gaps from the insertion point to the target in perpendicular suturing were 0.8 mm and 0.7 mm, respectively; in parallel suturing they were 0.8 mm and 0.6 mm, respectively. The mean gaps from the exit point to the target in perpendicular suturing were 0.6 mm and 0.6 mm for conventional and novel procedures, respectively; in parallel suturing they were 0.6 mm and 0.8 mm, respectively. The procedure time for perpendicular suturing was 33 seconds and 64 seconds for conventional and novel procedures, respectively (P=.02); for parallel suturing it was 114 seconds and 91 seconds, respectively. Our novel needle driver maintained accuracy of suturing; parallel suturing with the novel driver may be easier than with the conventional one.

  20. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

    Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination from blooming effect from the contrast-enhanced lumen. We report the application of an image deconvolution technique using a measured system point-spread function, on CT data obtained from a photon-counting CT system to reduce blooming and to improve the CT number accuracy of arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured for multiple locations in the carotid walls and for multiple time points, pre and post contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid wall due to the contamination from blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all the time points, enabling detection of the increased VV in the artery wall.
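
    The abstract does not state which deconvolution algorithm was used, only that a measured system point-spread function was applied to reduce blooming. As one common, hedged example of PSF-based deblurring, the sketch below applies a frequency-domain Wiener filter to a toy image of a bright lumen next to a faint wall; the geometry, intensities, and regularization constant are all hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deconvolve(image, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution with a measured PSF. The paper does
    not name its deconvolution algorithm; this is one common choice, shown only
    to illustrate reducing blooming around a bright lumen."""
    padded = np.zeros_like(image, dtype=float)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H, G = np.fft.fft2(padded), np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Toy example: a bright "lumen" next to a faint "wall", blurred by a Gaussian PSF.
true = np.zeros((64, 64))
true[28:36, 28:36] = 1000.0      # contrast-enhanced lumen
true[28:36, 36:38] = 60.0        # adjacent arterial wall
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
blurred = fftconvolve(true, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)
print(round(float(blurred[30, 36]), 1), round(float(restored[30, 36]), 1))  # wall value before/after
```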

  1. Different Antibiotic Resistance and Sporulation Properties within Multiclonal Clostridium difficile PCR Ribotypes 078, 126, and 033 in a Single Calf Farm

    PubMed Central

    Zidaric, Valerija; Pardon, Bart; dos Vultos, Tiago; Deprez, Piet; Brouwer, Michael Sebastiaan Maria; Roberts, Adam P.; Henriques, Adriano O.

    2012-01-01

    Clostridium difficile strains were sampled periodically from 50 animals at a single veal calf farm over a period of 6 months. At arrival, 10% of animals were C. difficile positive, and the peak incidence was determined to occur at the age of 18 days (16%). The prevalence then decreased, and at slaughter, C. difficile could not be isolated. Six different PCR ribotypes were detected, and strains within a single PCR ribotype could be differentiated further by pulsed-field gel electrophoresis (PFGE). The PCR ribotype diversity was high up to the animal age of 18 days, but at later sampling points, PCR ribotype 078 and the highly related PCR ribotype 126 predominated. Resistance to tetracycline, doxycycline, and erythromycin was detected, while all strains were susceptible to amoxicillin and metronidazole. Multiple variations of the resistance gene tet(M) were present at the same sampling point, and these changed over time. We have shown that PCR ribotypes often associated with cattle (ribotypes 078, 126, and 033) were not clonal but differed in PFGE type, sporulation properties, antibiotic sensitivities, and tetracycline resistance determinants, suggesting that multiple strains of the same PCR ribotype infected the calves and that calves were likely to be infected prior to arrival at the farm. Importantly, strains isolated at later time points were more likely to be resistant to tetracycline and erythromycin and showed higher early sporulation efficiencies in vitro, suggesting that these two properties converge to promote the persistence of C. difficile in the environment or in hosts. PMID:23001653

  2. ImNet: a fiber optic network with multistar topology for high-speed data transmission

    NASA Astrophysics Data System (ADS)

    Vossebuerger, F.; Keizers, Andreas; Soederman, N.; Meyer-Ebrecht, Dietrich

    1993-10-01

    ImNet is a fiber-optic local area network that has been developed for high-speed image communication in Picture Archiving and Communication Systems (PACS). A comprehensive analysis of image communication requirements in hospitals led to the conclusion that there is a need for networks optimized for the transmission of large data files. ImNet is optimized for this application, in contrast to current LANs. ImNet consists of two elements: a link module and a switch module. The point-to-point link module can span up to 4 km using fiber-optic cable. For short distances of up to 100 m, a cheaper module using shielded twisted-pair cable is available. The link module works bi-directionally and handles all protocols up to OSI Level 3. The data rate per link is up to 140 MBit/s (clock rate 175 MHz). The switch module consists of the control unit and the cross-point switch array. The array has up to fourteen interfaces for link modules. Up to fourteen data transfers, each with a maximal transfer rate of 400 MBit/s, can be handled at the same time, so the maximal throughput of a switch module is 5.6 GBit/s. From these modules a multi-star network can be built, i.e., an arbitrary tree structure of stars. This topology allows multiple transmissions at the same time as long as they do not require identical links. Therefore the overall throughput of ImNet can be a multiple of the data rate per link.

  3. Robust and efficient overset grid assembly for partitioned unstructured meshes

    NASA Astrophysics Data System (ADS)

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
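
    At the heart of donor search is a point-containment test for a single cell. A minimal, hedged sketch for one tetrahedral cell is shown below, using barycentric coordinates; the production OGA code wraps such a test in spatial search structures and inter-processor communication, which are not shown.

```python
import numpy as np

def contains_point(tet_vertices, p, tol=1e-12):
    """Donor test for a single tetrahedral cell: p lies inside the cell if all
    four barycentric coordinates are non-negative."""
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in tet_vertices)
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    b = np.linalg.solve(T, np.asarray(p, dtype=float) - v0)
    bary = np.append(b, 1.0 - b.sum())
    return bool(np.all(bary >= -tol))

tet = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(contains_point(tet, [0.2, 0.2, 0.2]), contains_point(tet, [0.9, 0.9, 0.9]))
```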

  4. Causes and Solutions of the Trampoline Effect.

    PubMed

    Miwa, Masamiki; Ota, Noboru; Ando, Chiyono; Miyazaki, Yukio

    2015-01-01

    A trampoline effect may occur mainly when a buttonhole tract and the vessel flap fail to form a straight line. Certain findings, however, suggest another cause is when the vessel flap is too small. The frequency of the trampoline effect, for example, is lower when a buttonhole tract is created by multiple punctures of the arteriovenous fistula (AVF) vessel than when it is done by one-time puncture of the vessel. Lower frequency of the trampoline effect with multiple punctures of the AVF vessel may be due to enlargement of the initial puncture hole on the vessel every time the vessel is punctured with a sharp needle. Even if aiming at exactly the same point on the AVF vessel every time, the actual puncture point shifts slightly at every puncture, which potentially results in enlargement of the initial hole on the AVF vessel. Moreover, in some patients, continued use of a buttonhole tract for an extended period of time increases the frequency of the trampoline effect. In such cases, reduction of the incidence of the trampoline effect can be achieved by one buttonhole cannulation using a new dull needle with sharp side edges that is used to enlarge the vessel flap. Such single buttonhole cannulation may suggest that the increased frequency of the trampoline effect also potentially occurs in association with gradually diminishing flap size. As a final observation, dull needle insertion into a vessel flap in the reverse direction has been more smoothly achieved than insertion into a vessel flap in the conventional direction. A vessel flap in the reverse direction can be adopted clinically. © 2015 S. Karger AG, Basel.

  5. A real-time method for autonomous passive acoustic detection-classification of humpback whales.

    PubMed

    Abbot, Ted A; Premus, Vincent E; Abbot, Philip A

    2010-05-01

    This paper describes a method for real-time, autonomous, joint detection-classification of humpback whale vocalizations. The approach adapts the spectrogram correlation method used by Mellinger and Clark [J. Acoust. Soc. Am. 107, 3518-3529 (2000)] for bowhead whale endnote detection to the humpback whale problem. The objective is the implementation of a system to determine the presence or absence of humpback whales with passive acoustic methods and to perform this classification with low false alarm rate in real time. Multiple correlation kernels are used due to the diversity of humpback song. The approach also takes advantage of the fact that humpbacks tend to vocalize repeatedly for extended periods of time, and identification is declared only when multiple song units are detected within a fixed time interval. Humpback whale vocalizations from Alaska, Hawaii, and Stellwagen Bank were used to train the algorithm. It was then tested on independent data obtained off Kaena Point, Hawaii in February and March of 2009. Results show that the algorithm successfully classified humpback whales autonomously in real time, with a measured probability of correct classification in excess of 74% and a measured probability of false alarm below 1%.
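
    As a hedged sketch of the two ingredients described above (spectrogram correlation against template kernels, plus the rule that identification requires several song units within a fixed interval), the functions below outline the logic. The kernel construction, thresholds, window length, and real-time buffering are assumptions here and would be calibrated on training vocalizations such as the Alaska, Hawaii, and Stellwagen Bank recordings mentioned in the abstract.

```python
import numpy as np
from scipy.signal import spectrogram, correlate2d

def detect_song_units(audio, fs, kernel, threshold):
    """Correlate a dB spectrogram against a template kernel (built with the same
    nperseg, so the frequency rows match) and return the times of correlation
    peaks above threshold, i.e. candidate humpback song units."""
    f, t, s = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    s_db = 10.0 * np.log10(s + 1e-12)
    score = correlate2d(s_db, kernel, mode="valid").max(axis=0)
    hits = np.where(score > threshold)[0]
    return t[hits]

def classify_presence(unit_times, min_units=3, window_s=60.0):
    """Declare 'humpback present' only when at least min_units detected song
    units fall within a fixed time window, mirroring the repeated-vocalization
    rule described in the abstract (min_units and window_s are assumed values)."""
    unit_times = np.sort(np.asarray(unit_times))
    for start in unit_times:
        if np.sum((unit_times >= start) & (unit_times < start + window_s)) >= min_units:
            return True
    return False
```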

  6. Asteroid detection using a single multi-wavelength CCD scan

    NASA Astrophysics Data System (ADS)

    Melton, Jonathan

    2016-09-01

    Asteroid detection is a topic of great interest due to the possibility of diverting possibly dangerous asteroids or mining potentially lucrative ones. Currently, asteroid detection is generally performed by taking multiple images of the same patch of sky separated by 10-15 minutes, then subtracting the images to find movement. However, this is time consuming because of the need to revisit the same area multiple times per night. This paper describes an algorithm that can detect asteroids using a single CCD camera scan, thus cutting down on the time and cost of an asteroid survey. The algorithm is based on the fact that some telescopes scan the sky at multiple wavelengths with a small time separation between the wavelength components. As a result, an object moving with sufficient speed will appear in different places in different wavelength components of the same image. Using image processing techniques we detect the centroids of points of light in the first component and compare these positions to the centroids in the other components using a nearest neighbor algorithm. The algorithm was used on a test set of 49 images obtained from the Sloan telescope in New Mexico and found 100% of known asteroids with only 3 false positives. This algorithm has the advantage of decreasing the amount of time required to perform an asteroid scan, thus allowing more sky to be scanned in the same amount of time or freeing a telescope for other pursuits.
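
    The core of the single-scan approach is matching source centroids between wavelength components exposed a short time apart and flagging sources whose positions differ. The sketch below assumes centroid extraction has already been done and uses a nearest-neighbour search with hypothetical displacement bounds; it illustrates the matching step, not the authors' full pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_moving_sources(centroids_band1, centroids_band2,
                        min_shift_px=1.0, max_shift_px=20.0):
    """Match detections between two wavelength components exposed a short time
    apart; sources displaced by between min_shift_px and max_shift_px are
    moving-object (asteroid) candidates, while static stars match at ~0 px."""
    dist, idx = cKDTree(centroids_band2).query(centroids_band1, k=1)
    moving = (dist > min_shift_px) & (dist < max_shift_px)
    return [(tuple(c), tuple(centroids_band2[i]))
            for c, i, m in zip(centroids_band1, idx, moving) if m]

# Toy field: three static stars plus one object that moved between the exposures.
band1 = np.array([[10.0, 10.0], [50.0, 80.0], [120.0, 40.0], [200.0, 200.0]])
band2 = band1.copy()
band2[3] += [3.0, 2.5]          # the "asteroid" shifts by ~3.9 px between bands
print(find_moving_sources(band1, band2))
```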

  7. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized into 3D grids at multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained from various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
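
    The voting step reduces, at a single resolution, to a majority vote of labeled reference points per voxel, after which new sensor points inherit the label of the voxel they fall into. The sketch below shows that single-resolution case with hypothetical labels and a fixed voxel size; the octree's multiple resolutions and the line/plane-based registration are not shown.

```python
import numpy as np
from collections import Counter, defaultdict

def build_label_grid(ref_points, ref_labels, voxel_size):
    """Assign a semantic label to each occupied voxel by majority vote of the
    labeled reference points falling inside it (single-resolution sketch; the
    paper uses an octree with multiple resolutions)."""
    votes = defaultdict(Counter)
    for p, lab in zip(np.asarray(ref_points), ref_labels):
        votes[tuple(np.floor(p / voxel_size).astype(int))][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(new_points, label_grid, voxel_size, unknown="unlabeled"):
    """Transfer labels to another sensor's (already registered) point cloud via
    the shared voxel grid."""
    keys = np.floor(np.asarray(new_points) / voxel_size).astype(int)
    return [label_grid.get(tuple(k), unknown) for k in keys]

ref = np.array([[0.1, 0.2, 0.0], [0.3, 0.1, 0.1], [2.0, 2.1, 0.0]])
grid = build_label_grid(ref, ["ground", "ground", "building"], voxel_size=0.5)
print(annotate([[0.2, 0.2, 0.05], [2.1, 2.0, 0.1]], grid, voxel_size=0.5))
```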

  8. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  9. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  10. Terrain modeling for real-time simulation

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; McArthur, Donald E.

    1993-10-01

    There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar polygon model of the terrain. The real-time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons that may be displayed for each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then to select LODs for display as a function of range. This approach could lead to both anomalies in the displayed scene and inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception, and those nodes may be deleted, while new points must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which are useful for optimizing system performance with a limited display capability.
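
    A common way to realize this kind of eye-point-dependent pruning is to refine a node only while the viewer is closer than some multiple of the node's extent. The sketch below shows that heuristic on a two-level toy tree; it is a generic LOD selection rule under assumed thresholds, not the analytical perceptibility model developed in the paper.

```python
def select_lod_nodes(node, eye, out, k=2.5):
    """Collect terrain-tree nodes for display: refine a node into its children
    while the eye point is closer than k times the node's extent; otherwise
    render the node itself (a generic screen-error style heuristic)."""
    cx, cy = node["center"]
    dist = ((eye[0] - cx) ** 2 + (eye[1] - cy) ** 2 + eye[2] ** 2) ** 0.5
    if node["children"] and dist < k * node["half_size"]:
        for child in node["children"]:
            select_lod_nodes(child, eye, out, k)
    else:
        out.append(node)

# Two-level toy tree: a 1000 m terrain tile split into four 500 m children.
children = [{"center": (cx, cy), "half_size": 250.0, "children": None}
            for cx in (250.0, 750.0) for cy in (250.0, 750.0)]
root = {"center": (500.0, 500.0), "half_size": 500.0, "children": children}
shown = []
select_lod_nodes(root, eye=(100.0, 100.0, 300.0), out=shown)
print(len(shown), "nodes selected for this eye point")
```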

  11. Annual longitudinal survey at up to five time points reveals reciprocal effects of bedtime delay and depression/anxiety in adolescents.

    PubMed

    Tochigi, Mamoru; Usami, Satoshi; Matamura, Misato; Kitagawa, Yuko; Fukushima, Masako; Yonehara, Hiromi; Togo, Fumiharu; Nishida, Atsushi; Sasaki, Tsukasa

    2016-01-01

    The aim was to investigate the longitudinal relationship between sleep habits and mental health in adolescents. Observation data from up to five annual time points were employed from a prospective cohort study of sleep habits and mental health status conducted from 2009 to 2013 in a unified junior and senior high school (grades 7-12) in Tokyo, Japan. A total of 1078 students answered a self-report questionnaire, including items on usual bed and wake-up times on school days, and the Japanese version of the 12-item General Health Questionnaire (GHQ-12). Latent growth model (LGM) analysis, which requires data from three or more time points, showed that longitudinal changes in bedtime and GHQ-12 score (or score for depression/anxiety) were significantly and moderately correlated (correlation coefficient = 0.510, p < 0.05). Another result of interest was that, in an autoregressive cross-lagged (ARCL) model, bedtime and the depression/anxiety score had reciprocal effects across successive years: i.e., bedtime significantly affects the following year's depression/anxiety, and vice versa. In addition, the analysis provided estimates of mutually predicted changes: a one-hour bedtime delay may worsen the GHQ-12 score by 0.2 points, and a one-point worsening of the score may delay bedtime by 2.2 minutes. By using data from up to five time points, the present study confirms the correlational and reciprocal longitudinal relationship between bedtime delay and mental health status in Japanese adolescents. The results indicate that preventing late bedtimes may have a significant effect on improving mental health in adolescents. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Homogenising time series: Beliefs, dogmas and facts

    NASA Astrophysics Data System (ADS)

    Domonkos, P.

    2010-09-01

    For obtaining reliable information about climate change and climate variability, the use of high-quality data series is essential, and one basic tool for quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not known entirely. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics closely approach those of real networks of observed time series. This dataset offers a much better opportunity than ever to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: (a) Can statistically detected change-points be accepted only with confirmation from metadata? (b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? (c) Is it advisable to limit the spatial comparison of a candidate series to at most five other series in the neighbourhood? Empirical results - from the COST benchmark and from other experiments - show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities appear to be part of the climatic variability; thus a pure application of the classic assumption that change-points of observed time series can be found and corrected one by one is impossible. However, after homogenisation the linear trends, seasonal changes, and long-term fluctuations of time series are usually much closer to reality than in raw time series. The developers and users of homogenisation methods have to bear in mind that the eventual purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise climate change and climate variability well.

  13. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  14. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
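
    Stripped of the GPU parallelization, the focus/off-focus decision described above is a per-pixel variance test over the samples contributed by the elemental images. The sketch below shows that test on hypothetical sample values and an assumed threshold; in practice both the sampling and the classification would run in parallel on the GPU, as the abstract describes.

```python
import numpy as np

def classify_focus(samples, var_threshold):
    """Classify reconstructed depth-image pixels as focus or off-focus from the
    variance of their corresponding elemental-image samples: points on a real
    surface agree across views (low variance), free-space points do not."""
    samples = np.asarray(samples, dtype=float)   # shape: (n_pixels, n_views)
    variances = samples.var(axis=1)
    return variances <= var_threshold, variances

# Toy example: three reconstructed pixels sampled in five elemental images.
samples = [[100, 101, 99, 100, 102],   # consistent across views -> focus
           [100, 30, 180, 75, 210],    # inconsistent -> off-focus
           [50, 52, 49, 51, 50]]
is_focus, variances = classify_focus(samples, var_threshold=25.0)
print(is_focus, np.round(variances, 1))
```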

  15. Sound source localization inspired by the ears of the Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Kuntzman, Michael L.; Hall, Neal A.

    2014-07-01

    The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is, in fact, remarkable as the fly's hearing mechanism spans only 1.5 mm which is 50× smaller than the wavelength of sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has empowered the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of the special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.

  16. Lexical access changes in patients with multiple sclerosis: a two-year follow-up study.

    PubMed

    Sepulcre, Jorge; Peraita, Herminia; Goni, Joaquin; Arrondo, Gonzalo; Martincorena, Inigo; Duque, Beatriz; Velez de Mendizabal, Nieves; Masdeu, Joseph C; Villoslada, Pablo

    2011-02-01

    The aim of the study was to analyze lexical access strategies in patients with multiple sclerosis (MS) and their changes over time. We studied lexical access strategies during semantic and phonemic verbal fluency tests and also confrontation naming in a 2-year prospective cohort of 45 MS patients and 20 healthy controls. At baseline, switching lexical access strategy (both in semantic and in phonemic verbal fluency tests) and confrontation naming were significantly impaired in MS patients compared with controls. After 2 years follow-up, switching score decreased, and cluster size increased over time in semantic verbal fluency tasks, suggesting a failure in the retrieval of lexical information rather than an impairment of the lexical pool. In conclusion, these findings underline the significant presence of lexical access problems in patients with MS and could point out their key role in the alterations of high-level communications abilities in MS.

  17. Tool for simplifying the complex interactions within resilient communities

    NASA Astrophysics Data System (ADS)

    Stwertka, C.; Albert, M. R.; White, K. D.

    2016-12-01

    In recent decades, scientists have observed and documented climate change impacts that will affect multiple sectors, will be shaped by decisions from multiple sectors, and will change over time. This complex human-engineered system has a large number of moving, interacting parts, which are interdependent and evolve over time towards their purpose. Many of the existing resilience and vulnerability frameworks focus on interactions between the domains but do not include the structure of those interactions. We present an engineering systems approach to investigating the structural elements that influence a community's ability to be resilient. In this presentation we analyze four common methods for building community resilience within our common framework. For several existing case studies we examine the stress points in the system and identify the impacts on the case-study outcomes. In ongoing research we will apply our system tool to a new case in the field.

  18. Engineering metabolic pathways in plants by multigene transformation.

    PubMed

    Zorrilla-López, Uxue; Masip, Gemma; Arjó, Gemma; Bai, Chao; Banakar, Raviraj; Bassie, Ludovic; Berman, Judit; Farré, Gemma; Miralpeix, Bruna; Pérez-Massot, Eduard; Sabalza, Maite; Sanahuja, Georgina; Vamvaka, Evangelia; Twyman, Richard M; Christou, Paul; Zhu, Changfu; Capell, Teresa

    2013-01-01

    Metabolic engineering in plants can be used to increase the abundance of specific valuable metabolites, but single-point interventions generally do not improve the yields of target metabolites unless that product is immediately downstream of the intervention point and there is a plentiful supply of precursors. In many cases, an intervention is necessary at an early bottleneck, sometimes the first committed step in the pathway, but is often only successful in shifting the bottleneck downstream, sometimes also causing the accumulation of an undesirable metabolic intermediate. Occasionally it has been possible to induce multiple genes in a pathway by controlling the expression of a key regulator, such as a transcription factor, but this strategy is only possible if such master regulators exist and can be identified. A more robust approach is the simultaneous expression of multiple genes in the pathway, preferably representing every critical enzymatic step, therefore removing all bottlenecks and ensuring completely unrestricted metabolic flux. This approach requires the transfer of multiple enzyme-encoding genes to the recipient plant, which is achieved most efficiently if all genes are transferred at the same time. Here we review the state of the art in multigene transformation as applied to metabolic engineering in plants, highlighting some of the most significant recent advances in the field.

  19. Optimization of Treatment Geometry to Reduce Normal Brain Dose in Radiosurgery of Multiple Brain Metastases with Single-Isocenter Volumetric Modulated Arc Therapy.

    PubMed

    Wu, Qixue; Snyder, Karen Chin; Liu, Chang; Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Wen, Ning

    2016-09-30

    Treatment of patients with multiple brain metastases using single-isocenter volumetric modulated arc therapy (VMAT) has been shown to decrease treatment time, with the tradeoff of a larger low-dose spread to normal brain tissue. We have developed an efficient Projection Summing Optimization Algorithm to optimize the treatment geometry in order to reduce dose to normal brain tissue for radiosurgery of multiple metastases with single-isocenter VMAT. The algorithm (a) measures the coordinates of the outer boundary points of each lesion to be treated using the Eclipse Scripting Application Programming Interface, (b) determines the rotations of couch, collimator, and gantry using three matrices about the cardinal axes, (c) projects the outer boundary points of the lesion onto the Beam's Eye View projection plane, (d) optimizes couch and collimator angles by selecting the least total unblocked area for each specific treatment arc, and (e) generates a treatment plan with the optimized angles. The results showed a significant reduction in the mean dose and low-dose volume to normal brain while maintaining similar treatment plan quality for the thirteen patients previously treated. The algorithm is flexible with regard to beam arrangements and can be integrated directly into the treatment planning system for clinical application.
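
    Steps (b)-(d) amount to rotating each lesion's boundary points, projecting them onto the beam's-eye-view plane, and scoring candidate angles by how much field area is left unblocked. The sketch below is a simplified, hedged rendition: the rotation conventions are generic rather than the IEC machine axes, the "unblocked area" is approximated by the bounding box of the projected points, and the lesion geometries are hypothetical.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def beams_eye_view(points, gantry, couch, collimator):
    """Rotate lesion boundary points (isocenter at the origin) by couch, gantry,
    and collimator angles, then project onto the beam's-eye-view plane by
    dropping the beam-axis coordinate. Generic rotation conventions, not the
    IEC machine axes used clinically."""
    R = rot_z(collimator) @ rot_y(gantry) @ rot_z(couch)
    return (points @ R.T)[:, :2]

def unblocked_area_proxy(bev_points):
    """Proxy for the unblocked field: area of the axis-aligned bounding box of
    all projected lesion points (a stand-in for the jaw/MLC opening)."""
    span = bev_points.max(axis=0) - bev_points.min(axis=0)
    return float(span[0] * span[1])

# Two hypothetical lesions, each described by boundary points in mm.
rng = np.random.default_rng(0)
lesions = np.vstack([np.array([20.0, 30.0, 0.0]) + 8.0 * rng.standard_normal((40, 3)),
                     np.array([-25.0, -10.0, 15.0]) + 6.0 * rng.standard_normal((40, 3))])
candidates = np.deg2rad(np.arange(0.0, 180.0, 5.0))
best = min(candidates,
           key=lambda c: unblocked_area_proxy(beams_eye_view(lesions, 0.0, 0.0, c)))
print("collimator angle with the smallest unblocked-area proxy:",
      round(float(np.degrees(best)), 1), "deg")
```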

  20. Multi-static networked 3D ladar for surveillance and access control

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ogirala, S. S. R.; Hu, B.; Le, Han Q.

    2007-04-01

    A theoretical design and simulation of a 3D ladar system concept for surveillance, intrusion detection, and access control is described. It is a non-conventional system architecture that consists of: i) a multi-static configuration with an arbitrarily scalable number of transmitters (Tx's) and receivers (Rx's) that form an optical wireless code-division-multiple-access (CDMA) network, and ii) a flexible system architecture with modular plug-and-play components that can be deployed for any facility with arbitrary topology. Affordability is a driving consideration, and a key feature for low cost is an asymmetric use of many inexpensive Rx's in conjunction with fewer Tx's, which are generally more expensive. The Rx's are spatially distributed close to the surveyed area for large coverage, and are capable of receiving signals from multiple Tx's with moderate laser power. The system produces sensing information that scales as N×M, where N and M are the numbers of Tx's and Rx's, as opposed to the linear scaling ~N of a non-networked system. Also, for target positioning, besides laser pointing direction and time-of-flight, the algorithm includes multiple point-of-view image fusion and triangulation for enhanced accuracy, which is not applicable to non-networked monostatic ladars. Simulation and scaled model experiments on some aspects of this concept are discussed.

  1. Multi-point laser coherent detection system and its application on vibration measurement

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, C.; Xu, Y. J.; Liu, H.; Yan, K.; Guo, M.

    2015-05-01

    Laser Doppler vibrometry (LDV) is a well-known interferometric technique to measure the motions, vibrations and mode shapes of machine components and structures. The drawback of commercial LDV is that it can only offer a pointwise measurement. In order to build up a vibrometric image, a scanning device is normally adopted to scan the laser point in two spatial axes. These scanning laser Doppler vibrometers (SLDV) assume that the measurement conditions remain invariant while multiple identical, sequential measurements are performed. This assumption makes SLDVs impractical for measuring transient events. In this paper, we introduce a new multiple-point laser coherent detection system based on spatial-encoding technology and a fiber configuration. Simultaneous vibration measurement on multiple points is realized using a single photodetector. A prototype 16-point laser coherent detection system is built and applied to measure the vibration of various objects, such as the body of a car or a motorcycle with the engine running and structures under shock tests. The results show the promise of multi-point laser coherent detection systems for nondestructive testing and precise dynamic measurement.

  2. Nuclear-coupled thermal-hydraulic stability analysis of boiling water reactors

    NASA Astrophysics Data System (ADS)

    Karve, Atul A.

    We have studied the nuclear-coupled thermal-hydraulic stability of boiling water reactors (BWRs) using a model we developed from: the space-time modal neutron kinetics equations based on spatial omega-modes, the equations for two-phase flow in parallel boiling channels, the fuel rod heat conduction equations, and a simple model for the recirculation loop. The model is represented as a dynamical system comprised of time-dependent nonlinear ordinary differential equations, and it is studied using stability analysis, modern bifurcation theory, and numerical simulations. We first determine the stability boundary (SB) in the most relevant parameter plane, the inlet-subcooling-number/external-pressure-drop plane, for a fixed control rod induced external reactivity equal to the 100% rod line value and then transform the SB to the practical power-flow map. Using this SB, we show that the normal operating point at 100% power is very stable, that stability of points on the 100% rod line decreases as the flow rate is reduced, and that points are least stable in the low-flow/high-power region. We also determine the SB when the modal kinetics is replaced by simple point reactor kinetics and show that the first harmonic mode has no significant effect on the SB. Later we carry out the relevant numerical simulations where we first show that the Hopf bifurcation that occurs as a parameter is varied across the SB is subcritical, and that, in the important low-flow/high-power region, growing oscillations can result following small finite perturbations of stable steady-states on the 100% rod line. Hence, a point on the 100% rod line in the low-flow/high-power region, although stable, may nevertheless be a point at which a BWR should not be operated. Numerical simulations are then done to calculate the decay ratios (DRs) and frequencies of oscillations for various points on the 100% rod line. It is determined that the NRC requirement of DR < 0.75-0.8 is not rigorously satisfied in the low-flow/high-power region and hence these points should be avoided during normal startup and shutdown operations. The frequency of oscillation is shown to decrease as the flow rate is reduced and the frequency of 0.5 Hz observed in the low-flow/high-power region is consistent with those observed during actual instability incidents. Additional numerical simulations show that in the low-flow/high-power region, for the same initial conditions, the use of point kinetics leads to damped oscillations, whereas the model that includes the modal kinetics equations results in growing nonlinear oscillations. Thus, we show that side-by-side out-of-phase growing power oscillations result due to the very important first harmonic mode effect and that the use of point kinetics, which fails to predict these growing oscillations, leads to dramatically nonconservative results. Finally, the effect of a simple recirculation loop model that we develop is studied by carrying out additional stability analyses and additional numerical simulations. It is shown that the loop has a stabilizing effect on certain points on the 100% rod line for time delays equal to integer multiples of the natural period of oscillation, whereas it has a destabilizing effect for half-integer multiples. However, for more practical time delays, it is determined that the overall effect generally is destabilizing.

  3. [Survival analysis with competing risks: estimating failure probability].

    PubMed

    Llorca, Javier; Delgado-Rodríguez, Miguel

    2004-01-01

    To show the impact of competing risks of death on survival analysis. We provide an example of survival time without chronic rejection after heart transplantation, where death before rejection acts as a competing risk. Using a computer simulation, we compare the Kaplan-Meier estimator and the multiple decrement model. The Kaplan-Meier method overestimated the probability of rejection. Next, we illustrate the use of the multiple decrement model to analyze secondary end points (in our example: death after rejection). Finally, we discuss Kaplan-Meier assumptions and why they fail in the presence of competing risks. Survival analysis should be adjusted for competing risks of death to avoid overestimation of the risk of rejection produced with the Kaplan-Meier method.
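
    The point can be reproduced with a minimal simulation, assuming exponential hazards for rejection and for death before rejection (the rates below are illustrative, not taken from the paper): the naive Kaplan-Meier complement, which censors deaths, overestimates the probability of rejection relative to the multiple decrement (cumulative incidence) estimator and to the theoretical value.

    ```python
    # Hedged sketch: Kaplan-Meier vs. multiple decrement (cumulative incidence) under competing risks.
    import numpy as np

    rng = np.random.default_rng(1)
    n, lam_rej, lam_death = 20000, 0.10, 0.15           # illustrative hazards per month
    t_rej = rng.exponential(1 / lam_rej, n)
    t_death = rng.exponential(1 / lam_death, n)
    time = np.minimum(t_rej, t_death)                   # observed time
    event = np.where(t_rej <= t_death, 1, 2)            # 1 = rejection, 2 = death before rejection

    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = np.arange(n, 0, -1)                       # subjects still at risk just before each time

    # Naive approach: treat death as censoring and report 1 - KM(t).
    km_surv = np.cumprod(1 - (event == 1) / at_risk)
    naive_rejection = 1 - km_surv

    # Multiple decrement / Aalen-Johansen: overall survival enters each increment.
    overall_surv = np.cumprod(1 - 1.0 / at_risk)        # every subject experiences some event here
    prev_surv = np.concatenate(([1.0], overall_surv[:-1]))
    cif_rejection = np.cumsum(prev_surv * (event == 1) / at_risk)

    t0 = 12.0                                           # compare at 12 months
    i = np.searchsorted(time, t0)
    true_cif = lam_rej / (lam_rej + lam_death) * (1 - np.exp(-(lam_rej + lam_death) * t0))
    print(f"1 - KM:              {naive_rejection[i - 1]:.3f}")
    print(f"multiple decrement:  {cif_rejection[i - 1]:.3f}")
    print(f"theoretical CIF:     {true_cif:.3f}")
    ```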

  4. Generalization of Wilemski-Fixman-Weiss decoupling approximation to the case involving multiple sinks of different sizes, shapes, and reactivities.

    PubMed

    Uhm, Jesik; Lee, Jinuk; Eun, Changsun; Lee, Sangyoub

    2006-08-07

    We generalize the Wilemski-Fixman-Weiss decoupling approximation to calculate the transient rate of absorption of point particles into multiple sinks of different sizes, shapes, and reactivities. As an application we consider the case involving two spherical sinks. We obtain a Laplace-transform expression for the transient rate that is in excellent agreement with computer simulations. The long-time steady-state rate has a relatively simple expression, which clearly shows the dependence on the diffusion constant of the particles and on the sizes and reactivities of sinks, and its numerical result is in good agreement with the known exact result that is given in terms of recursion relations.

  5. Personalized medicine in multiple sclerosis.

    PubMed

    Giovannoni, Gavin

    2017-11-01

    The therapeutic approach in multiple sclerosis (MS) requires a personalized medicine frame beyond the precision medicine concept, which is not currently implementable due to the lack of robust biomarkers and detailed understanding of MS pathogenesis. Personalized medicine demands a patient-focused approach, with disease taxonomy informed by characterization of pathophysiological processes. Important questions concerning MS taxonomy are: when does MS begin? When does the progressive phase begin? Is MS really two or three diseases? Does a therapeutic window truly exist? Newer evidence points to a disease spectrum and a therapeutic lag of several years for benefits to be observed from disease-modifying therapy. For personalized treatment, it is important to ascertain disease stage and any worsening of focal inflammatory lesions over time.

  6. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use, the proposed method can be tuned to improve registration accuracy or to reduce computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.
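
    The core idea of warping each feature-defined segment with its own linear (affine) transform can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes a Delaunay triangulation of matched feature points as the segmentation and solves one 2-D affine transform per triangle.

    ```python
    # Hedged sketch: piecewise-linear (per-triangle affine) mapping between matched feature points.
    import numpy as np
    from scipy.spatial import Delaunay

    def affine_from_triangles(src_tri, dst_tri):
        """Solve the 2x3 affine A such that A @ [x, y, 1]^T maps src vertices to dst vertices."""
        src = np.hstack([src_tri, np.ones((3, 1))])      # 3 x 3
        return np.linalg.solve(src, dst_tri).T           # 2 x 3

    def warp_points(points, src_feats, dst_feats):
        """Map arbitrary 2-D points using the affine of the triangle each point falls in."""
        tri = Delaunay(src_feats)
        simplex = tri.find_simplex(points)
        out = np.full_like(points, np.nan, dtype=float)
        for s in np.unique(simplex[simplex >= 0]):
            idx = tri.simplices[s]
            A = affine_from_triangles(src_feats[idx], dst_feats[idx])
            sel = simplex == s
            hom = np.hstack([points[sel], np.ones((sel.sum(), 1))])
            out[sel] = hom @ A.T
        return out                                        # NaN for points outside the triangulation

    # Toy correspondence: four matched feature points between two camera images.
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
    dst = np.array([[0, 0], [1.1, 0.05], [1.05, 1.0], [-0.05, 0.95]], float)
    print(warp_points(np.array([[0.5, 0.5], [0.25, 0.75]]), src, dst))
    ```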

  7. Evidence and age-related distribution of mtDNA D-loop point mutations in skeletal muscle from healthy subjects and mitochondrial patients.

    PubMed

    Del Bo, Roberto; Bordoni, Andreina; Martinelli Boneschi, Filippo; Crimi, Marco; Sciacco, Monica; Bresolin, Nereo; Scarlato, Guglielmo; Comi, Giacomo Pietri

    2002-10-15

    The progressive accumulation of mitochondrial DNA (mtDNA) alterations, ranging from single mutations to large-scale deletions, in both the normal ageing process and pathological conditions is a relevant phenomenon in terms of frequency and heteroplasmic degree. Recently, two point mutations (A189G and T408A) within the Displacement loop (D-loop) region, the control region for mtDNA replication, were shown to occur in skeletal muscles from aged individuals. We evaluated the presence and the heteroplasmy levels of these two mutations in muscle biopsies from 91 unrelated individuals of different ages (21 healthy subjects and 70 patients affected by mitochondrial encephalomyopathies). Overall, both mutations significantly accumulate with age. However, a different relationship was discovered among the different subgroups of patients: a higher number of A189G positive subjects younger than 53 years was detected in the subgroup of multiple-deleted patients; furthermore, a trend towards an increased risk for the mutations was evidenced among patients carrying multiple deletions when compared to healthy controls. These findings support the idea that a common biological mechanism determines the accumulation of somatic point mutations in the D-loop region, both in healthy subjects and in mitochondrial myopathy patients. At the same time, it appears that disorders caused by mutations of nuclear genes controlling mtDNA replication (the "mtDNA multiple deletions" syndromes) present a temporal advantage to mutate in the D-loop region. This observation may be relevant to the definition of the molecular pathogenesis of these latter syndromes. Copyright 2002 Elsevier Science B.V.

  8. Longitudinal change in physical activity and its correlates in relapsing-remitting multiple sclerosis.

    PubMed

    Motl, Robert W; McAuley, Edward; Sandroff, Brian M

    2013-08-01

    Physical activity is beneficial for people with multiple sclerosis (MS), but this population is largely inactive. There is minimal information on change in physical activity and its correlates for informing the development of behavioral interventions. This study examined change in physical activity and its symptomatic, social-cognitive, and ambulatory or disability correlates over a 2.5-year period of time in people with relapsing-remitting multiple sclerosis. On 6 occasions, each separated by 6 months, people (N=269) with relapsing-remitting multiple sclerosis completed assessments of symptoms, self-efficacy, walking impairment, disability, and physical activity. The participants wore an accelerometer for 7 days. The change in study variables over 6 time points was examined with unconditional latent growth curve modeling. The association among changes in study variables over time was examined using conditional latent growth curve modeling, and the associations were expressed as standardized path coefficients (β). There were significant linear changes in self-reported and objectively measured physical activity, self-efficacy, walking impairment, and disability over the 2.5-year period; there were no changes in fatigue, depression, and pain. The changes in self-reported and objective physical activity were associated with change in self-efficacy (β=.49 and β=.61, respectively), after controlling for other variables and confounders. The primary limitations of the study were the generalizability of results among those with progressive multiple sclerosis and inclusion of a single variable from social-cognitive theory. Researchers should consider designing interventions that target self-efficacy for the promotion and maintenance of physical activity in this population.

  9. The genomic response of skeletal muscle to methylprednisolone using microarrays: tailoring data mining to the structure of the pharmacogenomic time series

    PubMed Central

    DuBois, Debra C; Piel, William H; Jusko, William J

    2008-01-01

    High-throughput data collection using gene microarrays has great potential as a method for addressing the pharmacogenomics of complex biological systems. Similarly, mechanism-based pharmacokinetic/pharmacodynamic modeling provides a tool for formulating quantitative testable hypotheses concerning the responses of complex biological systems. As the response of such systems to drugs generally entails cascades of molecular events in time, a time series design provides the best approach to capturing the full scope of drug effects. A major problem in using microarrays for high-throughput data collection is sorting through the massive amount of data in order to identify probe sets and genes of interest. Due to its inherent redundancy, a rich time series containing many time points and multiple samples per time point allows for the use of less stringent criteria of expression, expression change and data quality for initial filtering of unwanted probe sets. The remaining probe sets can then become the focus of more intense scrutiny by other methods, including temporal clustering, functional clustering and pharmacokinetic/pharmacodynamic modeling, which provide additional ways of identifying the probes and genes of pharmacological interest. PMID:15212590

  10. A possible simplification for the estimation of area under the curve (AUC₀₋₁₂) of enteric-coated mycophenolate sodium in renal transplant patients receiving tacrolimus.

    PubMed

    Fleming, Denise H; Mathew, Binu S; Prasanna, Samuel; Annapandian, Vellaichamy M; John, George T

    2011-04-01

    Enteric-coated mycophenolate sodium (EC-MPS) is widely used in renal transplantation. With a delayed absorption profile, it has not been possible to develop limited sampling strategies to estimate area under the curve (mycophenolic acid [MPA] AUC₀₋₁₂), which have limited time points and are completed in 2 hours. We developed and validated simplified strategies to estimate MPA AUC₀₋₁₂ in an Indian renal transplant population prescribed EC-MPS together with prednisolone and tacrolimus. Intensive pharmacokinetic sampling (17 samples each) was performed in 18 patients to measure MPA AUC₀₋₁₂. The profiles at 1 month were used to develop the simplified strategies and those at 5.5 months used for validation. We followed two approaches. In one, the AUC was calculated using the trapezoidal rule with fewer time points followed by an extrapolation. In the second approach, by stepwise multiple regression analysis, models with different time points were identified and linear regression analysis performed. Using the trapezoidal rule, two equations were developed with six time points and sampling to 6 or 8 hours (8hrAUC[₀₋₁₂exp]) after the EC-MPS dose. On validation, the 8hrAUC(₀₋₁₂exp) compared with total measured AUC₀₋₁₂ had a coefficient of correlation (r²) of 0.872 with a bias and precision (95% confidence interval) of 0.54% (-6.07-7.15) and 9.73% (5.37-14.09), respectively. Second, limited sampling strategies were developed with four, five, six, seven, and eight time points and completion within 2 hours, 4 hours, 6 hours, and 8 hours after the EC-MPS dose. On validation, six, seven, and eight time point equations, all with sampling to 8 hours, had an acceptable r with the total measured MPA AUC₀₋₁₂ (0.817-0.927). In the six, seven, and eight time points, the bias (95% confidence interval) was 3.00% (-4.59 to 10.59), 0.29% (-5.4 to 5.97), and -0.72% (-5.34 to 3.89) and the precision (95% confidence interval) was 10.59% (5.06-16.13), 8.33% (4.55-12.1), and 6.92% (3.94-9.90), respectively. Of the eight simplified approaches, inclusion of seven or eight time points improved the accuracy of the predicted AUC compared with the actual and can be advocated based on the priority of the user.
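
    As a generic illustration of the two approaches (the study's fitted coefficients are not given in the abstract), the sketch below computes a trapezoidal AUC from a limited set of sampling times with a simple log-linear extrapolation from 8 to 12 hours, and shows the general form of a limited-sampling regression equation; all times, concentrations, and coefficients are hypothetical.

    ```python
    # Hedged sketch of limited-sampling AUC estimation; times, concentrations and coefficients are made up.
    import numpy as np

    t = np.array([0, 0.5, 1, 2, 4, 6, 8.0])                # h, a schedule ending at 8 h
    c = np.array([1.2, 1.0, 3.5, 4.8, 2.6, 1.7, 1.1])      # mg/L, illustrative MPA concentrations

    auc_0_8 = np.trapz(c, t)                               # trapezoidal rule over the sampled window

    # One way to extrapolate 8 -> 12 h: assume a log-linear decline from the last two samples.
    k = (np.log(c[-2]) - np.log(c[-1])) / (t[-1] - t[-2])
    auc_8_12 = c[-1] / k * (1 - np.exp(-k * (12 - t[-1])))
    auc_0_12_exp = auc_0_8 + auc_8_12
    print(f"trapezoidal + extrapolation AUC0-12: {auc_0_12_exp:.1f} mg*h/L")

    # General form of a limited-sampling regression equation (coefficients purely hypothetical):
    b0, b = 2.1, np.array([1.0, 0.9, 1.5, 2.2, 3.0, 2.4, 3.8])
    auc_0_12_lss = b0 + b @ c
    print(f"regression-based AUC0-12 estimate:   {auc_0_12_lss:.1f} mg*h/L")
    ```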

  11. Steering and positioning targets for HWIL IR testing at cryogenic conditions

    NASA Astrophysics Data System (ADS)

    Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.

    2006-05-01

    In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.

  12. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and highly detailed surface information on terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to separate terrain points from non-ground points, followed by a procedure that interpolates the selected points into DEM data. The whole procedure is time-consuming and demands large computing resources because of the high point density, a problem addressed by a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which provides a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is well suited to improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired with a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM with a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The two implementations were then compared in terms of efficiency, coding complexity, and performance-cost ratio. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation achieves high performance when memory is sufficient, whereas the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
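
    The Map/Reduce decomposition described here can be sketched in plain Python without any actual Hadoop API: the mapper keys each classified ground point by its DEM cell, the shuffle groups the intermediate pairs by cell, and the reducer interpolates (here, simply averages) the elevations in each cell. Function names, cell size, and the averaging interpolator are assumptions.

    ```python
    # Hedged sketch of the Map/Reduce structure for gridding ground points into a DEM.
    from itertools import groupby

    CELL = 1.0  # DEM grid spacing in metres (assumed)

    def mapper(point):
        """Emit (grid_cell, elevation) for one classified ground point (x, y, z)."""
        x, y, z = point
        yield (int(x // CELL), int(y // CELL)), z

    def reducer(cell, elevations):
        """Simple interpolator: average the elevations falling into one cell."""
        zs = list(elevations)
        return cell, sum(zs) / len(zs)

    def run_mapreduce(points):
        # The shuffle phase: group intermediate pairs by key, as the framework would.
        pairs = sorted(kv for p in points for kv in mapper(p))
        return dict(reducer(cell, (z for _, z in group))
                    for cell, group in groupby(pairs, key=lambda kv: kv[0]))

    ground_points = [(0.2, 0.3, 31.2), (0.7, 0.1, 31.0), (1.4, 0.2, 30.5), (1.6, 1.8, 29.9)]
    print(run_mapreduce(ground_points))   # approximately {(0, 0): 31.1, (1, 0): 30.5, (1, 1): 29.9}
    ```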

  13. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  14. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.

  15. Expanding Lorentz and spectrum corrections to large volumes of reciprocal space for single-crystal time-of-flight neutron diffraction

    DOE PAGES

    Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.; ...

    2016-03-01

    Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.

  16. The role of photographic parameters in laser speckle or particle image displacement velocimetry

    NASA Technical Reports Server (NTRS)

    Lourenco, L.; Krothapalli, A.

    1987-01-01

    The parameters involved in obtaining the multiple exposure photographs in the laser speckle velocimetry method (to record the light scattering by the seeding particles) were optimized. The effects of the type, concentration, and dimensions of the tracer, the exposure conditions (time between exposures, exposure time, and number of exposures), and the sensitivity and resolution of the film on the quality of the final results were investigated, photographing an experimental flow behind an impulsively started circular cylinder. The velocity data were acquired by digital processing of Young's fringes, produced by point-by-point scanning of a photographic negative. Using the optimal photographing conditions, the errors involved in the estimation of the fringe angle and spacing were of the order of 1 percent for the spacing and +/- 1 deg for the fringe orientation. The resulting accuracy in the velocity was of the order of 2-3 percent of the maximum velocity in the field.

  17. Consultation sequencing of a hospital with multiple service points using genetic programming

    NASA Astrophysics Data System (ADS)

    Morikawa, Katsumi; Takahashi, Katsuhiko; Nagasawa, Keisuke

    2018-07-01

    A hospital with one consultation room operated by a physician and several examination rooms is investigated. Scheduled patients and walk-ins arrive at the hospital, each patient goes to the consultation room first, and some of them visit other service points before consulting the physician again. The objective function consists of the sum of three weighted average waiting times. The focus is on the problem of sequencing patients for consultation. To alleviate the stress of waiting, the consultation sequence is displayed. A dispatching rule is used to decide the sequence, and the best rules are explored by genetic programming (GP). The simulation experiments indicate that the rules produced by GP can be reduced to simple permutations of the queues, and that the best permutation depends on the weights used in the objective function. This implies that a balanced allocation of waiting times can be achieved by ordering the priority among the three queues.
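
    The finding that the evolved rules reduce to fixed priority permutations over the queues can be illustrated with a tiny dispatcher sketch. The three-queue decomposition, names, and data below are assumptions; the actual GP terminals and simulation model are not described in the abstract.

    ```python
    # Hedged sketch: a dispatching rule that is just a priority permutation over three queues.
    from collections import deque

    # Three waiting lines in front of the consultation room (an assumed decomposition):
    # 'S' scheduled patients, 'W' walk-ins, 'R' patients returning from an examination room.
    queues = {"S": deque(["s1", "s2"]), "W": deque(["w1"]), "R": deque(["r1", "r2"])}

    def next_patient(queues, permutation=("R", "S", "W")):
        """Serve the first non-empty queue in the given priority order."""
        for key in permutation:
            if queues[key]:
                return queues[key].popleft()
        return None  # consultation room idles

    # Draining the system under the permutation R > S > W:
    while (p := next_patient(queues)) is not None:
        print("consulting", p)
    ```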

  18. Improving Planck calibration by including frequency-dependent relativistic corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quartin, Miguel; Notari, Alessio, E-mail: mquartin@if.ufrj.br, E-mail: notari@ffn.ub.es

    2015-09-01

    The Planck satellite detectors are calibrated in the 2015 release using the 'orbital dipole', which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect has also relativistic time-dependent corrections of relative magnitude 10⁻³, due to coupling with the 'solar dipole' (the motion of the Sun compared to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since currently Planck calibration errors are dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  19. Sensing Atomic Motion from the Zero Point to Room Temperature with Ultrafast Atom Interferometry.

    PubMed

    Johnson, K G; Neyenhuis, B; Mizrahi, J; Wong-Campos, J D; Monroe, C

    2015-11-20

    We sense the motion of a trapped atomic ion using a sequence of state-dependent ultrafast momentum kicks. We use this atom interferometer to characterize a nearly pure quantum state with n=1 phonon and accurately measure thermal states ranging from near the zero-point energy to n̄ ~ 10⁴, with the possibility of extending at least 100 times higher in energy. The complete energy range of this method spans from the ground state to far outside of the Lamb-Dicke regime, where atomic motion is greater than the optical wavelength. Apart from thermometry, these interferometric techniques are useful for characterizing ultrafast entangling gates between multiple trapped ions.

  20. PageMan: an interactive ontology tool to generate, display, and annotate overview graphs for profiling experiments.

    PubMed

    Usadel, Björn; Nagel, Axel; Steinhauser, Dirk; Gibon, Yves; Bläsing, Oliver E; Redestig, Henning; Sreenivasulu, Nese; Krall, Leonard; Hannah, Matthew A; Poree, Fabien; Fernie, Alisdair R; Stitt, Mark

    2006-12-18

    Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large scale experiments have become more common, which investigate multiple time points or sets of mutants or transgenics. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms, which combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects, rather than offering a holistic, integrated analysis. Here we introduce PageMan, a multiplatform, user-friendly, and stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs. PageMan provides a fast overview of single treatments, allows genome-level responses to be compared across several microarray experiments covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing development time-courses, and comparison between species. In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis as well as a tutorial, and is freely available at http://mapman.mpimp-golm.mpg.de/pageman/. PageMan allows multiple microarray experiments to be efficiently condensed into a single page graphical display. The flexible interface allows data to be quickly and easily visualized, facilitating comparisons within experiments and to published experiments, thus enabling researchers to gain a rapid overview of the biological responses in the experiments.

  1. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  2. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  3. Role of Erosion in Shaping Point Bars

    NASA Astrophysics Data System (ADS)

    Moody, J.; Meade, R.

    2012-04-01

    A powerful metaphor in fluvial geomorphology has been that depositional features such as point bars (and other floodplain features) constitute the river's historical memory in the form of uniformly thick sedimentary deposits waiting for the geomorphologist to dissect and interpret the past. For the past three decades, along the channel of Powder River (Montana USA) we have documented (with annual cross-sectional surveys and pit trenches) the evolution of the shape of three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Subsequent erosion has substantially reshaped, at different time scales, the relic sediment deposits of varying age. At the weekly to monthly time scale (i.e., floods from snowmelt or floods from convective or cyclonic storms), the maximum scour depth was computed (by using a numerical model) at locations spaced 1 m apart across the entire point bar for a couple of the largest floods. The maximum predicted scour is about 0.22 m. At the annual time scale, repeated cross-section topographic surveys (25 during 32 years) indicate that net annual erosion at a single location can be as great as 0.5 m, and that the net erosion is greater than net deposition during 8, 16, and 32% of the years for the three point bars. On average, the median annual net erosion was 21, 36, and 51% of the net deposition. At the decadal time scale, an index of point bar preservation often referred to as completeness was defined for each cross section as the percentage of the initial deposit (older than 10 years) that was still remaining in 2011; computations indicate that 19, 41, and 36% of the initial deposits of sediment were eroded. Initial deposits were not uniform in thickness and often represented thicker pods of sediment connected by thin layers of sediment or even isolated pods at different elevations across the point bar in response to multiple floods during a water year. Erosion often was preferential and removed part or all of pods at lower elevations, and in time left what appears to be a random arrangement of sediment pods forming the point bar. Thus, we conclude that the erosional process is as important as the deposition process in shaping the final form of the point bar, and that point bars are not uniformly aggradational or transgressive deposits of sediment in which the age of the deposit increases monotonically downward at all locations across the point bar.

  4. NULL Convention Floating Point Multiplier

    PubMed Central

    Ramachandran, Seshasayanan

    2015-01-01

    Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069

  5. NULL convention floating point multiplier.

    PubMed

    Albert, Anitha Juliette; Ramachandran, Seshasayanan

    2015-01-01

    Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.

  6. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  7. System for Training Aviation Regulations (STAR): Using Multiple Vantage Points To Learn Complex Information through Scenario-Based Instruction and Multimedia Techniques.

    ERIC Educational Resources Information Center

    Chandler, Terrell N.

    1996-01-01

    The System for Training of Aviation Regulations (STAR) provides comprehensive training in understanding and applying Federal aviation regulations. STAR gives multiple vantage points with multimedia presentations and storytelling within four categories of learning environments: overviews, scenarios, challenges, and resources. Discusses the…

  8. Detection of bifurcations in noisy coupled systems from multiple time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Mark S., E-mail: m.s.williamson@exeter.ac.uk; Lenton, Timothy M.

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.

  9. Detection of bifurcations in noisy coupled systems from multiple time series

    NASA Astrophysics Data System (ADS)

    Williamson, Mark S.; Lenton, Timothy M.

    2015-03-01

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.
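
    One way to extract such indicators from multiple time series is to fit a multivariate AR(1) model and inspect the eigenvalues of the fitted propagator: their moduli give per-mode decay rates (critical slowing down) and their complex arguments give oscillation frequencies. The sketch below is a generic illustration under that assumption, not the authors' code.

    ```python
    # Hedged sketch: decay-rate and frequency indicators from a multivariate AR(1) fit.
    import numpy as np

    def ar1_indicators(X, dt=1.0):
        """X: (T, d) array of d simultaneous time series sampled every dt.
        Fits X[t+1] ~= A @ X[t] by least squares and returns per-mode indicators."""
        X = X - X.mean(axis=0)                        # remove the fixed-point estimate
        past, future = X[:-1], X[1:]
        A, *_ = np.linalg.lstsq(past, future, rcond=None)   # future ≈ past @ A
        eigvals = np.linalg.eigvals(A.T)              # propagator eigenvalues
        decay_rate = -np.log(np.abs(eigvals)) / dt    # smaller => slower recovery => nearer a bifurcation
        frequency = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
        return eigvals, decay_rate, frequency

    # Noisy damped oscillator as a two-variable test system (Euler-Maruyama integration).
    rng = np.random.default_rng(2)
    x = np.zeros((5000, 2))
    J = np.array([[0.0, 1.0], [-0.5, -0.05]])         # weakly damped, oscillatory Jacobian
    dt = 0.1
    for t in range(1, len(x)):
        x[t] = x[t - 1] + dt * (J @ x[t - 1]) + 0.05 * np.sqrt(dt) * rng.normal(size=2)

    print(ar1_indicators(x, dt))
    ```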

  10. Anatomy-driven multiple trajectory planning (ADMTP) of intracranial electrodes for epilepsy surgery.

    PubMed

    Sparks, Rachel; Vakharia, Vejay; Rodionov, Roman; Vos, Sjoerd B; Diehl, Beate; Wehner, Tim; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sebastien

    2017-08-01

    Epilepsy is potentially curable with resective surgery if the epileptogenic zone (EZ) can be identified. If non-invasive imaging is unable to elucidate the EZ, intracranial electrodes may be implanted to identify the EZ as well as map cortical function. In current clinical practice, each electrode trajectory is determined by time-consuming manual inspection of preoperative imaging to find a path that avoids blood vessels while traversing appropriate deep and superficial regions of interest (ROIs). We present anatomy-driven multiple trajectory planning (ADMTP) to find safe trajectories from a list of user-defined ROIs within minutes rather than the hours required for manual planning. Electrode trajectories are automatically computed in three steps: (1) Target Point Selection, to identify appropriate target points within each ROI; (2) Trajectory Risk Scoring, to quantify the cumulative distance to critical structures (blood vessels) along each trajectory, defined from the skull entry point to the target point; and (3) Implantation Plan Computation, to determine a feasible combination of low-risk trajectories for all electrodes. ADMTP was evaluated on 20 patients (190 electrodes). ADMTP lowered the quantitative risk score in 83% of electrodes. Qualitative results show ADMTP found suitable trajectories for 70% of electrodes; a similar proportion of manual trajectories were considered suitable. Trajectory suitability for ADMTP was 95% if traversing sulci was not included in the safety criteria. ADMTP is computationally efficient, computing between 7 and 12 trajectories in 54.5 (17.3-191.9) s. In summary, ADMTP efficiently computes safe and surgically feasible electrode trajectories.
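
    Step (2) can be illustrated by sampling points along a candidate trajectory and accumulating a risk score from a precomputed distance-to-vessel map, then keeping the lowest-risk entry point. The risk function, sampling step, and toy volume below are assumptions for illustration, not the published metric.

    ```python
    # Hedged sketch: score candidate trajectories by cumulative proximity to blood vessels.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def risk_score(entry, target, vessel_distance, step_mm=1.0, eps=0.5):
        """Accumulate 1/(distance + eps) at points sampled every step_mm from entry to target."""
        entry, target = np.asarray(entry, float), np.asarray(target, float)
        n = max(2, int(np.linalg.norm(target - entry) / step_mm))
        samples = entry + np.linspace(0, 1, n)[:, None] * (target - entry)
        idx = np.clip(np.round(samples).astype(int), 0, np.array(vessel_distance.shape) - 1)
        d = vessel_distance[idx[:, 0], idx[:, 1], idx[:, 2]]
        return float(np.sum(1.0 / (d + eps)))

    # Toy 1 mm isotropic volume with a single straight vessel; the map gives mm to the nearest vessel.
    vessels = np.zeros((60, 60, 60), bool)
    vessels[30, 30, 20:40] = True
    vessel_distance = distance_transform_edt(~vessels)

    target = (30, 30, 45)
    candidates = [(0, 30, 45), (30, 0, 45), (30, 30, 59)]     # hypothetical skull entry points
    scores = {e: risk_score(e, target, vessel_distance) for e in candidates}
    print(min(scores, key=scores.get), scores)                 # pick the lowest-risk entry
    ```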

  11. Effect of web-supported health education on knowledge of health and healthy-living behaviour of female staff in a Turkish university.

    PubMed

    Nurgul, Keser; Nursan, Cinar; Dilek, Kose; Over, Ozcelik Tijen; Sevin, Altinkaynak

    2015-01-01

    Once limited to face-to-face courses, health education has now moved into the web environment following new developments in information technology. This study was carried out in order to give training to university academic and administrative female staff who have difficulty attending health education planned for specific times and places. The web-supported training focuses on a healthy diet, the importance of physical activity, the harms of smoking, and stress management. The study was carried out at Sakarya University between 2012 and 2013 as a descriptive and quasi-experimental study. The sample consisted of 30 participants who agreed to take part in the survey, filled in the forms, and completed the whole training. The data were collected via a "Personal Information Form", the "Health Promotion Life-Style Profile (HPLSP)", and a "Multiple Choice Questionnaire (MCQ)". There was a statistically significant difference between the total scores from the "Health Promotion Life-Style Profile" and its sub-scales before and after the training (t=3.63, p=0.001). When the multiple choice questionnaire scores before and after the training were compared, the average score was higher after the training (t=8.57, p<0.001). Web-supported health training thus has a positive effect on the healthy-living behaviour of female staff working at a Turkish university and on their knowledge of health promotion.

  12. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    NASA Astrophysics Data System (ADS)

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

    There is an increasing role for high-resolution CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications for their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences, can be important in characterizing emissions. Emissions from large point sources can behave very differently from emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and point to aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.

  13. A scoring algorithm for predicting the presence of adult asthma: a prospective derivation study.

    PubMed

    Tomita, Katsuyuki; Sano, Hiroyuki; Chiba, Yasutaka; Sato, Ryuji; Sano, Akiko; Nishiyama, Osamu; Iwanaga, Takashi; Higashimoto, Yuji; Haraguchi, Ryuta; Tohda, Yuji

    2013-03-01

    To predict the presence of asthma in adult patients with respiratory symptoms, we developed a scoring algorithm using clinical parameters. We prospectively analysed 566 adult outpatients who visited Kinki University Hospital for the first time with complaints of nonspecific respiratory symptoms. Asthma was comprehensively diagnosed by specialists using symptoms, signs, and objective tools including bronchodilator reversibility and/or the assessment of bronchial hyperresponsiveness (BHR). Multiple logistic regression analysis was performed to categorise patients and determine the accuracy of diagnosing asthma. A scoring algorithm using the symptom-sign score was developed, based on diurnal variation of symptoms (1 point), recurrent episodes (2 points), medical history of allergic diseases (1 point), and wheeze sound (2 points). A score of >3 had 35% sensitivity and 97% specificity for discriminating between patients with and without asthma and assigned a high probability of having asthma (accuracy 90%). A score of 1 or 2 points assigned intermediate probability (accuracy 68%). After providing additional data of forced expiratory volume in 1 second/forced vital capacity (FEV1/FVC) ratio <0.7, the post-test probability of having asthma was increased to 93%. A score of 0 points assigned low probability (accuracy 31%). After providing additional data of positive reversibility, the post-test probability of having asthma was increased to 88%. This pragmatic diagnostic algorithm is useful for predicting the presence of adult asthma and for determining the appropriate time for consultation with a pulmonologist.
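
    The scoring rule can be written out directly. The sketch below encodes the weights and probability bands quoted in the abstract, treating the '>3' cut-off as 'a score of 3 or more' (an assumption) and including the two post-test refinements.

    ```python
    # Hedged sketch of the symptom-sign scoring algorithm as summarised in the abstract.
    def symptom_sign_score(diurnal_variation, recurrent_episodes, allergy_history, wheeze):
        """Each argument is a boolean clinical finding; weights are as quoted (1, 2, 1, 2)."""
        return (1 * diurnal_variation + 2 * recurrent_episodes
                + 1 * allergy_history + 2 * wheeze)

    def asthma_probability(score, fev1_fvc_lt_0_7=None, positive_reversibility=None):
        """Map the score to a probability band; optional tests refine the intermediate/low bands."""
        if score >= 3:                        # assumed reading of the '>3' cut-off in the abstract
            return "high (~90% accuracy)"
        if score in (1, 2):
            if fev1_fvc_lt_0_7:
                return "high after FEV1/FVC < 0.7 (post-test probability ~93%)"
            return "intermediate (~68% accuracy)"
        if positive_reversibility:
            return "high after positive bronchodilator reversibility (post-test probability ~88%)"
        return "low (~31% accuracy)"

    s = symptom_sign_score(diurnal_variation=True, recurrent_episodes=True,
                           allergy_history=False, wheeze=False)
    print(s, asthma_probability(s))           # 3 -> high probability band
    ```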

  14. How Many Grid Points are Required for Time Accurate Simulations Scheme Selection and Scale-Discriminant Stabilization

    DTIC Science & Technology

    2015-11-24

    spatial concerns: ¤ how well are gradients captured? (resolution requirement) spatial/temporal concerns: ¤ dispersion and dissipation error...distribution is unlimited. Gradient Capture vs. Resolution: Single Mode FFT: Solution/Derivative: Convergence: f x( )= sin(x) with x∈[0,2π ] df dx...distribution is unlimited. Gradient Capture vs. Resolution: 
 Multiple Modes FFT: Solution/Derivative: Convergence: 6 __ CD02 __ CD04 __ CD06

  15. Results of a multi-media multiple behavior obesity prevention program for adolescents.

    PubMed

    Mauriello, Leanne M; Ciavatta, Mary Margaret H; Paiva, Andrea L; Sherman, Karen J; Castle, Patricia H; Johnson, Janet L; Prochaska, Janice M

    2010-12-01

    This study reports on effectiveness trial outcomes of Health in Motion, a computer-tailored multiple-behavior intervention for adolescents. Using school as the level of assignment, students (n=1800) from eight high schools in four states (RI, TN, MA, and NY) were stratified and randomly assigned to no treatment or a multi-media intervention for physical activity, fruit and vegetable consumption, and limited TV viewing between 2006 and 2007. Intervention effects on continuous outcomes, on movement to action and maintenance stages, and on stability within action and maintenance stages were evaluated using random effects modeling. Effects were most pronounced for fruit and vegetable consumption and for total risks across all time points, and for each behavior immediately post intervention. Co-variation of behavior change occurred within the treatment group, where individuals progressing to action or maintenance for one behavior were 1.4-4.2 times more likely to make similar progress on another behavior. Health in Motion is an innovative, multiple-behavior obesity prevention intervention relevant for all adolescents that relies solely on interactive technology to deliver tailored feedback. The outcomes of the effectiveness trial demonstrate both an ability to initiate behavior change across multiple energy balance behaviors simultaneously and feasibility for ease of dissemination. Copyright © 2010 The Institute For Cancer Prevention. Published by Elsevier Inc. All rights reserved.

  16. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
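
    A loose, illustrative sketch of the general idea (not the published MPk-NN formulation) is to estimate, from the pre-classified training image, the probability of each class conditional on the pattern of neighbouring classes, and to blend these multiple-point probabilities multiplicatively with the k-NN vote fractions. The window, blending rule, and toy data below are assumptions.

    ```python
    # Loose, hedged sketch: blending multiple-point pattern probabilities with k-NN votes.
    import numpy as np
    from collections import Counter, defaultdict

    def mp_probabilities(training_map, offsets=((-1, 0), (1, 0), (0, -1), (0, 1))):
        """Frequency of the centre class conditional on the neighbouring class pattern."""
        counts = defaultdict(Counter)
        rows, cols = training_map.shape
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                pattern = tuple(int(training_map[i + di, j + dj]) for di, dj in offsets)
                counts[pattern][int(training_map[i, j])] += 1
        return {p: {c: n / sum(cnt.values()) for c, n in cnt.items()} for p, cnt in counts.items()}

    def mp_knn_classify(knn_votes, neighbour_pattern, mp_probs, classes):
        """Combine k-NN vote fractions with multiple-point probabilities multiplicatively."""
        k = sum(knn_votes.values())
        mp = mp_probs.get(neighbour_pattern, {})
        scores = {c: (knn_votes.get(c, 0) / k) * mp.get(c, 1e-6) for c in classes}
        return max(scores, key=scores.get)

    # Toy training image (pre-classified map) with two land cover classes.
    train = np.array([[0, 0, 0, 1, 1],
                      [0, 0, 1, 1, 1],
                      [0, 0, 1, 1, 1],
                      [0, 0, 0, 1, 1]])
    probs = mp_probabilities(train)
    votes = {0: 2, 1: 3}                       # spectral k-NN votes for one object (k = 5)
    print(mp_knn_classify(votes, (0, 0, 0, 1), probs, classes=(0, 1)))   # spatial term favours class 0
    ```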

  17. Lasing in optimized two-dimensional iron-nail-shaped rod photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Soon-Yong; Moon, Seul-Ki; Yang, Jin-Kyu, E-mail: jinkyuyang@kongju.ac.kr

    2016-03-15

    We demonstrated lasing at the Γ-point band-edge (BE) modes in optimized two-dimensional iron-nail-shaped rod photonic crystals by optical pulse pumping at room temperature. As the radius of the rod increased quadratically toward the edge of the pattern, the quality factor of the Γ-point BE mode increased up to three times, and the modal volume decreased to 56% compared with the values of the original Γ-point BE mode because of the reduction of the optical loss in the horizontal direction. Single-mode lasing from an optimized iron-nail-shaped rod array with an InGaAsP multiple quantum well embedded in the nail heads was observed at a low threshold pump power of 160 μW. Real-image-based numerical simulations showed that the lasing actions originated from the optimized Γ-point BE mode and agreed well with the measurement results, including the lasing polarization, wavelength, and near-field image.

  18. Systems and Methods for Imaging of Falling Objects

    NASA Technical Reports Server (NTRS)

    Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)

    2014-01-01

    Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
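
    The pipeline summarised above (epipolar geometry, rectification, correspondence, reconstruction) can be sketched with standard OpenCV calls. The helper below assumes calibrated 3x4 projection matrices and already-matched correspondence points for a two-camera case; it is a minimal illustration under those assumptions, not the patented system.

    ```python
    import cv2
    import numpy as np

    def reconstruct_points(pts_cam1, pts_cam2, P1, P2, img_size):
        """Triangulate matched points seen by two cameras into 3-D coordinates.

        pts_cam1, pts_cam2: Nx2 float arrays of matched correspondence points.
        P1, P2: 3x4 projection matrices of the two (calibrated) cameras.
        img_size: (width, height) of the images.
        """
        # Epipolar geometry from the correspondences; RANSAC flags outlier matches.
        F, inlier_mask = cv2.findFundamentalMat(pts_cam1, pts_cam2, cv2.FM_RANSAC)

        # Rectifying homographies that make epipolar lines parallel (they would be
        # applied to the images with cv2.warpPerspective for dense matching).
        ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts_cam1, pts_cam2, F, img_size)

        # Triangulate the inlier correspondences into homogeneous 3-D points.
        in1 = pts_cam1[inlier_mask.ravel() == 1].T.astype(np.float64)  # 2xN
        in2 = pts_cam2[inlier_mask.ravel() == 1].T.astype(np.float64)
        X_h = cv2.triangulatePoints(P1, P2, in1, in2)                  # 4xN
        return (X_h[:3] / X_h[3]).T, (H1, H2)                          # Nx3 points
    ```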

  19. MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.

    PubMed

    Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel

    2016-01-01

    We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
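
    A minimal sketch of the clique step, assuming candidate cluster-to-cluster correspondences scored by inlier counts and a user-supplied compatibility test; it relies on NetworkX's maximum weight clique solver and is not the authors' code.

    ```python
    import networkx as nx

    def select_correspondences(candidates, compatible):
        """Pick a mutually consistent set of cluster correspondences by solving a
        maximum weighted clique problem.

        candidates: list of (cluster_a, cluster_b, n_inliers) tuples (assumed given).
        compatible(i, j): predicate saying whether candidates i and j can coexist
                          (e.g. geometrically consistent, non-overlapping).
        """
        G = nx.Graph()
        for i, (_, _, n_inliers) in enumerate(candidates):
            G.add_node(i, weight=int(n_inliers))       # integer node weights
        for i in range(len(candidates)):
            for j in range(i + 1, len(candidates)):
                if compatible(i, j):
                    G.add_edge(i, j)

        clique, total_weight = nx.max_weight_clique(G, weight="weight")
        return [candidates[i] for i in clique], total_weight
    ```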

  20. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
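
    As a hedged sketch of the kind of statistic reviewed here, the helper below computes a stratified one-sample log-rank Z-value: observed events are compared with the events expected under a null cumulative hazard, summed over strata. The data layout and the simple normal approximation are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def stratified_one_sample_logrank(strata):
        """Stratified one-sample log-rank statistic (approximately N(0,1) under H0).

        strata: list of dicts, one per subpopulation, each with
          'time'   : follow-up times,
          'event'  : 1 if the event was observed, 0 if censored,
          'cumhaz0': callable returning the null cumulative hazard at a given time.
        """
        observed = expected = 0.0
        for s in strata:
            t = np.asarray(s["time"], dtype=float)
            d = np.asarray(s["event"], dtype=float)
            observed += d.sum()                              # events seen
            expected += sum(s["cumhaz0"](ti) for ti in t)    # events expected under H0
        return (observed - expected) / np.sqrt(expected)
    ```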

  1. Multivariate meta-analysis of prognostic factor studies with multiple cut-points and/or methods of measurement.

    PubMed

    Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P

    2015-07-30

    A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  2. Optic probe for multiple angle image capture and optional stereo imaging

    DOEpatents

    Malone, Robert M.; Kaufman, Morris I.

    2016-11-29

    A probe including a multiple lens array is disclosed to measure the velocity distribution of a moving surface along many lines of sight. Laser light directed to the moving surface is reflected back from the surface, Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and, using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so that a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.

  3. Diagnostic performance of HbA1c for diabetes in Arab vs. European populations: a systematic review and meta-analysis.

    PubMed

    Bertran, E A; Berlie, H D; Taylor, A; Divine, G; Jaber, L A

    2017-02-01

    To examine differences in the performance of HbA1c for diagnosing diabetes in Arabs compared with Europeans. The PubMed, Embase and Cochrane library databases were searched for records published between 1998 and 2015. Estimates of sensitivity, specificity and log diagnostic odds ratios for an HbA1c cut-point of 48 mmol/mol (6.5%) were compared between Arabs and Europeans, using a bivariate linear mixed-model approach. For studies reporting multiple cut-points, population-specific summary receiver operating characteristic (SROC) curves were constructed. In addition, sensitivity, specificity and Youden Index were estimated for strata defined by HbA1c cut-point and population type. Database searches yielded 1912 unique records; 618 full-text articles were reviewed. Fourteen studies met the inclusion criteria; hand-searching yielded three additional eligible studies. Three Arab (N = 2880) and 16 European populations (N = 49 127) were included in the analysis. Summary sensitivity and specificity for an HbA1c cut-point of 48 mmol/mol (6.5%) in both populations were 42% (33-51%) and 97% (95-98%), respectively. There was no difference in the area under the SROC curves between Arab and European populations (0.844 vs. 0.847; P = 0.867), suggesting no difference in HbA1c diagnostic accuracy between populations. Multiple cut-point summary estimates stratified by population suggest that Arabs have lower sensitivity and higher specificity at an HbA1c cut-point of 44 mmol/mol (6.2%) compared with European populations. Estimates also suggest similar test performance at cut-points of 44 mmol/mol (6.2%) and 48 mmol/mol (6.5%) for Arabs. Given the low sensitivity of HbA1c in the high-risk Arab American population, we recommend a combination of glucose-based and HbA1c testing to ensure an accurate and timely diagnosis of diabetes. © 2016 Diabetes UK.
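
    The stratum-level indices reported above can be computed directly from individual-level data; the sketch below (argument names are hypothetical) returns sensitivity, specificity and the Youden Index at a chosen HbA1c cut-point against a reference diagnosis.

    ```python
    import numpy as np

    def diagnostic_indices(hba1c_percent, has_diabetes, cut_point=6.5):
        """Sensitivity, specificity and Youden Index of an HbA1c cut-point."""
        hba1c = np.asarray(hba1c_percent, dtype=float)
        disease = np.asarray(has_diabetes, dtype=bool)
        test_positive = hba1c >= cut_point

        sensitivity = np.mean(test_positive[disease])      # true-positive rate
        specificity = np.mean(~test_positive[~disease])    # true-negative rate
        youden = sensitivity + specificity - 1.0
        return sensitivity, specificity, youden
    ```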

  4. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study.

    PubMed

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A; Spruijt-Metz, Donna

    2015-09-01

    Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time-points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Prevalence rates were stable across the two time-points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time-points. Accordingly, the LTA model constrained the conditional probabilities to be equal across the two time-points. In the addiction class, larger conditional probabilities (i.e., 0.40-0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17-0.27) were found for cigarette, alcohol, other drugs, eating, Internet and shopping addiction; and a small conditional probability (0.06) was found for gambling. Persons in an addiction class tend to remain in this addiction class over a one-year period.

  5. Quantifying mechanical properties in a murine fracture healing system using inverse modeling: preliminary work

    NASA Astrophysics Data System (ADS)

    Miga, Michael I.; Weis, Jared A.; Granero-Molto, Froilan; Spagnoli, Anna

    2010-03-01

    Understanding bone remodeling and mechanical property characteristics is important for assessing treatments to accelerate healing or in developing diagnostics to evaluate successful return to function. The murine system whereby mid-diaphyseal tibia fractures are imparted on the subject and fracture healing is assessed at different time points and under different therapeutic conditions is a particularly useful model to study. In this work, a novel inverse geometric nonlinear elasticity modeling framework is proposed that can reconstruct multiple mechanical properties from uniaxial testing data. To test this framework, the Lamé constants were reconstructed within the context of a murine cohort (n=6) where there were no differences in treatment post tibia fracture except that half of the mice were allowed to heal 4 days longer (10 day and 14 day healing time points, respectively). The properties reconstructed were a shear modulus of G = 511.2 ± 295.6 kPa and 833.3 ± 352.3 kPa for the 10 day and 14 day time points, respectively. The second Lamé constant was reconstructed at λ = 1002.9 ± 42.9 kPa and 14893.7 ± 863.3 kPa for the 10 day and 14 day time points, respectively. An unpaired Student t-test was used to test for statistically significant differences among the groups. While the shear modulus did not meet our criteria for significance, the second Lamé constant did at a value p<0.0001. Traditional metrics that are commonly used within the bone fracture healing research community were not found to be statistically significant.
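
    The group comparison step is a standard two-sample t-test; the snippet below only shows the call, using placeholder numbers rather than the study data.

    ```python
    from scipy import stats

    # Placeholder reconstructed values in kPa (illustrative only, not the study data).
    lame2_day10 = [960.1, 1012.4, 1036.2]
    lame2_day14 = [14020.5, 15231.8, 15428.8]

    # Classical unpaired Student t-test; set equal_var=False for Welch's variant.
    t_stat, p_value = stats.ttest_ind(lame2_day10, lame2_day14, equal_var=True)
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
    ```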

  6. Angiogenesis Is Induced and Wound Size Is Reduced by Electrical Stimulation in an Acute Wound Healing Model in Human Skin

    PubMed Central

    Ud-Din, Sara; Sebastian, Anil; Giddings, Pamela; Colthurst, James; Whiteside, Sigrid; Morris, Julie; Nuccitelli, Richard; Pullar, Christine; Baguneid, Mo; Bayat, Ardeshir

    2015-01-01

    Angiogenesis is critical for wound healing. Insufficient angiogenesis can result in impaired wound healing and chronic wound formation. Electrical stimulation (ES) has been shown to enhance angiogenesis. We previously showed that ES enhanced angiogenesis in acute wounds at one time point (day 14). The aim of this study was to further evaluate the role of ES in affecting angiogenesis during the acute phase of cutaneous wound healing over multiple time points. We compared the angiogenic response to wounding in 40 healthy volunteers (divided into two groups and randomised), treated with ES (post-ES), against secondary intention wound healing (control). Biopsy time points monitored were days 0, 3, 7, 10, 14. Objective non-invasive measures and H&E analysis were performed in addition to immunohistochemistry (IHC) and Western blotting (WB). Wound volume was significantly reduced on D7, 10 and 14 post-ES (p = 0.003, p = 0.002, p<0.001 respectively), surface area was reduced on days 10 (p = 0.001) and 14 (p<0.001) and wound diameter reduced on days 10 (p = 0.009) and 14 (p = 0.002). Blood flow increased significantly post-ES on D10 (p = 0.002) and 14 (p = 0.001). Angiogenic markers were up-regulated following ES application; protein analysis by IHC showed an increase (p<0.05) in VEGF-A expression by ES treatment on days 7, 10 and 14 (39%, 27% and 35% respectively) and PLGF expression on days 3 and 7 (40% on both days), compared to normal healing. Similarly, WB demonstrated an increase (p<0.05) in PLGF on days 7 and 14 (51% and 35% respectively). WB studies showed a significant increase of 30% (p>0.05) on day 14 in VEGF-A expression post-ES compared to controls. Furthermore, organisation of granulation tissue was improved on day 14 post-ES. This randomised controlled trial has shown that ES enhanced wound healing by reducing wound dimensions and increasing VEGF-A and PLGF expression in acute cutaneous wounds, which further substantiates the role of ES in up-regulating angiogenesis as observed over multiple time points. This therapeutic approach may have potential application for clinical management of delayed and chronic wounds. PMID:25928356

  7. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
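
    A minimal sketch of the score-map idea, assuming the three information sources are available as 2-D grids; the min-max normalisation, equal default weights and top-n selection are illustrative choices, not the authors' exact scheme.

    ```python
    import numpy as np

    def pilot_point_score_map(facies_uncertainty, sensitivity, data_mismatch,
                              weights=(1.0, 1.0, 1.0)):
        """Combine (i) facies uncertainty, (ii) model-response sensitivity and
        (iii) an observed-data mismatch map into a single score per grid cell."""
        def minmax(a):
            a = np.asarray(a, dtype=float)
            rng = a.max() - a.min()
            return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

        return (weights[0] * minmax(facies_uncertainty)
                + weights[1] * minmax(sensitivity)
                + weights[2] * minmax(data_mismatch))

    def place_pilot_points(score, n_points):
        """Return the (row, col) indices of the n highest-scoring cells."""
        flat = np.argsort(score, axis=None)[::-1][:n_points]
        return np.column_stack(np.unravel_index(flat, score.shape))
    ```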

  8. More than just tracking time: Complex measures of user engagement with an internet-based health promotion intervention.

    PubMed

    Baltierra, Nina B; Muessig, Kathryn E; Pike, Emily C; LeGrand, Sara; Bull, Sheana S; Hightow-Weidman, Lisa B

    2016-02-01

    There has been a rise in internet-based health interventions without a concomitant focus on new methods to measure user engagement and its effect on outcomes. We describe current user tracking methods for internet-based health interventions and offer suggestions for improvement based on the design and pilot testing of healthMpowerment.org (HMP). HMP is a multi-component online intervention for young Black men and transgender women who have sex with men (YBMSM/TW) to reduce risky sexual behaviors, promote healthy living and build social support. The intervention is non-directive, incorporates interactive features, and utilizes a point-based reward system. Fifteen YBMSM/TW (age 20-30) participated in a one-month pilot study to test the usability and efficacy of HMP. Engagement with the intervention was tracked using a customized data capture system and validated with Google Analytics. Usage was measured in time spent (total and across sections) and points earned. Average total time spent on HMP was five hours per person (range 0-13). Total time spent was correlated with total points earned and overall site satisfaction. Measuring engagement in internet-based interventions is crucial to determining efficacy. Multiple methods of tracking helped derive more comprehensive user profiles. Results highlighted the limitations of measures to capture user activity and the elusiveness of the concept of engagement. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Lack of significant effect of bilastine administered at therapeutic and supratherapeutic doses and concomitantly with ketoconazole on ventricular repolarization: results of a thorough QT study (TQTS) with QT-concentration analysis.

    PubMed

    Tyl, Benoît; Kabbaj, Meriam; Azzam, Sara; Sologuren, Ander; Valiente, Román; Reinbolt, Elizabeth; Roupe, Kathryn; Blanco, Nathalie; Wheeler, William

    2012-06-01

    The effect of bilastine on cardiac repolarization was studied in 30 healthy participants during a multiple-dose, triple-dummy, crossover, thorough QT study that included 5 arms: placebo, active control (400 mg moxifloxacin), bilastine at therapeutic and supratherapeutic doses (20 mg and 100 mg once daily, respectively), and bilastine 20 mg administered with ketoconazole 400 mg. Time-matched, triplicate electrocardiograms (ECGs) were recorded with 13 time points extracted predose and 16 extracted over 72 hours post day 4 dosing. Four QT/RR corrections were implemented: QTcB; QTcF; a linear individual correction (QTcNi), the primary correction; and a nonlinear one (QTcNnl). Moxifloxacin was associated with a significant increase in QTcNi at all time points between 1 and 12 hours, inclusively. Bilastine administration at 20 mg and 100 mg had no clinically significant impact on QTc (maximum increase in QTcNi, 5.02 ms; upper confidence limit [UCL] of the 1-sided, 95% confidence interval, 7.87 ms). Concomitant administration of ketoconazole and bilastine 20 mg induced a clinically relevant increase in QTc (maximum increase in QTcNi, 9.3 ms; UCL, 12.16 ms). This result was most likely related to the cardiac effect of ketoconazole because for all time points, bilastine plasma concentrations were lower than those observed following the supratherapeutic dose.

  10. A Prospective Observational Comparison Between Arm and Wrist Blood Pressure During Scheduled Cesarean Delivery.

    PubMed

    Sebbag, Ilana; Massey, Simon R; Albert, Arianne Y K; Dube, Alison; Gunka, Vit; Douglas, M Joanne

    2015-09-01

    Shivering is common during cesarean delivery (CD) under neuraxial anesthesia and may disrupt the measurement of noninvasive blood pressure (BP). BP measured at the wrist may be less affected by shivering. There have been no studies comparing trends in BP measured on the upper arm and wrist. We hypothesized that wrist systolic blood pressure (sBP) would accurately trend with upper arm sBP measurements (agree within a limit of ±10%) in parturients undergoing elective CD under spinal anesthesia or combined spinal-epidural anesthesia. After initiation of spinal anesthesia, BP measurements were obtained simultaneously from the upper arm and wrist on opposite arms. The interval between measurements was 1 to 2 minutes, and data were collected for 20 minutes or until delivery. The primary outcome was agreement in dynamic changes in sBP measurements between the upper arm and the wrist. Bland-Altman plots indicating the levels of agreement between the methods were drawn for baseline measurements, over multiple measurements, and over multiple measurements on percentage change from baseline. Forty-nine patients were recruited and completed the study. The wrist sBP tended to overestimate the upper sBP for both baseline data (sBP bias = 13.4 mm Hg; 95% confidence interval = +10.4 to +16.4 mm Hg) and data obtained over multiple measurements (sBP bias = 12.8 mm Hg; 95% confidence interval = +9.3 to +16.3 mm Hg). For change in sBP from baseline over multiple measurements, the mean difference between the wrist and the arm sBP was -0.2 percentage points (99% limits of agreement -25 to +25 percentage points). The wrist measurement overestimated the reading relative to the upper arm measurement for multiple measurements over time. However, when the time series for each subject was examined for percentage change from baseline, the 2 methods mirrored each other in most cases. Nevertheless, our hypothesis was rejected as the limits of agreement were higher than ±10%. This finding suggests that wrist BP may not be an accurate method of detecting hypotension or hypertension during spinal or combined spinal-epidural anesthesia for CD.
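
    The agreement summary used above (bias plus limits of agreement) reduces to a short calculation on the paired differences; the helper below is a generic Bland-Altman sketch, not the study's analysis code.

    ```python
    import numpy as np

    def bland_altman(wrist_sbp, arm_sbp, z=1.96):
        """Bias (mean difference) and limits of agreement between two BP methods;
        z = 1.96 gives 95% limits, z = 2.58 roughly 99% limits."""
        diff = np.asarray(wrist_sbp, dtype=float) - np.asarray(arm_sbp, dtype=float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - z * sd, bias + z * sd)
    ```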

  11. Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2014-03-01

    We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans which consisted of larger-sized triangular elements resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced and so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is calculated for each vertex point once instead of the number of times that a given vertex appears in multiple triangles. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 times with an increase in the file size of only 1.3 times. This allows a much greater number of smaller triangular elements and improves resolution of the patient graphic without compromising the real-time performance of the DTS and also gives a smoother graphic display for better visualization of the dose distribution.
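
    The reformatting described here amounts to turning a triangle "soup" into an indexed mesh, so that each shared vertex is stored, and dosed, only once. A possible NumPy sketch of that conversion (an illustration, not the DTS code):

    ```python
    import numpy as np

    def index_triangles(triangles):
        """Convert a (T, 3, 3) array of triangle corner coordinates into a list of
        unique vertices plus a (T, 3) face index table; a per-vertex quantity such
        as skin dose is then evaluated once per unique vertex and looked up by index.
        Assumes shared corners have bitwise-identical coordinates."""
        corners = triangles.reshape(-1, 3)                       # (3*T, 3)
        unique_verts, inverse = np.unique(corners, axis=0, return_inverse=True)
        faces = inverse.reshape(-1, 3)                           # (T, 3) indices
        return unique_verts, faces
    ```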

  12. Adolescents' Sedentary Behaviors in Two European Cities.

    PubMed

    Aibar Solana, Alberto; Bois, Julien E; Zaragoza, Javier; Bru, Noëlle; Paillard, Thierry; Generelo, Eduardo

    2015-01-01

    The aim of this study was to determine and compare the correlates of objective sedentary behavior (SB) and nonschool self-reported SB in adolescents from 2 midsized cities, 1 in France (Tarbes) and 1 in Spain (Huesca). Stability of objective SB and nonschool self-reported SB were also assessed at different time points during 1 academic year. Starting with a total of 829 participants and after applying inclusion criteria, objective SB was assessed for 646 adolescents (Mage = 14.30 ± 0.71 years) with GT3X accelerometers for 7 days at 2 time points. Nonschool self-reported SB was measured for 781 adolescents (Mage = 14.46 ± 0.76 years) at 3 time points by means of a questionnaire. Data were analyzed using multiple regression analysis. Gender and ambient temperature emerged as the main statistically significant correlates in all objective SB models, showing higher objective SB levels in girls and lower objective SB levels when ambient temperature was higher. According to nonschool self-reported SB, a gender effect was found in almost all behaviors. Whereas boys spent more time playing with video games as well as games on their mobile phones, girls spent more time studying and using their computers and mobile phones to communicate with each other. The findings showed a statistically significant city effect on study time (Huesca > Tarbes) and video games and telephone communication time (Tarbes > Huesca). Nonschool self-reported SB patterns were different in Huesca and Tarbes. Intervention programs should be adapted to target the reduction of adolescents' SB according to different contexts.

  13. Validating a refractometer to evaluate immunoglobulin G concentration in Jersey colostrum and the effect of multiple freeze-thaw cycles on evaluating colostrum quality.

    PubMed

    Morrill, K M; Robertson, K E; Spring, M M; Robinson, A L; Tyler, H D

    2015-01-01

    The objectives of this study were to (1) validate a method using refractometry to rapidly and accurately determine immunoglobulin (IgG) concentration in Jersey colostrum, (2) determine whether there should be different refractive index (nD) and %Brix cut points for Jersey colostrum, and (3) evaluate the effect of multiple freeze-thaw (FT) cycles on radial immunodiffusion (RID) and a digital refractometer to determine IgG concentration in Jersey colostrum. Samples (n=58; 3L) of colostrum were collected from a dairy in northwestern Iowa. Samples were analyzed within 2h of collection for IgG concentration by RID, %Brix, and nD by refractometer and an estimate of IgG by colostrometer. Samples were frozen, placed on dry ice, and transported to the laboratory at Iowa State University (Ames). Samples arrived frozen and were placed in a -20°C manual-defrost freezer until further analysis. On d 7 (1FT), d 14 (2FT), and 1yr (3FT) all samples were thawed, analyzed for IgG by RID, %Brix, nD by refractometer, and IgG estimate by colostrometer, and frozen until reanalysis at the next time point. Fresh colostrum had a mean (±SD) IgG concentration of 72.91 (±33.53) mg/mL, 21.24% (±4.43) Brix, and nD 1.3669 (±0.0074). Multiple FT cycles did affect IgG as determined by RID and colostrometer reading. The IgG concentrations were greater in fresh and 1FT samples as compared with 2FT and 3FT samples (72.91, 75.38, 67.20, and 67.31mg of IgG/mL, respectively). The colostrometer reading was lower in 1FT samples compared with fresh and 2FT samples. Multiple FT cycles had no effect on nD or %Brix reading. In fresh samples, IgG concentration was moderately correlated with nD (r=0.79), %Brix (r=0.79), and colostrometer reading (r=0.79). Diagnostic test characteristics using the recommended cut point of 1.35966 nD resulted in similar sensitivities for 1FT and 2 FT samples (94.87 and 94.74%, respectively). Cut points of 18 and 19% Brix resulted in the greatest sensitivities (92.31 and 84.62%) and specificity (94.74 and 94.74%, respectively). The 18% Brix cut point resulted in 94.83% of the samples being correctly classified based on IgG concentration. These data support the use of digital refractometer to accurately and rapidly determine IgG concentration in fresh Jersey colostrum. Additionally, these data suggest that IgG concentration determined by RID is affected by multiple FT cycles, whereas estimates obtained by refractometer are not affected by multiple FT cycles. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Robust object matching for persistent tracking with heterogeneous features.

    PubMed

    Guo, Yanlin; Hsu, Steve; Sawhney, Harpreet S; Kumar, Rakesh; Shan, Ying

    2007-05-01

    This paper addresses the problem of matching vehicles across multiple sightings under variations in illumination and camera poses. Since multiple observations of a vehicle are separated in large temporal and/or spatial gaps, thus prohibiting the use of standard frame-to-frame data association, we employ features extracted over a sequence during one time interval as a vehicle fingerprint that is used to compute the likelihood that two or more sequence observations are from the same or different vehicles. Furthermore, since our domain is aerial video tracking, in order to deal with poor image quality and large resolution and quality variations, our approach employs robust alignment and match measures for different stages of vehicle matching. Most notably, we employ a heterogeneous collection of features such as lines, points, and regions in an integrated matching framework. Heterogeneous features are shown to be important. Line and point features provide accurate localization and are employed for robust alignment across disparate views. The challenges of change in pose, aspect, and appearances across two disparate observations are handled by combining a novel feature-based quasi-rigid alignment with flexible matching between two or more sequences. However, since lines and points are relatively sparse, they are not adequate to delineate the object and provide a comprehensive matching set that covers the complete object. Region features provide a high degree of coverage and are employed for continuous frames to provide a delineation of the vehicle region for subsequent generation of a match measure. Our approach reliably delineates objects by representing regions as robust blob features and matching multiple regions to multiple regions using Earth Mover's Distance (EMD). Extensive experimentation under a variety of real-world scenarios and over hundreds of thousands of Confirmatory Identification (CID) trials has demonstrated about 95 percent accuracy in vehicle reacquisition with both visible and Infrared (IR) imaging cameras.
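
    As a deliberately simplified illustration of the region-matching measure, the one-dimensional Wasserstein distance (which coincides with the EMD in one dimension) can compare two scalar appearance signatures; the paper's multi-dimensional blob signatures would require a full EMD solver, so the helper below is only a reduced sketch.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    def region_match_cost(region_a_values, region_b_values):
        """1-D EMD between two region appearance signatures, e.g. raw intensity
        samples of two candidate vehicle regions (a simplification)."""
        return wasserstein_distance(np.ravel(region_a_values),
                                    np.ravel(region_b_values))
    ```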

  15. Dynamic cellular uptake of mixed-monolayer protected nanoparticles.

    PubMed

    Carney, Randy P; Carney, Tamara M; Mueller, Marie; Stellacci, Francesco

    2012-12-01

    Nanoparticles (NPs) are gaining increasing attention for potential application in medicine; consequently, studying their interaction with cells is of central importance. We found that both ligand arrangement and composition on gold nanoparticles play a crucial role in their cellular internalization. In our previous investigation, we showed that 66-34OT nanoparticles coated with stripe-like domains of hydrophobic (octanethiol, OT, 34%) and hydrophilic (11-mercaptoundecane sulfonate, MUS, 66%) ligands permeated through the cellular lipid bilayer via passive diffusion, in addition to endo-/pino-cytosis. Here, we show an analysis of NP internalization by DC2.4, 3T3, and HeLa cells at two temperatures and multiple time points. We study four NPs that differ in their surface structures and ligand compositions and report on their cellular internalization by intracellular fluorescence quantification. Using confocal laser scanning microscopy we have found that all three cell types internalize the 66-34OT NPs more than particles coated only with MUS, or particles coated with a very similar coating but lacking any detectable ligand shell structure, or 'striped' particles but with a different composition (34-66OT) at multiple data points.

  16. Complete N-point superstring disk amplitude II. Amplitude and hypergeometric function structure

    NASA Astrophysics Data System (ADS)

    Mafra, Carlos R.; Schlotterer, Oliver; Stieberger, Stephan

    2013-08-01

    Using the pure spinor formalism in part I (Mafra et al., preprint [1]) we compute the complete tree-level amplitude of N massless open strings and find a strikingly simple and compact form in terms of minimal building blocks: the full N-point amplitude is expressed by a sum over (N-3)! Yang-Mills partial subamplitudes, each multiplying a multiple Gaussian hypergeometric function. While the former capture the space-time kinematics of the amplitude, the latter encode the string effects. This result disguises a lot of structure linking aspects of gauge amplitudes, such as color and kinematics, with properties of generalized Euler integrals. In this part II the structure of the multiple hypergeometric functions is analyzed in detail: their relations to monodromy equations, their minimal basis structure, and methods to determine their poles and transcendentality properties are proposed. Finally, a Gröbner basis analysis provides independent sets of rational functions in the Euler integrals. In contrast to [1], here we use momenta redefined by a factor of i. As a consequence, the signs of the kinematic invariants are flipped.
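
    Schematically, the stated structure can be written as below; the notation (the functions F^sigma, the invariants s_ij and the conventional choice of fixed legs) is chosen here for illustration rather than quoted from the paper.

    ```latex
    \mathcal{A}(1,2,\dots,N) \;=\; \sum_{\sigma \in S_{N-3}}
      F^{\sigma}(s_{ij})\,
      A_{\mathrm{YM}}\bigl(1,\sigma(2,\dots,N-2),\,N-1,\,N\bigr)
    ```

    Here the A_YM are the (N-3)! Yang-Mills partial subamplitudes carrying the space-time kinematics, and each F^sigma is a multiple Gaussian hypergeometric function (a generalized Euler integral) of the kinematic invariants s_ij encoding the string effects.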

  17. Response to depression treatment in the Aging Brain Care Medical Home model.

    PubMed

    LaMantia, Michael A; Perkins, Anthony J; Gao, Sujuan; Austrom, Mary G; Alder, Cathy A; French, Dustin D; Litzelman, Debra K; Cottingham, Ann H; Boustani, Malaz A

    2016-01-01

    To evaluate the effect of the Aging Brain Care (ABC) Medical Home program's depression module on patients' depression severity measurement over time. Retrospective chart review. Public hospital system. Patients enrolled in the ABC Medical Home program between October 1, 2012 and March 31, 2014. The response of 773 enrolled patients who had multiple patient health questionnaire-9 (PHQ-9) scores recorded in the ABC Medical Home program's depression care protocol was evaluated. Repeatedly measured PHQ-9 change scores were the dependent variables in the mixed effects models, and demographic and comorbid medical conditions were tested as potential independent variables while including random effects for time and intercept. Among those patients with baseline PHQ-9 scores >10, there was a significant decrease in PHQ-9 scores over time (P < 0.001); however, the effect differed by gender (P = 0.015). On average, women's scores (4.5 point drop at 1 month) improved faster than men's scores (1 point drop at 1 month). Moreover, both men and women had a predicted drop of 7 points (>50% decline from baseline) on the PHQ-9 at 6 months. These analyses demonstrate evidence for the sustained effectiveness of the ABC Medical Home program at inducing depression remission outcomes while employing clinical staff who required less formal training than in earlier clinical trials.

  18. A Novel Health Evaluation Strategy for Multifunctional Self-Validating Sensors

    PubMed Central

    Shen, Zhengguang; Wang, Qi

    2013-01-01

    The performance evaluation of sensors is very important in practical applications. In this paper, a theory based on multi-variable information fusion is studied to evaluate the health level of multifunctional sensors. A novel conception of health reliability degree (HRD) is defined to indicate a quantitative health level, which is different from traditional, qualitative fault diagnosis. To evaluate the health condition from both local and global perspectives, the HRD of a single sensitive component at multiple time points and of the overall multifunctional sensor at a single time point are defined, respectively. The HRD methodology is built on multi-variable data fusion technology coupled with a grey comprehensive evaluation method. In this method, to acquire the distinct importance of each sensitive unit and the sensitivity of different time points, the information entropy and analytic hierarchy process methods are used, respectively. In order to verify the feasibility of the proposed strategy, a health-evaluation experimental system for multifunctional self-validating sensors was designed. Five different health-level situations were examined. The results show that the proposed method is feasible, that the HRD can quantitatively indicate the health level, and that it responds quickly to performance changes of multifunctional sensors. PMID:23291576
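
    A minimal sketch of the entropy-weighting step mentioned above, deriving the relative importance of each sensitive unit from a (time points x units) matrix of non-negative health indicators; the column-wise normalisation is a common convention assumed here, not necessarily the authors' exact formula.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy-based importance weights for the columns (sensitive units) of X."""
        X = np.asarray(X, dtype=float)
        P = X / X.sum(axis=0, keepdims=True)            # per-column proportions
        k = 1.0 / np.log(X.shape[0])                    # scales entropy into [0, 1]
        logP = np.log(np.where(P > 0, P, 1.0))          # log(1) = 0 where P == 0
        E = -k * np.sum(P * logP, axis=0)               # entropy of each unit
        d = 1.0 - E                                     # degree of divergence
        return d / d.sum()                              # weights sum to one
    ```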

  19. Microbial community changes in hydraulic fracturing fluids and produced water from shale gas extraction.

    PubMed

    Murali Mohan, Arvind; Hartsock, Angela; Bibby, Kyle J; Hammack, Richard W; Vidic, Radisav D; Gregory, Kelvin B

    2013-11-19

    Microbial communities associated with produced water from hydraulic fracturing are not well understood, and their deleterious activity can lead to significant increases in production costs and adverse environmental impacts. In this study, we compared the microbial ecology in prefracturing fluids (fracturing source water and fracturing fluid) and produced water at multiple time points from a natural gas well in southwestern Pennsylvania using 16S rRNA gene-based clone libraries, pyrosequencing, and quantitative PCR. The majority of the bacterial community in prefracturing fluids constituted aerobic species affiliated with the class Alphaproteobacteria. However, their relative abundance decreased in produced water with an increase in halotolerant, anaerobic/facultative anaerobic species affiliated with the classes Clostridia, Bacilli, Gammaproteobacteria, Epsilonproteobacteria, Bacteroidia, and Fusobacteria. Produced water collected at the last time point (day 187) consisted almost entirely of sequences similar to Clostridia and showed a decrease in bacterial abundance by 3 orders of magnitude compared to the prefracturing fluids and produced water samples from earlier time points. Geochemical analysis showed that produced water contained higher concentrations of salts and total radioactivity compared to prefracturing fluids. This study provides evidence of long-term subsurface selection of the microbial community introduced through hydraulic fracturing, which may have significant implications for disinfection as well as reuse of produced water in future fracturing operations.

  20. 78 FR 16561 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... interest. Accordingly, at the $22.00 price point, both the entire amount of B4 and the remaining balance of...-side interest, Exchange systems would cancel the remaining balance of the incoming STPN order that... STPN could execute at multiple price points, the incoming STPN would execute at the multiple prices...

  1. Internal Snapping Hip Syndrome: Incidence of Multiple-Tendon Existence and Outcome After Endoscopic Transcapsular Release.

    PubMed

    Ilizaliturri, Victor M; Suarez-Ahedo, Carlos; Acuña, Marco

    2015-10-01

    To report the frequency of presentation of bifid or multiple iliopsoas tendons in patients who underwent endoscopic release for internal snapping hip syndrome (ISHS) and to compare both groups. A consecutive series of patients with ISHS were treated with endoscopic transcapsular release of the iliopsoas tendon at the central compartment and prospectively followed up. The inclusion criteria were patients with a diagnosis of ISHS with failure of conservative treatment. During the procedure, the presence of a bifid tendon was intentionally looked for. Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores were evaluated preoperatively and at last follow-up. Four patients presented with a bifid tendon and one patient had 3 tendons. At a minimum of 12 months' follow-up, the presence of snapping recurrence was evaluated and the WOMAC scores were compared between both groups. Among 279 hip arthroscopies, 28 patients underwent central transcapsular iliopsoas tendon release. The mean age was 29.25 years (range, 16 to 65 years; 6 left and 22 right hips). Group 1 included 5 patients with multiple tendons; the remaining patients formed group 2 (n = 23). None of the patients presented with ISHS recurrence. The mean WOMAC score in group 1 was 39 points (95% confidence interval [CI], 26.2 to 55.4 points) preoperatively and 73.6 points (95% CI, 68.4 to 79.6 points) at last follow-up. In group 2 the mean WOMAC score was 47.21 points (95% CI, 44.4 to 58.2 points) preoperatively and 77.91 points (95% CI, 67.8 to 83.4 points) at last follow-up. We identified a bifid tendon retrospectively on magnetic resonance arthrograms in 3 of the 5 cases that were found to have multiple tendons during surgery. None of these were recognized before the procedures. In this series the surgeon intentionally looked for multiple tendons, which were found in 17.85% of the cases. Clinical results in patients with single- and multiple-tendon snapping seem to be similarly adequate. However, the possibility of a type II error should be considered given the small number of patients. Level IV. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  2. Low-cost standalone multi-sensor thermometer for long time measurements

    NASA Astrophysics Data System (ADS)

    Kumchaiseemak, Nakorn; Hormwantha, Tongchai; Wungmool, Piyachat; Suwanatus, Suchat; Kanjai, Supaporn; Lertkitthaworn, Thitima; Jutamanee, Kanapol; Luengviriya, Chaiya

    2017-09-01

    We present a portable device for long-time recording of the temperature at multiple measuring points. Thermocouple wires are utilized as the sensors attached to the objects. To minimize the production cost, the measured voltage signals are relayed via a multiplexer to a set of amplifiers and finally to a single microcontroller. The observed temperature and the corresponding date and time, obtained from a real-time clock circuit, are recorded in a memory card for further analysis. The device is powered by a rechargeable battery and placed in a rainproof container, thus it can operate under outdoor conditions. A demonstration of the device usage in a mandarin orange cultivation field of the Royal Project, located in northern Thailand, is illustrated.
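
    A rough software analogue of the logging loop (the read_channel callable is a hypothetical stand-in for the multiplexer/amplifier/microcontroller chain; this is not the device firmware):

    ```python
    import csv
    import time
    from datetime import datetime

    def log_temperatures(read_channel, n_channels=8, period_s=60,
                         path="temperature_log.csv"):
        """Poll each sensor channel in turn and append time-stamped rows to a CSV
        file, mirroring the record-to-memory-card behaviour described above."""
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                row = [datetime.now().isoformat(timespec="seconds")]
                row += [read_channel(ch) for ch in range(n_channels)]
                writer.writerow(row)
                f.flush()                 # keep the file intact across power loss
                time.sleep(period_s)

    # Example with a dummy reader: log_temperatures(lambda ch: 25.0 + 0.1 * ch, 4, 5)
    ```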

  3. Dynamic performance of maximum power point tracking circuits using sinusoidal extremum seeking control for photovoltaic generation

    NASA Astrophysics Data System (ADS)

    Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.

    2011-04-01

    The article studies the dynamic performance of a family of maximum power point tracking (MPPT) circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique, which can be considered a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point to its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on the sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight into the damping degree and settling time of the maximum-seeking waveforms. This article shows the transient waveforms for three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
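
    The sinusoidal ESC loop (perturb, synchronously demodulate, filter, integrate) can be simulated in a few lines; all gains, the perturbation frequency and the first-order filter below are illustrative choices, not the article's parameters or circuit.

    ```python
    import numpy as np

    def esc_mppt(power_curve, v0=20.0, a=0.5, w=2 * np.pi * 50, k=20.0,
                 dt=1e-4, t_end=2.0):
        """Toy simulation of sinusoidal extremum seeking on a static power curve."""
        v_hat = v0
        lp = power_curve(v0)            # low-pass state tracking the DC power level
        tau = 10.0 / w                  # filter time constant (an assumption)
        history = []
        for n in range(int(t_end / dt)):
            t = n * dt
            v = v_hat + a * np.sin(w * t)        # perturbed operating voltage
            p = power_curve(v)                   # "measured" PV power
            lp += dt / tau * (p - lp)            # low-pass filter
            demod = (p - lp) * np.sin(w * t)     # synchronous demodulation
            v_hat += k * demod * dt              # integrator climbs toward the MPP
            history.append(v_hat)
        return np.array(history)

    # Example: a concave power curve whose maximum power point sits at 30 V.
    trace = esc_mppt(lambda v: 100.0 - 0.5 * (v - 30.0) ** 2)
    print(round(trace[-1], 2))   # should settle close to 30
    ```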

  4. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    PubMed Central

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

    Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types: Fluorescence, colorimetric dyes, and bioluminescence; and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424

  5. [Associations of the Employment Status during the First 2 Years Following Medical Rehabilitation and Long Term Occupational Trajectories: Implications for Outcome Measurement].

    PubMed

    Holstiege, J; Kaluscha, R; Jankowiak, S; Krischak, G

    2017-02-01

    Study Objectives: The aim was to investigate the predictive value of the employment status measured in the 6th, 12th, 18th and 24th month after medical rehabilitation for long-term employment trajectories during 4 years. Methods: A retrospective study was conducted based on a 20%-sample of all patients receiving inpatient rehabilitation funded by the German pension fund. Patients aged <62 years who were treated due to musculoskeletal, cardiovascular or psychosomatic disorders during the years 2002-2005 were included and followed for 4 consecutive years. The predictive value of the employment status in 4 predefined months after discharge (6th, 12th, 18th and 24th month) for the total number of months in employment in the 4 years following rehabilitative treatment was analyzed using multiple linear regression. Per time point, separate regression analyses were conducted, including the employment status (employed vs. unemployed) at the respective point in time as an explanatory variable, besides a standard set of additional prognostic variables. Results: A total of 252 591 patients were eligible for study inclusion. The level of explained variance of the regression models increased with the point in time used to measure the employment status included as an explanatory variable. Overall, the R²-measure increased by 30% from the regression model that included the employment status in the 6th month (R²=0.60) to the model that included the work status in the 24th month (R²=0.78). Conclusion: The degree of accuracy in the prognosis of long-term employment biographies increases with the point in time used to measure employment in the first 2 years following rehabilitation. These findings should be taken into consideration for the predefinition of time points used to measure the employment status in future studies. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Floating-Point Units and Algorithms for field-programmable gate arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to offer somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and construct the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.

  7. Are abrupt climate changes predictable?

    NASA Astrophysics Data System (ADS)

    Ditlevsen, Peter

    2013-04-01

    It is taken for granted that the limited predictability of the initial value problem (weather prediction) and the predictability of the statistics are two distinct problems. Lorenz (1975) dubbed these predictability of the first and the second kind, respectively. Predictability of the first kind in a chaotic dynamical system is limited due to the well-known critical dependence on initial conditions. Predictability of the second kind is possible in an ergodic system, where either the dynamics is known and the phase space attractor can be characterized by simulation or the system can be observed for such long times that the statistics can be obtained from temporal averaging, assuming that the attractor does not change in time. For the climate system the distinction between predictability of the first and the second kind is fuzzy. This difficulty in distinguishing between predictability of the first and of the second kind is related to the lack of scale separation between fast and slow components of the climate system. The non-linear nature of the problem furthermore opens the possibility of multiple attractors, or multiple quasi-steady states. As the ice-core records show, the climate has been jumping between different quasi-stationary climates, stadials and interstadials, through the Dansgaard-Oeschger events. Such a jump happens very fast when a critical tipping point has been reached. The question is: Can such a tipping point be predicted? This is a new kind of predictability: the third kind. If the tipping point is reached through a bifurcation, where the stability of the system is governed by some control parameter changing in a predictable way to a critical value, the tipping is predictable. If the sudden jump occurs because internal chaotic fluctuations (noise) push the system across a barrier, the tipping is as unpredictable as the triggering noise. In order to hint at an answer to this question, a careful analysis of the high-temporal-resolution NGRIP isotope record is presented. The result of the analysis points to a fundamental limitation in predictability of the third kind. Reference: P. D. Ditlevsen and S. Johnsen, "Tipping points: Early warning and wishful thinking", Geophys. Res. Lett., 37, 2010

  8. Synaptic pathology in the cerebellar dentate nucleus in chronic multiple sclerosis.

    PubMed

    Albert, Monika; Barrantes-Freer, Alonso; Lohrberg, Melanie; Antel, Jack P; Prineas, John W; Palkovits, Miklós; Wolff, Joachim R; Brück, Wolfgang; Stadelmann, Christine

    2017-11-01

    In multiple sclerosis, cerebellar symptoms are associated with clinical impairment and an increased likelihood of progressive course. Cortical atrophy and synaptic dysfunction play a prominent role in cerebellar pathology and although the dentate nucleus is a predilection site for lesion development, structural synaptic changes in this region remain largely unexplored. Moreover, the mechanisms leading to synaptic dysfunction have not yet been investigated at an ultrastructural level in multiple sclerosis. Here, we report on synaptic changes of dentate nuclei in post-mortem cerebella of 16 multiple sclerosis patients and eight controls at the histological level as well as an electron microscopy evaluation of afferent synapses of the cerebellar dentate and pontine nuclei of one multiple sclerosis patient and one control. We found a significant reduction of afferent dentate synapses in multiple sclerosis, irrespective of the presence of demyelination, and a close relationship between glial processes and dentate synapses. Ultrastructurally, we show autophagosomes containing degradation products of synaptic vesicles within dendrites, residual bodies within intact-appearing axons and free postsynaptic densities opposed to astrocytic appendages. Our study demonstrates loss of dentate afferent synapses and provides, for the first time, ultrastructural evidence pointing towards neuron-autonomous and neuroglia-mediated mechanisms of synaptic degradation in chronic multiple sclerosis. © 2016 International Society of Neuropathology.

  9. Robust estimation of pulse wave transit time using group delay.

    PubMed

    Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C

    2014-03-01

    To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point using the half-maximum of the curves and TT-wave using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and the TT-wave produced comparable results for velocity and flow waveforms, the TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
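
    A hedged sketch of a group-delay style estimate: fit the slope of the cross-spectrum phase of the two flow waveforms over the lowest frequency bins; the band selection and power weighting below are assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def transit_time_group_delay(ascending, descending, fs, n_bins=8):
        """Average transit time (s) of the descending relative to the ascending
        aortic waveform, estimated from the cross-spectrum phase slope.
        Both waveforms are assumed to be sampled at fs Hz with equal length."""
        a = np.asarray(ascending, float) - np.mean(ascending)
        d = np.asarray(descending, float) - np.mean(descending)
        A, D = np.fft.rfft(a), np.fft.rfft(d)
        freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)

        cross = D * np.conj(A)                     # phase(cross) = -w * delay
        band = slice(1, 1 + n_bins)                # skip DC, keep the lowest bins
        phase = np.unwrap(np.angle(cross[band]))
        w = 2 * np.pi * freqs[band]
        weight = np.abs(cross[band])               # weight by cross-spectral power
        slope = np.sum(weight * w * phase) / np.sum(weight * w * w)
        return -slope                              # transit time in seconds
    ```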

  10. Decision tree analysis in subarachnoid hemorrhage: prediction of outcome parameters during the course of aneurysmal subarachnoid hemorrhage using decision tree analysis.

    PubMed

    Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela

    2018-01-19

    OBJECTIVE The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with a difference between the training and test data sets of < 5%. Prediction accuracy for survival on day 1 was 75.2%. The most important differentiating factor was the interleukin-6 (IL-6) level on day 1. Favorable functional outcome, defined as Glasgow Outcome Scale scores of 4 and 5, was observed in 68.6% of patients. Favorable functional outcome at all time points had a prediction accuracy of 71.1% in the training data set, with procalcitonin on day 1 being the most important differentiating factor at all time points. A total of 148 patients (27%) developed VP shunt dependency. The most important differentiating factor was hyperglycemia on admission. CONCLUSIONS The multiple variable analysis capability of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. The decision tree currently generated increases awareness of the early systemic stress response, which is seemingly pertinent for prognostication.
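
    As a hedged illustration of the derivation/validation workflow described above (not the authors' code or data), the sketch below fits a classification tree on a 60/40 split. The file name and column names are hypothetical placeholders for a table with one row per patient.

    ```python
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("asah_cohort.csv")                # hypothetical cohort table
    features = ["il6_day1", "procalcitonin_day1", "admission_glucose", "age"]
    X, y = df[features], df["survival_day1"]

    # 60% derivation (training) / 40% validation (test) split, as in the study design
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.6, random_state=0, stratify=y)

    tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)
    tree.fit(X_train, y_train)
    print("derivation accuracy:", accuracy_score(y_train, tree.predict(X_train)))
    print("validation accuracy:", accuracy_score(y_test, tree.predict(X_test)))
    ```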

  11. Wind farm electrical system

    DOEpatents

    Erdman, William L.; Lettenmaier, Terry M.

    2006-07-04

    An approach to wind farm design using variable speed wind turbines with low pulse number electrical output. The outputs of multiple wind turbines are aggregated to create a high pulse number electrical output at a point of common coupling with a utility grid network. Power quality at each individual wind turbine falls short of utility standards, but the aggregated output at the point of common coupling is within acceptable tolerances for utility power quality. The approach for aggregating low pulse number electrical output from multiple wind turbines relies upon a pad mounted transformer at each wind turbine that performs phase multiplication on the output of each wind turbine. Phase multiplication converts a modified square wave from the wind turbine into a 6 pulse output. Phase shifting of the 6 pulse output from each wind turbine allows the aggregated output of multiple wind turbines to be a 24 pulse approximation of a sine wave. Additional filtering and VAR control are embedded within the wind farm to take advantage of the wind farm's electrical impedance characteristics to further enhance power quality at the point of common coupling.

  12. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
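
    For intuition, here is a minimal binomial N-mixture sketch in the spirit of the framework described above (it is not the authors' point-process model): latent site abundance is Poisson, each animal is detected independently in each image, and the likelihood marginalizes over the unknown abundance. The simulated data and parameter values are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    def neg_log_lik(params, y, n_max=200):
        # params are unconstrained; transform to lambda > 0 and 0 < p < 1
        lam, p = np.exp(params[0]), 1.0 / (1.0 + np.exp(-params[1]))
        N = np.arange(n_max + 1)
        log_prior = poisson.logpmf(N, lam)                     # P(N_i = N)
        ll = 0.0
        for counts in y:                                       # loop over sites
            # P(y_i | N, p) = prod_j Binom(y_ij | N, p), marginalized over N
            log_like_N = binom.logpmf(counts[:, None], N[None, :], p).sum(axis=0)
            ll += np.logaddexp.reduce(log_prior + log_like_N)
        return -ll

    # Simulated example: 50 sites, 3 aerial images (temporal replicates) per site
    rng = np.random.default_rng(1)
    N_true = rng.poisson(8, size=50)
    y = rng.binomial(N_true[:, None], 0.76, size=(50, 3))

    fit = minimize(neg_log_lik, x0=[np.log(5.0), 0.0], args=(y,), method="Nelder-Mead")
    lam_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
    print(f"lambda_hat={lam_hat:.2f}, p_hat={p_hat:.2f}")
    ```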

  13. Predicting fatty acid profiles in blood based on food intake and the FADS1 rs174546 SNP.

    PubMed

    Hallmann, Jacqueline; Kolossa, Silvia; Gedrich, Kurt; Celis-Morales, Carlos; Forster, Hannah; O'Donovan, Clare B; Woolhead, Clara; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Lambrinou, Christina-Paulina; Mavrogianni, Christina; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Godlewska, Magdalena; Surwiłło, Agnieszka; Mathers, John C; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Saris, Wim H M; Manios, Yannis; Martinez, Jose Alfredo; Traczyk, Iwona; Gibney, Michael J; Daniel, Hannelore

    2015-12-01

    A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. We used mathematical modeling to predict levels of PUFA in whole blood, based on multiple hypothesis testing and bootstrapped-LASSO-selected food items, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken, and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, models explained 26-43% of the variability in PUFA concentrations in the training set and 22-33% in the test set. Selecting food items using multiple hypothesis testing is a valuable contribution to determining predictors, as our models' predictive power is higher than that of analogous studies. As a unique feature, we additionally confirmed our models' predictive power on a test set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
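
    A rough sketch of the selection idea (bootstrap the LASSO, keep items that are selected consistently, then confirm predictive power on the later time point) is shown below. The file names, column names, penalty, and stability threshold are all assumptions for illustration, not values from the study.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import Lasso, LinearRegression

    train = pd.read_csv("food4me_timepoint_first.csv")   # hypothetical training file
    test = pd.read_csv("food4me_timepoint_last.csv")     # hypothetical test file
    foods = [c for c in train.columns if c.startswith("intake_")]
    X_tr, y_tr = train[foods + ["rs174546"]].values, train["n3_pufa"].values
    X_te, y_te = test[foods + ["rs174546"]].values, test["n3_pufa"].values

    rng = np.random.default_rng(0)
    selected = np.zeros(X_tr.shape[1])
    for _ in range(200):                                  # bootstrap resamples
        idx = rng.integers(0, len(y_tr), len(y_tr))
        coef = Lasso(alpha=0.05, max_iter=10000).fit(X_tr[idx], y_tr[idx]).coef_
        selected += coef != 0                             # count selections
    keep = selected / 200 >= 0.8                          # assumed stability threshold

    model = LinearRegression().fit(X_tr[:, keep], y_tr)
    print("training R^2:", model.score(X_tr[:, keep], y_tr))
    print("test R^2:", model.score(X_te[:, keep], y_te))
    ```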

  14. Zero-Point Energy Constraint for Unimolecular Dissociation Reactions. Giving Trajectories Multiple Chances To Dissociate Correctly.

    PubMed

    Paul, Amit K; Hase, William L

    2016-01-28

    A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.

  15. Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; O'Toole, Brian E.

    1998-07-01

    An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc.® (SGI) Onyx2™ has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality®, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.

  16. Metabolic profiling of gestational diabetes in obese women during pregnancy.

    PubMed

    White, Sara L; Pasupathy, Dharmintra; Sattar, Naveed; Nelson, Scott M; Lawlor, Debbie A; Briley, Annette L; Seed, Paul T; Welsh, Paul; Poston, Lucilla

    2017-10-01

    Antenatal obesity and associated gestational diabetes (GDM) are increasing worldwide. While pre-existing insulin resistance is implicated in GDM in obese women, the responsible metabolic pathways remain poorly described. Our aim was to compare metabolic profiles in blood of obese pregnant women with and without GDM 10 weeks prior to and at the time of diagnosis by OGTT. We investigated 646 women, of whom 198 developed GDM, in this prospective cohort study, a secondary analysis of UK Pregnancies Better Eating and Activity Trial (UPBEAT), a multicentre randomised controlled trial of a complex lifestyle intervention in obese pregnant women. Multivariate regression analyses adjusted for multiple testing, and accounting for appropriate confounders including study intervention, were performed to compare obese women with GDM with obese non-GDM women. We measured 163 analytes in serum, plasma or whole blood, including 147 from a targeted NMR metabolome, at time point 1 (mean gestational age 17 weeks 0 days) and time point 2 (mean gestational age 27 weeks 5 days, at time of OGTT) and compared them between groups. Multiple significant differences were observed in women who developed GDM compared with women without GDM (false discovery rate corrected p values <0.05). Most were evident prior to diagnosis. Women with GDM demonstrated raised lipids and lipoprotein constituents in VLDL subclasses, greater triacylglycerol enrichment across lipoprotein particles, higher branched-chain and aromatic amino acids and different fatty acid, ketone body, adipokine, liver and inflammatory marker profiles compared with those without GDM. Among obese pregnant women, differences in metabolic profile, including exaggerated dyslipidaemia, are evident at least 10 weeks prior to a diagnosis of GDM in the late second trimester.
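
    The multiple-testing control mentioned above can be illustrated with a Benjamini-Hochberg step; the sketch below uses a simple two-group test on simulated analyte data and is not the confounder-adjusted multivariate regression the authors performed. Group sizes and analyte counts mirror the abstract; the data are synthetic.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    n_analytes = 163
    gdm = rng.normal(0.1, 1.0, size=(198, n_analytes))      # toy GDM group
    non_gdm = rng.normal(0.0, 1.0, size=(448, n_analytes))  # toy non-GDM group

    # one p value per analyte, then false-discovery-rate correction across all tests
    pvals = np.array([mannwhitneyu(gdm[:, k], non_gdm[:, k]).pvalue
                      for k in range(n_analytes)])
    reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(f"{reject.sum()} of {n_analytes} analytes significant at FDR < 0.05")
    ```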

  17. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire the depth information around the object. Then, the 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
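
    As a simplified illustration of combining per-camera clouds into one model (the paper's special registration method itself is not reproduced here), the sketch below assumes each depth camera's pose in a common world frame is already known, e.g. from a prior calibration or registration step, and simply transforms and stacks the points. All names and the toy data are assumptions.

    ```python
    import numpy as np

    def merge_point_clouds(clouds, poses):
        """clouds: list of (N_i, 3) arrays of points in each camera's own frame.
        poses: list of (R, t) pairs, with R a 3x3 rotation and t a length-3
        translation mapping camera coordinates into the common world frame."""
        world = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
        return np.vstack(world)                       # single synthetic cloud

    # Toy usage: two cameras viewing the same unit cube from opposite sides
    rng = np.random.default_rng(0)
    cube = rng.uniform(-0.5, 0.5, size=(1000, 3))     # points in world coordinates
    R180 = np.diag([-1.0, 1.0, -1.0])                 # camera 2 turned 180 deg about y
    clouds = [cube, cube @ R180.T]                    # what each camera "sees"
    poses = [(np.eye(3), np.zeros(3)), (R180, np.zeros(3))]
    model = merge_point_clouds(clouds, poses)
    print(model.shape)                                # (2000, 3) combined model
    ```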

  18. A review of radiative detachment studies in tokamak advanced magnetic divertor configurations

    DOE PAGES

    Soukhanovskii, V. A.

    2017-04-28

    The present vision for a plasma–material interface in the tokamak is an axisymmetric poloidal magnetic X-point divertor. Four tasks are accomplished by the standard poloidal X-point divertor: plasma power exhaust; particle control (D/T and He pumping); reduction of impurity production (source); and impurity screening by the divertor scrape-off layer. A low-temperature, low heat flux divertor operating regime called radiative detachment is viewed as the main option that addresses these tasks for present and future tokamaks. Advanced magnetic divertor configurations have the capability to modify divertor parallel and cross-field transport, radiative and dissipative losses, and detachment front stability. Advanced magnetic divertor configurations are divided into four categories based on their salient qualitative features: (1) multiple standard X-point divertors; (2) divertors with higher order nulls; (3) divertors with multiple X-points; and (4) long poloidal leg divertors (and also with multiple X-points). This paper reviews experiments and modeling in the area of radiative detachment in these advanced magnetic divertor configurations.

  19. A review of radiative detachment studies in tokamak advanced magnetic divertor configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soukhanovskii, V. A.

    The present vision for a plasma–material interface in the tokamak is an axisymmetric poloidal magnetic X-point divertor. Four tasks are accomplished by the standard poloidal X-point divertor: plasma power exhaust; particle control (D/T and He pumping); reduction of impurity production (source); and impurity screening by the divertor scrape-off layer. A low-temperature, low heat flux divertor operating regime called radiative detachment is viewed as the main option that addresses these tasks for present and future tokamaks. Advanced magnetic divertor configurations have the capability to modify divertor parallel and cross-field transport, radiative and dissipative losses, and detachment front stability. Advanced magnetic divertor configurations are divided into four categories based on their salient qualitative features: (1) multiple standard X-point divertors; (2) divertors with higher order nulls; (3) divertors with multiple X-points; and (4) long poloidal leg divertors (and also with multiple X-points). This paper reviews experiments and modeling in the area of radiative detachment in these advanced magnetic divertor configurations.

  20. Reviving common standards in point-count surveys for broad inference across studies

    USGS Publications Warehouse

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.

  1. [Operative treatment strategies for multiple trauma patients : early total care versus damage control].

    PubMed

    Klüter, T; Lippross, S; Oestern, S; Weuster, M; Seekamp, A

    2013-09-01

    The treatment of multiple trauma patients is a great challenge for an interdisciplinary team. After preclinical care and subsequent treatment in the emergency room, the order of interventions is prioritized according to the individual risk stratification. For planning the surgical management it is essential to distinguish between absolutely essential operations that prevent life-threatening situations for the patient and interventions whose indications can be deferred, depending on the general condition of the patient. All interventions need to be performed without causing significant secondary damage, in order to prevent hyperinflammation and a systemic inflammatory response syndrome. The challenge lies in determining the appropriate treatment at the right point in time. In general, early definitive surgery (early total care) is distinguished from the damage control concept.

  2. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
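
    For readers who want a concrete picture, the sketch below runs a classical simultaneous projection-type iteration for a single-set split equality feasibility problem (find x in C, y in Q with Ax = By). Metric projections are firmly nonexpansive, so this is one simple member of the operator class considered, but it is not the parallel or cyclic algorithms proposed in the paper. The toy sets, matrices, and step size are assumptions.

    ```python
    import numpy as np

    def split_equality_iteration(A, B, proj_C, proj_Q, x, y, gamma, n_iter=500):
        for _ in range(n_iter):
            r = A @ x - B @ y                       # residual of the coupling Ax = By
            x = proj_C(x - gamma * A.T @ r)         # project back onto C
            y = proj_Q(y + gamma * B.T @ r)         # project back onto Q
        return x, y

    # Toy usage: C and Q are the nonnegative orthants in R^3 and R^2
    rng = np.random.default_rng(0)
    A, B = rng.normal(size=(4, 3)), rng.normal(size=(4, 2))
    proj = lambda v: np.maximum(v, 0.0)
    # a step size below 2 / (||A||^2 + ||B||^2) keeps the iteration well behaved
    gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
    x, y = split_equality_iteration(A, B, proj, proj, np.ones(3), np.ones(2), gamma)
    print("residual ||Ax - By|| =", np.linalg.norm(A @ x - B @ y))
    ```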

  3. 78 FR 16544 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... interest. Accordingly, at the $22.00 price point, both the entire amount of B4 and the remaining balance of...-STP opposite-side interest, Exchange systems would cancel the remaining balance of the incoming STPN.... If an STPN could execute at multiple price points, the incoming STPN would execute at the multiple...

  4. Assisting People with Disabilities Improves Their Collaborative Pointing Efficiency through the Use of the Mouse Scroll Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2013-01-01

    This study provided that people with multiple disabilities can have a collaborative working chance in computer operations through an Enhanced Multiple Cursor Dynamic Pointing Assistive Program (EMCDPAP, a new kind of software that replaces the standard mouse driver, changes a mouse wheel into a thumb/finger poke detector, and manages mouse…

  5. Assisting People with Multiple Disabilities by Improving Their Computer Pointing Efficiency with an Automatic Target Acquisition Program

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien; Peng, Chin-Ling

    2011-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance through an Automatic Target Acquisition Program (ATAP) and a newly developed mouse driver (i.e. a new mouse driver replaces standard mouse driver, and is able to monitor mouse movement and intercept click action). Initially, both…

  6. Multi-point laser ignition device

    DOEpatents

    McIntyre, Dustin L.; Woodruff, Steven D.

    2017-01-17

    A multi-point laser device comprising a plurality of optical pumping sources. Each optical pumping source is configured to create pumping excitation energy along a corresponding optical path directed through a high-reflectivity mirror and into substantially different locations within the laser media thereby producing atomic optical emissions at substantially different locations within the laser media and directed along a corresponding optical path of the optical pumping source. An output coupler and one or more output lenses are configured to produce a plurality of lasing events at substantially different times, locations or a combination thereof from the multiple atomic optical emissions produced at substantially different locations within the laser media. The laser media is a single continuous media, preferably grown on a single substrate.

  7. Unconstrained Capacities of Quantum Key Distribution and Entanglement Distillation for Pure-Loss Bosonic Broadcast Channels.

    PubMed

    Takeoka, Masahiro; Seshadreesan, Kaushik P; Wilde, Mark M

    2017-10-13

    We consider quantum key distribution (QKD) and entanglement distribution using a single-sender multiple-receiver pure-loss bosonic broadcast channel. We determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. A practical implication of our result is that the demonstrated capacity region drastically improves upon rates achievable using a naive time-sharing strategy, which has been employed in previously demonstrated network QKD systems. We show a simple example of a broadcast QKD protocol overcoming the limit of the point-to-point strategy. Our result is thus an important step toward opening a new framework of network channel-based quantum communication technology.

  8. An Automated Blur Detection Method for Histological Whole Slide Imaging

    PubMed Central

    Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2013-01-01

    Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
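
    A generic way to get the kind of per-region quality feedback described above is a tile-wise sharpness score; the sketch below flags tiles with a low variance of the Laplacian as candidate refocus regions. This is a common heuristic, not the statistical learning method proposed in the paper, and the tile size and threshold are assumed values.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def flag_blurred_tiles(gray_image, tile=512, threshold=50.0):
        """Return (row, col) offsets of tiles whose Laplacian variance falls below
        `threshold`; the tile size and threshold are assumed, scanner-specific values."""
        flagged = []
        h, w = gray_image.shape
        for r in range(0, h - tile + 1, tile):
            for c in range(0, w - tile + 1, tile):
                patch = gray_image[r:r + tile, c:c + tile].astype(float)
                if laplace(patch).var() < threshold:   # low high-frequency content
                    flagged.append((r, c))
        return flagged

    # Toy usage: a noisy (sharp) image with one smoothed (out-of-focus) tile
    rng = np.random.default_rng(0)
    img = rng.normal(128, 20, size=(1024, 1024))
    img[:512, :512] = gaussian_filter(img[:512, :512], sigma=8)  # simulate defocus
    print(flag_blurred_tiles(img))                               # -> [(0, 0)]
    ```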

  9. Sequence analysis of human rhinovirus aspirated from the nasopharynx of patients with relapsing-remitting MS.

    PubMed

    Kneider, M; Bergström, T; Gustafsson, C; Nenonen, N; Ahlgren, C; Nilsson, S; Andersen, O

    2009-04-01

    Upper respiratory infections were reported to trigger multiple sclerosis relapses. A relationship between picornavirus infections and MS relapses was recently reported. To evaluate whether human rhinovirus is associated with multiple sclerosis relapses and whether any particular strain is predominant. Nasopharyngeal fluid was aspirated from 36 multiple sclerosis patients at pre-defined critical time points. Reverse-transcriptase-PCR was performed to detect human rhinovirus-RNA. Positive amplicons were sequenced. We found that rhinovirus RNA was present in 17/40 (43%) of specimens obtained at the onset of a URTI in 19 patients, in 1/21 specimens during convalescence after URTI in 14 patients, in 0/6 specimens obtained in 5 patients on average a week after the onset of an "at risk" relapse, occurring within a window in time from one week before to three weeks after an infection, and in 0/17 specimens obtained after the onset of a "not at risk" relapse not associated with any infection in 12 patients. Fifteen specimens from healthy control persons not associated with URTI were negative. The frequency of HRV presence in URTI was similar to that reported for community infections. Eight amplicons from patients represented 5 different HRV strains. We were unable to reproduce previous findings of association between HRV infections and multiple sclerosis relapses. HRV was not present in nasopharyngeal aspirates obtained during "at risk" or "not at risk" relapses. Sequencing of HRV obtained from patients during URTI did not reveal any strain with predominance in multiple sclerosis.

  10. MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies.

    PubMed

    Kumar, Pankaj; Halama, Anna; Hayat, Shahina; Billing, Anja M; Gupta, Manish; Yousri, Noha A; Smith, Gregory M; Suhre, Karsten

    2015-01-01

    The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), Biosample, Bioprojects, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe "MetaRNA-Seq," a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.

  11. Fluorescence from Multiple Chromophore Hydrogen-Bonding States in the Far-Red Protein TagRFP675.

    PubMed

    Konold, Patrick E; Yoon, Eunjin; Lee, Junghwa; Allen, Samantha L; Chapagain, Prem P; Gerstman, Bernard S; Regmi, Chola K; Piatkevich, Kiryl D; Verkhusha, Vladislav V; Joo, Taiha; Jimenez, Ralph

    2016-08-04

    Far-red fluorescent proteins are critical for in vivo imaging applications, but the relative importance of structure versus dynamics in generating large Stokes-shifted emission is unclear. The unusually red-shifted emission of TagRFP675, a derivative of mKate, has been attributed to the multiple hydrogen bonds with the chromophore N-acylimine carbonyl. We characterized TagRFP675 and point mutants designed to perturb these hydrogen bonds with spectrally resolved transient grating and time-resolved fluorescence (TRF) spectroscopies supported by molecular dynamics simulations. TRF results for TagRFP675 and the mKate/M41Q variant show picosecond time scale red-shifts followed by nanosecond time scale blue-shifts. Global analysis of the TRF spectra reveals spectrally distinct emitting states that do not interconvert during the S1 lifetime. These dynamics originate from photoexcitation of a mixed ground-state population of acylimine hydrogen bond conformers. Strategically tuning the chromophore environment in TagRFP675 might stabilize the most red-shifted conformation and result in a variant with a larger Stokes shift.

  12. The impact of racial discrimination on the health of Australian Indigenous children aged 5-10 years: analysis of national longitudinal data.

    PubMed

    Shepherd, Carrington C J; Li, Jianghong; Cooper, Matthew N; Hopkins, Katrina D; Farrant, Brad M

    2017-07-03

    A growing body of literature highlights that racial discrimination has negative impacts on child health, although most studies have been limited to an examination of direct forms of racism using cross-sectional data. We aim to provide further insights into the impact of early exposure to racism on child health using longitudinal data among Indigenous children in Australia and multiple indicators of racial discrimination. We used data on 1239 Indigenous children aged 5-10 years from Waves 1-6 (2008-2013) of Footprints in Time, a longitudinal study of Indigenous children across Australia. We examined associations between three dimensions of carer-reported racial discrimination (measuring the direct experiences of children and vicarious exposure by their primary carer and family) and a range of physical and mental health outcomes. Analysis was conducted using multivariate logistic regression within a multilevel framework. Two-fifths (40%) of primary carers, 45% of families and 14% of Indigenous children aged 5-10 years were reported to have experienced racial discrimination at some point in time, with 28-40% of these experiencing it persistently (reported at multiple time points). Primary carer and child experiences of racial discrimination were each associated with poor child mental health status (high risk of clinically significant emotional or behavioural difficulties), sleep difficulties, obesity and asthma, but not with child general health or injury. Children exposed to persistent vicarious racial discrimination were more likely to have sleep difficulties and asthma in multivariate models than those with a time-limited exposure. The findings indicate that direct and persistent vicarious racial discrimination are detrimental to the physical and mental health of Indigenous children in Australia, and suggest that prolonged and more frequent exposure to racial discrimination that starts in the early lifecourse can affect multiple domains of health in later life. Tackling and reducing racism should be an integral part of policy and intervention aimed at improving the health of Australian Indigenous children and thereby reducing health disparities between Indigenous and non-Indigenous children.

  13. Formation of multiple energy dispersion of H+, He+, and O+ ions in the inner magnetosphere in response to interplanetary shock

    NASA Astrophysics Data System (ADS)

    Tsuji, H.; Ebihara, Y.; Tanaka, T.

    2017-04-01

    An interplanetary (IP) shock has a large impact on magnetospheric ions. Satellite observations have shown that soon after arrival of the IP shock, the overall intensity of the ions rapidly increases and multiple energy dispersion appears in an energy-time spectrogram of the ions. In order to understand the response of the magnetospheric ions to an IP shock, we have performed test particle simulations under the electric and magnetic fields provided by a global magnetohydrodynamic simulation. We reconstructed the differential flux of H+, He+, and O+ ions at (7, 0, 0) Re in GSM coordinates by means of the semi-Lagrangian (phase space mapping) method. Simulation results show that the ions respond to the IP shock in two different ways. First, the overall intensity of the flux gradually increases at all pitch angles. As the compressional wave propagates tailward, the magnetic field increases, which accelerates the ions due to the gyrobetatron. Second, multiple energy-time dispersion appears in the reconstructed spectrograms of the ion flux. The energy-time dispersion is caused by ions moving toward their mirror points together with the tailward-propagating compressional wave off the equator. The ions are primarily accelerated by the drift betatron under the strong dawnward electric field. The dispersion is absent in the spectrogram of equatorially mirroring ions. The dispersion appears at higher energy for heavier ions. These features are consistent with the satellite observations. Because the acceleration depends on bounce phase, the bounce-averaged approximation is probably invalid for the ions during the interval of geomagnetic sudden commencement. Plain Language Summary: Solar storms can cause a significant compression of the magnetosphere on the dayside. The compression starts at the subsolar point and propagates toward the nightside in the magnetosphere. Some ions bouncing between the Northern Hemisphere and the Southern Hemisphere are found to be accelerated selectively when the ions move together with the propagation of the compressional wave. As a consequence, striped structures appear in the energy versus time spectrum of the ions.

  14. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization, and its optimizing time decreases by about 50%. PMID:28850100

  15. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization, and its optimizing time decreases by about 50%.

  16. Complex dynamics in the Leslie-Gower type of the food chain system with multiple delays

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Song, Zi-Gen; Xu, Jian

    2014-08-01

    In this paper, we present a Leslie-Gower type of food chain system composed of three species: resource, consumer, and predator. The digestion time delays corresponding to consumer-eat-resource and predator-eat-consumer are introduced for more realistic consideration; they are called the resource digestion delay (RDD) and the consumer digestion delay (CDD) for simplicity. Analyzing the corresponding characteristic equation, the stabilities of the boundary and interior equilibrium points are studied. The food chain system exhibits species coexistence for small values of the digestion delays. A large RDD/CDD may destabilize the species coexistence and drive the system dynamics into recurrent blooms or system collapse. Further, the presence of multiple delays can bring the species populations back into stable coexistence. To investigate the effect of time delays on the recurrent blooms of the species populations, the Hopf bifurcation and periodic solutions are investigated in detail in terms of the center manifold reduction and normal form method. Finally, numerical simulations are performed to display some complex dynamics, including multiple periodic solutions and chaotic motion for different values of the system parameters. The system dynamics evolve into chaotic motion through period-doubling bifurcations.

  17. Multiple Steady States of Buoyancy Induced Flow in Cold Water and Their Stability.

    NASA Astrophysics Data System (ADS)

    El-Henawy, Ibrahim Mahmoud

    In Chapters 1 and 2 the physical background and the literature related to buoyancy-induced flows are reviewed. An accurate representation, based upon experimental data, of the motion-causing buoyancy force in the vicinity of maximum density in pure water at low temperatures is used. This representation is an accurate and quite simple formulation due to Gebhart and Mollendorf (1977). Using the representation, we study numerically, in Chapter 3, a model for the laminar, boundary-layer flow arising from natural convection adjacent to a vertical isothermal flat surface submerged in quiescent cold water. The results demonstrate for the first time the existence of multiple steady-state solutions in a natural convection flow. The existence of these new multiple steady-state solutions led to an investigation of their stability. This is carried out in Chapter 4 by a mathematical method different from the usual hydrodynamic stability approach of Lin (1955) and Drazin and Reid (1982). Three real eigenvalue and eigenvector pairs corresponding to the new steady-state solutions were found. Each of these eigenvalues changes its algebraic sign at a particular limit point (point of vertical tangency, nose, knee) in the bifurcation diagrams found in Chapter 3. The results indicate that the new steady-state solutions are unstable and that the previously found steady-state solutions, Carey, Gebhart, and Mollendorf (1980), may be stable.

  18. Pathogens and Disease Play Havoc on the Host Epiproteome-The "First Line of Response" Role for Proteomic Changes Influenced by Disorder.

    PubMed

    Rikkerink, Erik H A

    2018-03-08

    Organisms face stress from multiple sources simultaneously and require mechanisms to respond to these scenarios if they are to survive in the long term. This overview focuses on a series of key points that illustrate how disorder and post-translational changes can combine to play a critical role in orchestrating the response of organisms to the stress of a changing environment. Increasingly, protein complexes are thought of as dynamic multi-component molecular machines able to adapt through compositional, conformational and/or post-translational modifications to control their largely metabolic outputs. These metabolites then feed into cellular physiological homeostasis or the production of secondary metabolites with novel anti-microbial properties. The control of adaptations to stress operates at multiple levels, including the proteome, and the dynamic nature of proteomic changes suggests a parallel with the equally dynamic epigenetic changes at the level of nucleic acids. Given their properties, I propose that some disordered protein platforms specifically enable organisms to sense and react rapidly as the first line of response to change. Using examples from the highly dynamic host-pathogen and host-stress response, I illustrate by example how disordered proteins are key to fulfilling the need for multiple levels of integration of response at different time scales to create robust control points.

  19. CDP-choline as a biological supplement during neurorecovery: a focused review.

    PubMed

    Arenth, Patricia M; Russell, Kathryn C; Ricker, Joseph H; Zafonte, Ross D

    2011-06-01

    Cytidine 5'-diphosphocholine (CDP-choline or citicoline) is a highly bioavailable compound with potential benefits for aiding neural repair and increasing acetylcholine levels in the central and peripheral nervous system. As a result, many researchers have investigated the use of CDP-choline for various types of neurological insult or conditions, including stroke, traumatic brain injury, and Alzheimer disease. Despite the fact that the safety of the compound has been verified across multiple international studies, evidence for efficacy remains less clear. This may be attributable, at least in part, to several issues, including a lack of randomized clinical trials, a lack of availability of the compound in the United States, and statistical power issues in reported trials. In addition, the fact that CDP-choline has multiple potential points of therapeutic impact makes it an exciting treatment option in theory but also complicates the analysis of efficacy in the sense that multiple mechanisms and time points must be evaluated. Although some clinical conditions do not appear to benefit from CDP-choline treatment, the majority of findings to date have suggested at least minor benefits of treatment. In this review we will examine the evidence in the published literature pertaining to use of CDP-choline in rehabilitation populations and briefly consider the work yet to be done. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  20. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.