Sample records for time points analyzed

  1. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, detection methods, the number of tests per unit time, and analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I will describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, the number of tests per unit time, and analytical time and speed per test.

  2. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
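    The substitution the abstract describes, replacing a multiplicative time-domain window with polyphase filter branches ahead of the FFT, can be sketched in floating point. This is an illustrative sketch only: the prototype filter (a windowed sinc), the branch count, and all parameters below are assumptions, not the JPL fixed-point design.

```python
import numpy as np

def polyphase_fft_channelizer(x, n_channels, taps_per_branch=4):
    """Channelize one block of samples with a polyphase filter bank.

    Replaces the multiplicative window of a plain windowed FFT with a
    P-branch FIR filter, removing much of the windowing processing loss.
    Floating-point sketch; the flight design used 16/24-bit fixed point.
    """
    N, P = n_channels, taps_per_branch
    # Prototype low-pass filter: a windowed sinc spanning P*N samples
    # (an illustrative choice, not the flight coefficient set).
    h = np.sinc(np.arange(P * N) / N - P / 2.0) * np.hanning(P * N)
    # Split the block and the filter into P branches of N samples each,
    # weight, sum down the branches, then take a single N-point FFT.
    xb = x[:P * N].reshape(P, N)
    hb = h.reshape(P, N)
    return np.fft.fft((xb * hb).sum(axis=0))

# A pure tone on channel 3 concentrates its power in output bin 3.
n = 32
t = np.arange(4 * n)
spec = polyphase_fft_channelizer(np.exp(2j * np.pi * 3 * t / n), n)
```

Relative to a windowed FFT of the same length, the polyphase sum uses P times more input samples per output spectrum, which is where the reduced processing loss comes from.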

  3. Experimental determination of material damping using vibration analyzer

    NASA Technical Reports Server (NTRS)

    Chowdhury, Mostafiz R.; Chowdhury, Farida

    1990-01-01

    Structural damping is an important dynamic characteristic of engineering materials that helps to damp vibrations by reducing their amplitudes. In this investigation, an experimental method is illustrated to determine the damping characteristics of engineering materials using a dual channel Fast Fourier Transform (FFT) analyzer. A portable Compaq III computer, which houses the analyzer, is used to collect the dynamic responses of three metal rods. Time-domain information is analyzed to obtain the logarithmic decrement of their damping. The damping coefficients are then compared to determine the variation of damping from material to material. The variations of damping from one point to another of the same material, due to a fixed point excitation, and the variable damping at a fixed point due to excitation at different points, are also demonstrated.
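    The time-domain route from a decay trace to a damping coefficient that the abstract describes is the classical logarithmic-decrement calculation, sketched here on synthetic peak amplitudes (not the paper's data):

```python
import math

def log_decrement(peaks):
    """Damping from successive free-decay peak amplitudes.

    The logarithmic decrement is delta = (1/n) * ln(x_0 / x_n); the
    damping ratio follows as zeta = delta / sqrt(4*pi^2 + delta^2).
    """
    n = len(peaks) - 1
    delta = math.log(peaks[0] / peaks[-1]) / n
    zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
    return delta, zeta

# Ideal exponential decay x_k = exp(-0.1 * k) should recover delta = 0.1.
peaks = [math.exp(-0.1 * k) for k in range(5)]
delta, zeta = log_decrement(peaks)
```

Averaging the decrement over several successive peaks, as done here, reduces sensitivity to noise in any single amplitude reading.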

  4. Analysis of Giardin expression during encystation of Giardia lamblia

    USDA-ARS's Scientific Manuscript database

    The present study analyzed giardin transcription in trophozoites and cysts during encystation of Giardia lamblia. Encystment was induced using standard methods, and the numbers of trophozoites and cysts were counted at various time-points during encystation. At all time points, RNA from both stages...

  5. Do They Know Their ABCs? Letter-Name Knowledge of Urban Preschoolers

    ERIC Educational Resources Information Center

    Edwards, Liesl

    2012-01-01

    This study analyzed the performance and growth in letter knowledge and letter identification skills of children across an academic year. Repeated measures analyses of variance were conducted on letter name knowledge measures administered at three time points for all participating children (N = 177) and seven time points for children (n = 106)…

  6. Ruminal bacteria and protozoa composition, digestibility, and amino acid profile determined by multiple hydrolysis times.

    PubMed

    Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E

    2017-09-01

    Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Mind Mirror Projects: A Tool for Integrating Critical Thinking into the English Language Classroom

    ERIC Educational Resources Information Center

    Tully, Matthew M.

    2009-01-01

    Identifying a point of view can be a complex task in any language. By analyzing what characters say, think, and do throughout a story, readers can observe how points of view tend to change over time. Easier said than done, this ability to climb inside the mind of a character can help students as they analyze personalities found in literature,…

  8. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
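    The segmentation machinery the abstract outlines, a measure function giving each segment an item set, a segment-difference cost, and a dynamic program over cut points, can be sketched generically. This is a sketch under assumptions: it takes the union of the time points' item sets as the measure function and the total symmetric difference as the segment difference, which are plausible instances but not necessarily the paper's exact definitions or its efficient algorithms.

```python
def optimal_segmentation(series, k):
    """Optimal k-segmentation of an item-set time series via DP.

    series: list of sets (one item set per time point).
    Segment item set = union of its time points' sets (one possible
    'measure function'); segment difference = sum of symmetric
    differences between that set and each time point's set.
    """
    n = len(series)
    # cost[i][j]: segment difference of the segment series[i..j]
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        union = set()
        for j in range(i, n):
            union |= series[j]
            cost[i][j] = sum(len(union ^ series[t]) for t in range(i, j + 1))
    # best[j][s]: minimum total cost of covering the first j points
    # with s segments; cut[j][s] remembers where the last segment starts.
    best = [[float("inf")] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for s in range(1, k + 1):
            for i in range(s - 1, j):
                c = best[i][s - 1] + cost[i][j - 1]
                if c < best[j][s]:
                    best[j][s], cut[j][s] = c, i
    # Walk the cut table backwards to recover segment boundaries.
    bounds, j, s = [], n, k
    while s > 0:
        i = cut[j][s]
        bounds.append((i, j - 1))
        j, s = i, s - 1
    return best[n][k], bounds[::-1]

series = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}]
total, segs = optimal_segmentation(series, 2)
# Splits between the {a}-like prefix and the {c}-like suffix.
```

The naive cost table here is cubic overall; the point of the paper's algorithms is precisely to compute these segment-difference values more efficiently.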

  9. Twisted Gastrulation as a BMP Modulator during Mammary Gland Development and Tumorigenesis

    DTIC Science & Technology

    2014-05-01

    present at the onset of puberty, roughly 6 weeks of age, but not at later time points, we began our Q-PCR analysis at this time point. Analyzing... rudimentary ductal tree that during puberty, pregnancy and lactation undergoes complex morphological changes (Hens and Wysolmerski, 2005; Hovey and Trott... proliferates and elongates into the developing fat pad forming a rudimentary tree. Development is arrested at this point until puberty. At puberty, terminal

  10. Barriers and dispersal surfaces in minimum-time interception

    NASA Technical Reports Server (NTRS)

    Rajan, N.; Ardema, M. D.

    1982-01-01

    Minimum time interception of a target moving in a horizontal plane is analyzed as a one-player differential game. Dispersal points and points on the barrier are located for a class of pursuit evasion and interception problems. These points are determined by constructing cross sections of the isochrones and hence obtaining the barrier, dispersal, and control level surfaces. The game solution maps the controls as a function of the state within the capture region.

  11. Intrinsic time quantum geometrodynamics

    NASA Astrophysics Data System (ADS)

    Ita, Eyo Eyo; Soo, Chopin; Yu, Hoi-Lai

    2015-08-01

    Quantum geometrodynamics with intrinsic time development and momentric variables is presented. An underlying SU(3) group structure at each spatial point regulates the theory. The intrinsic time behavior of the theory is analyzed, together with its ground state and primordial quantum fluctuations. Cotton-York potential dominates at early times when the universe was small; the ground state naturally resolves Penrose's Weyl curvature hypothesis, and thermodynamic and gravitational "arrows of time" point in the same direction. Ricci scalar potential corresponding to Einstein's general relativity emerges as a zero-point energy contribution. A new set of fundamental commutation relations without Planck's constant emerges from the unification of gravitation and quantum mechanics.

  12. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ~ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.

  13. Single-case experimental design yielded an effect estimate corresponding to a randomized controlled trial.

    PubMed

    Shadish, William R; Rindskopf, David M; Boyajian, Jonathan G

    2016-08-01

    We reanalyzed data from a previous randomized crossover design that administered high or low doses of intravenous immunoglobulin (IgG) to 12 patients with hypogammaglobulinaemia over 12 time points, with crossover after time 6. The objective was to see if results corresponded when analyzed as a set of single-case experimental designs vs. as a usual randomized controlled trial (RCT). Two blinded statisticians independently analyzed results. One analyzed the RCT comparing mean outcomes of group A (high dose IgG) to group B (low dose IgG) at the usual trial end point (time 6 in this case). The other analyzed all 12 time points for the group B patients as six single-case experimental designs analyzed together in a Bayesian nonlinear framework. In the randomized trial, group A [M = 794.93; standard deviation (SD) = 90.48] had significantly higher serum IgG levels at time six than group B (M = 283.89; SD = 71.10) (t = 10.88; df = 10; P < 0.001), yielding a mean difference of MD = 511.05 [standard error (SE) = 46.98]. For the single-case experimental designs, the effect from an intrinsically nonlinear regression was also significant and comparable in size with overlapping confidence intervals: MD = 495.00, SE = 54.41, and t = 495.00/54.41 = 9.10. Subsequent exploratory analyses indicated that how trend was modeled made a difference to these conclusions. The results of single-case experimental designs accurately approximated results from an RCT, although more work is needed to understand the conditions under which this holds. Copyright © 2016 Elsevier Inc. All rights reserved.
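    The RCT arm of this comparison is an ordinary pooled two-sample t test, and the reported MD, SE, and t can be reproduced from the summary statistics alone (assuming six patients per group, consistent with the reported df = 10):

```python
import math

def pooled_t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Two-sample pooled-variance t statistic from summary statistics."""
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    md = m1 - m2
    return md, se, md / se

# Summary statistics reported for the IgG trial at time 6, n = 6 per group.
md, se, t = pooled_t_from_summary(794.93, 90.48, 6, 283.89, 71.10, 6)
# Reproduces MD ~ 511.05, SE ~ 46.98, t ~ 10.88 (df = 10) to rounding.
```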

  14. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
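    The tests discussed here are built on transformations of the Kaplan-Meier estimator evaluated at a fixed time. The point estimate itself can be sketched as follows; the data are a toy example, not from the paper, and the paired/clustered variance corrections that are the paper's contribution are not shown.

```python
def km_survival(times, events, t0):
    """Kaplan-Meier estimate of S(t0) for right-censored data.

    times: observed times; events: 1 = event, 0 = censored.
    At tied times, events are processed before censorings, per the
    usual convention.
    """
    n_at_risk = len(times)
    s = 1.0
    for t, d in sorted(zip(times, events), key=lambda p: (p[0], -p[1])):
        if t > t0:
            break
        if d:
            s *= 1 - 1 / n_at_risk  # survival drops at each event
        n_at_risk -= 1              # censored subjects leave the risk set
    return s

# Events at t = 2 and 5, censoring at 3; S(6) = (4/5) * (2/3) = 8/15.
times = [2, 3, 5, 7, 8]
events = [1, 0, 1, 1, 0]
```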

  15. Spatial Correlation of Solar-Wind Turbulence from Two-Point Measurements

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Milano, L. J.; Dasso, S.; Weygand, J. M.; Smith, C. W.; Kivelson, M. G.

    2005-01-01

    Interplanetary turbulence, the best studied case of low frequency plasma turbulence, is the only directly quantified instance of astrophysical turbulence. Here, magnetic field correlation analysis, using for the first time only proper two-point, single time measurements, provides a key step in unraveling the space-time structure of interplanetary turbulence. Simultaneous magnetic field data from the Wind, ACE, and Cluster spacecraft are analyzed to determine the correlation (outer) scale, and the Taylor microscale near Earth's orbit.

  16. Coupled continuous time-random walks in quenched random environment

    NASA Astrophysics Data System (ADS)

    Magdziarz, M.; Szczotka, W.

    2018-02-01

    We introduce a coupled continuous-time random walk with coupling characteristic of Lévy walks. Additionally, we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of the uncoupled quenched trap model for Lévy flights.

  17. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

    For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2 and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks Time-of-Flight (TOF). The non-commercial systems include a compact double focusing sector (CDFMS) developed at the University of Minnesota, and a quadrupole ion trap (UF-IT) developed at the University of Florida. The System Volume is the volume of the entire system, including the mass analyzer, its associated electronics, the vacuum system, the high vacuum pump, and the rough pump; any ion gauge controllers or other required equipment are also measured, but computers are not included. The Scan Time is the time required for one scan to be acquired and the data to be transferred; it is determined by measuring the time required to acquire a known number of scans and dividing by that number of scans. The Limit of Detection is determined by first performing a zero-span calibration (using a 10-point data set); the limit of detection (LOD) is then defined as 3 times the standard deviation of the zero data set, and an LOD of 10 ppm or less is considered acceptable.
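    The LOD criterion described, three standard deviations of a 10-point zero-span data set, is a one-line calculation; the zero-gas readings below are hypothetical, for illustration only:

```python
import statistics

def limit_of_detection(zero_readings, k=3.0):
    """LOD defined as k sample standard deviations of a zero data set."""
    return k * statistics.stdev(zero_readings)

# Hypothetical 10-point zero-gas calibration readings (ppm).
zeros = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.0, 1.1, 0.9]
lod = limit_of_detection(zeros)
# Acceptable for this application if lod <= 10 ppm.
```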

  18. Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA

    PubMed Central

    de Souza, Alisson C. D.; Fernandes, Marcelo A. C.

    2014-01-01

    This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918

  19. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    PubMed

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
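    The "common practice" the study cautions against, linearly rescaling 60-second cut-points to other epoch lengths after reintegrating counts, can be sketched directly. The cut-point numbers below are illustrative assumptions, not any published set:

```python
def reintegrate(counts_1s, epoch_len):
    """Sum 1-second accelerometer counts into epochs of epoch_len seconds."""
    return [sum(counts_1s[i:i + epoch_len])
            for i in range(0, len(counts_1s) - epoch_len + 1, epoch_len)]

def classify(epoch_counts, epoch_len, cutpoints_60s):
    """Label epochs by linearly scaling 60-s cut-points to epoch_len.

    This rescaling is exactly the conversion the study warns may
    introduce error; thresholds here are illustrative only.
    """
    scale = epoch_len / 60.0
    labels = []
    for c in epoch_counts:
        if c < cutpoints_60s["sedentary"] * scale:
            labels.append("SB")
        elif c < cutpoints_60s["moderate"] * scale:
            labels.append("LPA")
        else:
            labels.append("MVPA")
    return labels

counts = [5, 0, 3, 40, 45, 50, 2, 1, 0, 60, 70, 80]  # 12 s of 1-s counts
cuts = {"sedentary": 100, "moderate": 2296}  # illustrative 60-s thresholds
labels_3s = classify(reintegrate(counts, 3), 3, cuts)
labels_6s = classify(reintegrate(counts, 6), 6, cuts)
```

Even on this tiny trace the two epoch lengths disagree: the 3-second epochs contain MVPA and SB that vanish when the same counts are aggregated into 6-second epochs, which is the kind of non-comparability the study quantifies.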

  20. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the point of view of counting and time-interval statistics. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  1. Modeling seasonal detection patterns for burrowing owl surveys

    Treesearch

    Quresh S. Latif; Kathleen D. Fleming; Cameron Barrows; John T. Rotenberry

    2012-01-01

    To guide monitoring of burrowing owls (Athene cunicularia) in the Coachella Valley, California, USA, we analyzed survey-method-specific seasonal variation in detectability. Point-based call-broadcast surveys yielded high early season detectability that then declined through time, whereas detectability on driving surveys increased through the season. Point surveys...

  2. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.

  3. A Bayesian Approach for Analyzing Longitudinal Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum

    2011-01-01

    This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…

  4. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via adaptively penalizing the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of three orders for body weight (BWE) and body length (BL) and of two orders for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5 for either single or pairwise time-points. Moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.

  5. Dual keel Space Station payload pointing system design and analysis feasibility study

    NASA Technical Reports Server (NTRS)

    Smagala, Tom; Class, Brian F.; Bauer, Frank H.; Lebair, Deborah A.

    1988-01-01

    A Space Station attached Payload Pointing System (PPS) has been designed and analyzed. The PPS is responsible for maintaining fixed payload pointing in the presence of disturbances applied to the Space Station. The payload considered in this analysis is the Solar Optical Telescope. System performance is evaluated via digital time simulations by applying various disturbance forces to the Space Station. The PPS meets the Space Station articulated pointing requirement for all disturbances except Shuttle docking and some centrifuge cases.

  6. A novel dynamic sensing of wearable digital textile sensor with body motion analysis.

    PubMed

    Yang, Chang-Ming; Lin, Zhan-Sheng; Hu, Chang-Lin; Chen, Yu-Shih; Ke, Ling-Yi; Chen, Yin-Rui

    2010-01-01

    This work proposes an innovative textile sensor system to monitor dynamic body movement and human posture by attaching wearable digital sensors to analyze body motion. The proposed system can display and analyze signals when individuals are walking, running, veering around, walking up and down stairs, or falling down, using a wearable monitoring system that reacts to the coordination between the body and feet. Several digital sensor designs are embedded in clothing and wear apparel, and any pressure point can determine which activity is underway. Importantly, the wearable digital sensors and monitoring system allow adaptive, real-time monitoring of posture, velocity, and acceleration, with non-invasive transmission of healthcare data and point-of-care (POC) use in home and non-clinical environments.

  7. CASOAR - An infrared active wave front sensor for atmospheric turbulence analysis

    NASA Astrophysics Data System (ADS)

    Cariou, Jean-Pierre; Dolfi, Agnes

    1992-12-01

    Knowledge of the deformation of every point of a wave front over time allows statistical turbulence parameters to be analyzed and real-time adaptive optics to be designed. An optical instrument was built to meet this need. Integrated in a compact enclosure for experiments on outdoor sites, CASOAR allows the deformations of a wave front to be measured rapidly (100 Hz) and with accuracy (1 deg). CASOAR is an active system: it includes its own light source (a CW CO2 laser), making it self-contained, self-aligned, and insensitive to spurious light rays. After being reflected off a mirror located beyond the atmospheric layer to be analyzed (at a range of several kilometers), the beam is received and detected by coherent mixing. Electronic phase is converted into optical phase and recorded or displayed in real time on a monitor. Experimental results are shown, pointing out the capabilities of this device.

  8. Chatter detection in turning using persistent homology

    NASA Astrophysics Data System (ADS)

    Khasawneh, Firas A.; Munch, Elizabeth

    2016-03-01

    This paper describes a new approach for ascertaining the stability of stochastic dynamical systems in their parameter space by examining their time series using topological data analysis (TDA). We illustrate the approach using a nonlinear delayed model that describes the tool oscillations due to self-excited vibrations in turning. Each time series is generated using the Euler-Maruyama method and a corresponding point cloud is obtained using the Takens embedding. The point cloud can then be analyzed using a tool from TDA known as persistent homology. The results of this study show that the described approach can be used for analyzing datasets of delay dynamical systems generated both from numerical simulation and experimental data. The contributions of this paper include presenting for the first time a topological approach for investigating the stability of a class of nonlinear stochastic delay equations, and introducing a new application of TDA to machining processes.

  9. Quantification of HIV-1 DNA using real-time recombinase polymerase amplification.

    PubMed

    Crannell, Zachary Austin; Rohrman, Brittany; Richards-Kortum, Rebecca

    2014-06-17

    Although recombinase polymerase amplification (RPA) has many advantages for the detection of pathogenic nucleic acids in point-of-care applications, RPA has not yet been implemented to quantify sample concentration using a standard curve. Here, we describe a real-time RPA assay with an internal positive control and an algorithm that analyzes real-time fluorescence data to quantify HIV-1 DNA. We show that DNA concentration and the onset of detectable amplification are correlated by an exponential standard curve. In a set of experiments in which the standard curve and algorithm were used to analyze and quantify additional DNA samples, the algorithm predicted an average concentration within 1 order of magnitude of the correct concentration for all HIV-1 DNA concentrations tested. These results suggest that quantitative RPA (qRPA) may serve as a powerful tool for quantifying nucleic acids and may be adapted for use in single-sample point-of-care diagnostic systems.
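    The quantification idea described, an exponential standard curve relating amplification onset time to starting concentration, amounts to fitting log10(concentration) as a linear function of onset time. The calibration data below are hypothetical, and the fit is a generic least-squares sketch, not the paper's algorithm:

```python
import math

def fit_standard_curve(onsets, concentrations):
    """Least-squares fit of log10(concentration) = m * onset + b."""
    n = len(onsets)
    ys = [math.log10(c) for c in concentrations]
    mx = sum(onsets) / n
    my = sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(onsets, ys))
         / sum((x - mx) ** 2 for x in onsets))
    b = my - m * mx
    return m, b

def quantify(onset, m, b):
    """Predict input copies from an unknown sample's amplification onset."""
    return 10 ** (m * onset + b)

# Hypothetical calibration: onset of detectable amplification (min)
# versus input copy number; earlier onset means more template.
onsets = [4.0, 6.0, 8.0, 10.0]
copies = [1e5, 1e4, 1e3, 1e2]
m, b = fit_standard_curve(onsets, copies)
est = quantify(7.0, m, b)  # falls between the 1e3 and 1e4 calibrators
```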

  10. An Interim Report on a Fifteen Point Plan to Reduce Racial Isolation and Provide Quality Integrated Education.

    ERIC Educational Resources Information Center

    Rochester City School District, NY.

    This is an interim report on the second full year of the "Fifteen Point Plan." Although the majority of comparisons between groups show no statistically significant differences, differentials in achievement may become more noticeable as the program effects are reinforced with time and through cumulative experience. From data analyzed after…

  11. A wide-band, high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Wilck, H. C.; Quirk, M. P.; Grimm, M. J.

    1985-01-01

    A million-channel, 20 MHz-bandwidth, digital spectrum analyzer under development for use in the SETI Sky Survey and other applications in the Deep Space Network is described. The analyzer digitizes an analog input, performs a 2^20-point radix-2 fast Fourier transform, accumulates the output power, and normalizes the output to remove frequency-dependent gain. The effective speed of the real-time hardware is 2.2 GigaFLOPS.
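The processing chain described (FFT, power accumulation, gain normalization) can be sketched in software at toy scale. This is an illustrative radix-2 FFT over 16-point frames, not the analyzer's 2^20-channel hardware pipeline:

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def accumulate_power(frames):
    """Average |FFT|^2 over successive frames, as a spectrum accumulator does."""
    n = len(frames[0])
    acc = [0.0] * n
    for frame in frames:
        spec = fft_radix2(frame)
        for k in range(n):
            acc[k] += abs(spec[k]) ** 2
    return [a / len(frames) for a in acc]

def normalize_gain(power, gain):
    """Divide out a per-channel gain profile to flatten the response."""
    return [p / g for p, g in zip(power, gain)]

# A complex tone in bin 3: after accumulation, power concentrates in channel 3.
frames = [[cmath.exp(2j * cmath.pi * 3 * t / 16) for t in range(16)]
          for _ in range(4)]
power = accumulate_power(frames)
flat = normalize_gain(power, [1.0] * 16)  # unity gain profile for illustration
```

In the real analyzer the gain profile would be measured per channel; here a flat unity profile stands in for it.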

  12. Demonstration of analyzers for multimode photonic time-bin qubits

    NASA Astrophysics Data System (ADS)

    Jin, Jeongwan; Agne, Sascha; Bourgoin, Jean-Philippe; Zhang, Yanbao; Lütkenhaus, Norbert; Jennewein, Thomas

    2018-04-01

    We demonstrate two approaches for unbalanced interferometers as time-bin qubit analyzers for quantum communication, robust against mode distortions and polarization effects as expected from free-space quantum communication systems including wavefront deformations, path fluctuations, pointing errors, and optical elements. Despite strong spatial and temporal distortions of the optical mode of a time-bin qubit, entangled with a separate polarization qubit, we verify entanglement using the Negative Partial Transpose, with a measured visibility of up to 0.85 ± 0.01. The robustness of the analyzers is further demonstrated for various angles of incidence up to 0.2°. The output of the interferometers is coupled into multimode fiber yielding a high system throughput of 0.74. Therefore, these analyzers are suitable and efficient for quantum communication over multimode optical channels.

  13. Quality assessment of expert answers to lay questions about cystic fibrosis from various language zones in Europe: the ECORN-CF project.

    PubMed

    d'Alquen, Daniela; De Boeck, Kris; Bradley, Judy; Vávrová, Věra; Dembski, Birgit; Wagner, Thomas O F; Pfalz, Annette; Hebestreit, Helge

    2012-02-06

    The European Centres of Reference Network for Cystic Fibrosis (ECORN-CF) established an Internet forum which provides the opportunity for CF patients and other interested people to ask experts questions about CF in their mother language. The objectives of this study were to: 1) develop a detailed quality assessment tool to analyze quality of expert answers, 2) evaluate the intra- and inter-rater agreement of this tool, and 3) explore changes in the quality of expert answers over the time frame of the project. The quality assessment tool was developed by an expert panel. Five experts within the ECORN-CF project used the quality assessment tool to analyze the quality of 108 expert answers published on ECORN-CF from six language zones. 25 expert answers were scored at two time points, one year apart. Quality of answers was also assessed at an early and later period of the project. Individual rater scores and group mean scores were analyzed for each expert answer. A scoring system and training manual were developed analyzing two quality categories of answers: content and formal quality. For content quality, the grades based on group mean scores for all raters showed substantial agreement between two time points, however this was not the case for the grades based on individual rater scores. For formal quality the grades based on group mean scores showed only slight agreement between two time points and there was also poor agreement between time points for the individual grades. The inter-rater agreement for content quality was fair (mean kappa value 0.232 ± 0.036, p < 0.001) while only slight agreement was observed for the grades of the formal quality (mean kappa value 0.105 ± 0.024, p < 0.001). The quality of expert answers was rated high (four language zones) or satisfactory (two language zones) and did not change over time. The quality assessment tool described in this study was feasible and reliable when content quality was assessed by a group of raters. 
Within ECORN-CF, the tool will help ensure that CF patients all over Europe have equal possibility of access to high quality expert advice on their illness. © 2012 d’Alquen et al; licensee BioMed Central Ltd.

  14. Time-Dependent Alterations of MMPs, TIMPs and Tendon Structure in Human Achilles Tendons after Acute Rupture

    PubMed Central

    Minkwitz, Susann; Schmock, Aysha; Kurtoglu, Alper; Tsitsilonis, Serafeim; Manegold, Sebastian; Klatte-Schulz, Franka

    2017-01-01

    A balance between matrix metalloproteinases (MMPs) and their inhibitors (TIMPs) is required to maintain tendon homeostasis. Variation in this balance over time might impact on the success of tendon healing. This study aimed to analyze structural changes and the expression profile of MMPs and TIMPs in human Achilles tendons at different time-points after rupture. Biopsies from 37 patients with acute Achilles tendon rupture were taken at surgery and grouped according to time after rupture: early (2–4 days), middle (5–6 days), and late (≥7 days), and intact Achilles tendons served as control. The histological score increased from the early to the late time-point after rupture, indicating the progression towards a more degenerative status. In comparison to intact tendons, qRT-PCR analysis revealed a significantly increased expression of MMP-1, -2, -13, TIMP-1, COL1A1, and COL3A1 in ruptured tendons, whereas TIMP-3 decreased. Comparing the changes over time post rupture, the expression of MMP-9, -13, and COL1A1 significantly increased, whereas MMP-3 and -10 expression decreased. TIMP expression was not significantly altered over time. MMP staining by immunohistochemistry was positive in the ruptured tendons exemplarily analyzed from early and late time-points. The study demonstrates a pivotal contribution of all investigated MMPs and TIMP-1, but a minor role of TIMP-2, -3, and -4, in the early human tendon healing process. PMID:29053586

  15. Four-Year Cross-Lagged Associations between Physical and Mental Health in the Medical Outcomes Study.

    ERIC Educational Resources Information Center

    Hays, Ron D.; And Others

    1994-01-01

    Applied structural equation modeling to evaluation of cross-lagged panel models. Self-reports of physical and mental health at three time points spanning four-year interval were analyzed to illustrate cross-lagged analysis methodology. Data were analyzed from 856 patients with hypertension, diabetes, heart disease, or depression. Cross-lagged…

  16. Numerical modeling of a point-source image under relative motion of radiation receiver and atmosphere

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.

    1994-02-01

    A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.

  17. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    PubMed

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means of controlling non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulation of total nitrogen (TN) loss intensity for all 38 subbasins, spatial distribution characteristics of nitrogen loss and critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, with the degree of spatial differentiation of nitrogen loss in the order monthly average > yearly average > rainstorm flood process. The TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was always the dominant factor affecting the spatial distribution of nitrogen loss, while precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm-flood events on non-fertilization dates. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  18. Long-range correlations in time series generated by time-fractional diffusion: A numerical study

    NASA Astrophysics Data System (ADS)

    Barbieri, Davide; Vivoli, Alessandro

    2005-09-01

    Time series models showing power law tails in autocorrelation functions are common in econometrics. A special non-Markovian model for such kind of time series is provided by the random walk introduced by Gorenflo et al. as a discretization of time fractional diffusion. The time series so obtained are analyzed here from a numerical point of view in terms of autocorrelations and covariance matrices.

  19. A point-of-care chemistry test for reduction of turnaround and clinical decision time.

    PubMed

    Lee, Eui Jung; Shin, Sang Do; Song, Kyoung Jun; Kim, Seong Chun; Cho, Jin Seong; Lee, Seung Chul; Park, Ju Ok; Cha, Won Chul

    2011-06-01

    Our study compared clinical decision time between patients managed with a point-of-care chemistry test (POCT) and patients managed with the traditional central laboratory test (CLT). This was a randomized controlled multicenter trial in the emergency departments (EDs) of 5 academic teaching hospitals. We randomly assigned patients to POCT or CLT stratified by the Emergency Severity Index. A POCT chemistry analyzer (Piccolo; Abaxis, Inc, Union City, Calif), which is able to test liver panel, renal panel, pancreas enzymes, lipid panel, electrolytes, and blood gases, was set up in each ED. The primary and secondary end points were turnaround time and door-to-clinical-decision time. A total of 2323 patients were randomly assigned to the POCT group (n = 1167) or to the CLT group (n = 1156). All of the basic characteristics were similar in the 2 groups. The turnaround time (median, interquartile range [IQR]) of the POCT group was shorter than that of the CLT group (14, 12-19 versus 55, 45-69 minutes; P < .0001). The median (IQR) door-to-clinical-decision time was also shorter in the POCT compared with the CLT group (46, 33-61 versus 86, 68-107 minutes; P < .0001). The proportion of patients who had new decisions within 60 minutes was 72.8% for the POCT group and 12.5% for the CLT group (P < .0001). A POCT chemistry analyzer in the ED shortens the test turnaround and ED clinical decision times compared with CLT. Copyright © 2011 Elsevier Inc. All rights reserved.
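The medians and interquartile ranges reported above can be computed directly with the standard library. The sample turnaround times below are hypothetical, chosen only to mirror the shape of the POCT-versus-CLT comparison:

```python
import statistics

def median_iqr(samples):
    """Return (median, Q1, Q3) via statistics.quantiles (inclusive method)."""
    q1, q2, q3 = statistics.quantiles(samples, n=4, method="inclusive")
    return q2, q1, q3

# Hypothetical turnaround times in minutes for the two arms.
poct = [12, 13, 14, 14, 15, 18, 19, 22]
clt = [45, 50, 52, 55, 58, 64, 69, 80]

m_poct, q1_poct, q3_poct = median_iqr(poct)
m_clt, q1_clt, q3_clt = median_iqr(clt)
```

The same routine applied per group reproduces the "median (IQR)" summary style used in the abstract; a rank-based test such as Mann-Whitney would supply the P value.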

  20. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  1. SGR 1822-1606: Constant Spin Period

    NASA Astrophysics Data System (ADS)

    Serim, M.; Baykal, A.; Inam, S. C.

    2011-08-01

    We have analyzed the light curve of the new source SGR 1822-1606 (Cummings et al., GCN 12159) using real-time data from RXTE observations. We have extracted light curves for 11 pointings spanning about 20 days and employed pulse timing analysis using the harmonic representation of pulses. Using the cross-correlation of the harmonic representations of pulses, we have obtained pulse arrival times.

  2. Platelet-activated clotting time does not measure platelet reactivity during cardiac surgery.

    PubMed

    Shore-Lesserson, L; Ammar, T; DePerio, M; Vela-Cantos, F; Fisher, C; Sarier, K

    1999-08-01

    Platelet dysfunction is a major contributor to bleeding after cardiopulmonary bypass (CPB), yet it remains difficult to diagnose. A point-of-care monitor, the platelet-activated clotting time (PACT), measures accelerated shortening of the kaolin-activated clotting time by addition of platelet activating factor. The authors sought to evaluate the clinical utility of the PACT by conducting serial measurements of PACT during cardiac surgery and correlating postoperative measurements with blood loss. In 50 cardiac surgical patients, blood was sampled at 10 time points to measure PACT. Simultaneously, platelet reactivity was measured by the thrombin receptor agonist peptide-induced expression of P-selectin, using flow cytometry. These tests were temporally analyzed. PACT values, P-selectin expression, and other coagulation tests were analyzed for correlation with postoperative chest tube drainage. PACT and P-selectin expression were maximally reduced after protamine administration. Changes in PACT did not correlate with changes in P-selectin expression at any time interval. Total 8-h chest tube drainage did not correlate with any coagulation test at any time point except with P-selectin expression after protamine administration (r = -0.4; P = 0.03). The platelet dysfunction associated with CPB may be a result of depressed platelet reactivity, as shown by thrombin receptor activating peptide-induced P-selectin expression. Changes in PACT did not correlate with blood loss or with changes in P-selectin expression suggesting that PACT is not a specific measure of platelet reactivity.

  3. A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido

    2012-02-01

    Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the time points yields improved segmentation compared to independent analysis of the two time points.

  4. [Development of the automatic dental X-ray film processor].

    PubMed

    Bai, J; Chen, H

    1999-07-01

    This paper introduces a multiple-point technique for detecting the density of dental X-ray films. With this infrared multiple-point detection technique, a single-chip microcomputer control system analyzes the effectiveness of film developing in real time in order to achieve a good image. Based on this technology, we designed an intelligent automatic dental X-ray film processor.

  5. Change in the Embedding Dimension as an Indicator of an Approaching Transition

    PubMed Central

    Neuman, Yair; Marwan, Norbert; Cohen, Yohai

    2014-01-01

    Predicting a transition point in behavioral data should take into account the complexity of the signal being influenced by contextual factors. In this paper, we propose to analyze changes in the embedding dimension as contextual information indicating an approaching transition point, a method called OPtimal Embedding tRANsition Detection (OPERAND). Three texts were processed and translated to time series of emotional polarity. It was found that changes in the embedding dimension preceded transition points in the data. These preliminary results encourage further research into changes in the embedding dimension as generic markers of an approaching transition point. PMID:24979691

  6. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Byung Chan; Park, Seong-Ook

    In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated based on the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference of average values between 6 min and lesser periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
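The comparison between the 6-min reference average and a shorter averaging period can be sketched as follows. The synthetic field record (a constant level plus a slow sinusoidal drift) is an illustration, not the paper's measurement data:

```python
import math

def time_average(samples, seconds, sample_period=1.0):
    """Average the first `seconds` worth of samples taken every sample_period s."""
    n = int(seconds / sample_period)
    window = samples[:n]
    return sum(window) / len(window)

# Synthetic field-strength record, 1 sample/s for 6 min:
# a constant level of 1.0 plus a slow drift with a 2-min period.
samples = [1.0 + 0.05 * math.sin(2 * math.pi * t / 120.0) for t in range(360)]

avg_6min = time_average(samples, 360)  # ICNIRP 6-minute reference average
avg_1min = time_average(samples, 60)   # candidate shorter averaging time
difference = abs(avg_6min - avg_1min)  # to be compared with drift uncertainty
```

Because the 1-min window catches only the positive half of the drift, the two averages differ; the paper's criterion is whether such differences stay within the standard uncertainty attributed to measurement drift.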

  7. Points of Transition: Understanding the Constructed Identities of L2 Learners/Users across Time and Space

    ERIC Educational Resources Information Center

    Adawu, Anthony; Martin-Beltran, Melinda

    2012-01-01

    Using sociocultural and poststructuralist theoretical lenses, this study examines the narrative construction of language-learner identity across time and space. We applied cross-narrative methodologies to analyze language-learning autobiographies and interview data from three English users who had recently transitioned to a U.S. context for…

  8. Using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    USDA-ARS?s Scientific Manuscript database

    Methods to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminant. While informative, many of these methods suffer from poor recovery rates and only provide a snapshot of the microbial load at the time of collectio...

  9. Combined Vocal Exercises for Rehabilitation After Supracricoid Laryngectomy: Evaluation of Different Execution Times.

    PubMed

    Silveira, Hevely Saray Lima; Simões-Zenari, Marcia; Kulcsar, Marco Aurélio; Cernea, Claudio Roberto; Nemr, Kátia

    2017-10-27

    The supracricoid partial laryngectomy allows the preservation of laryngeal functions with good local cancer control. The aim was to assess laryngeal configuration and voice analysis data following a combination of two vocal exercises, the prolonged /b/ exercise combined with the vowel /e/ using chest and arm pushing, performed for different durations by individuals who had undergone supracricoid laryngectomy. Eleven patients who had undergone supracricoid partial laryngectomy with cricohyoidoepiglottopexy (CHEP) were evaluated using voice recordings. Four judges separately performed an auditory-perceptual analysis of the voices, presented in random order. To assess intrajudge reliability, 70% of the voice samples were repeated. The intraclass correlation coefficient was used to analyze the reliability of the judges. For each judge, the Friedman test, with a significance level of 5%, was used to compare baseline (time point 0) with the assessments after the first series of exercises (time point 1), the second series (time point 2), the third series (time point 3), the fourth series (time point 4), and the fifth and final series (time point 5). The data on laryngeal configuration were subjected to a descriptive analysis. The evaluation considered the results of judge 1, which showed the greatest reliability. There was an improvement in the overall grade of vocal deviation, roughness, and breathiness from time point 4 (T4) onward. The prolonged /b/ exercise, combined with the vowel /e/ using chest- and arm-pushing exercises, was associated with an improvement in the overall grade of vocal deviation, roughness, and breathiness starting at minute 4 among patients who had undergone supracricoid laryngectomy with CHEP reconstruction. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Monitoring urban subsidence based on SAR lnterferometric point target analysis

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.

    2009-01-01

    Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It is achieved by analyzing the interferometric phases of individual point targets, which are discrete and present temporarily stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets, and then addresses two key issues within the IPTA process. First, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated when a global reference point with known height correction and deformation history is chosen. Second, a spatial-temporal filtering scheme is proposed to further separate the atmospheric phase and nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology is conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes are analyzed, and deformation information for 3,546 point targets over the time span 1992-2002 is generated. The IPTA-derived deformation shows very good agreement with the published result, which demonstrates that the IPTA technique can be developed into an operational tool to map ground subsidence over urban areas.

  11. Lessons Learned over Four Benchmark Exercises from the Community Structure-Activity Resource

    PubMed Central

    Carlson, Heather A.

    2016-01-01

    Preparing datasets and analyzing the results is difficult and time-consuming, and I hope the points raised here will help other scientists avoid some of the thorny issues we wrestled with. PMID:27345761

  12. A Case Example of the Implementation of Schoolwide Positive Behavior Support in a High School Setting Using Change Point Test Analysis

    ERIC Educational Resources Information Center

    Bohanon, Hank; Fenning, Pamela; Hicks, Kira; Weber, Stacey; Thier, Kimberly; Aikins, Brigit; Morrissey, Kelly; Briggs, Alissa; Bartucci, Gina; McArdle, Lauren; Hoeper, Lisa; Irvin, Larry

    2012-01-01

    The purpose of this case study was to expand the literature base regarding the application of high school schoolwide positive behavior support in an urban setting for practitioners and policymakers to address behavior issues. In addition, the study describes the use of the Change Point Test as a method for analyzing time series data that are…
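A least-squares single change point, one simple analogue of the Change Point Test named above, can be sketched as follows. The weekly referral counts are hypothetical, invented only to show the mechanics:

```python
def change_point(series):
    """Locate the split index that minimizes the summed within-segment
    squared error, a least-squares analogue of a single change-point test."""
    def sse(seg):
        mean = sum(seg) / len(seg)
        return sum((x - mean) ** 2 for x in seg)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical weekly counts of office referrals before/after an intervention.
referrals = [9, 10, 8, 11, 9, 4, 3, 5, 4, 3]
split = change_point(referrals)  # → 5: the level shift after week 5
```

The nonparametric Change Point Test used in the study ranks the observations rather than fitting means, but the goal is the same: locating where the level of the series shifts.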

  13. Tracking of physical activity during adolescence: the 1993 Pelotas Birth Cohort, Brazil

    PubMed Central

    Azevedo, Mario Renato; Menezes, Ana Maria; Assunção, Maria Cecília; Gonçalves, Helen; Arumi, Ignasi; Horta, Bernardo Lessa; Hallal, Pedro Curi

    2014-01-01

    OBJECTIVE To analyze physical activity during adolescence in participants of the 1993 Pelotas Birth Cohort Study, Brazil. METHODS Data on leisure time physical activity at 11, 15, and 18 years of age were analyzed. At each visit, a cut-off point of 300 min/week was used to classify adolescents as active or inactive. A total of 3,736 participants provided data on physical activity at each of the three age points. RESULTS A significant decline in the proportion of active adolescents was observed from 11 to 18 years of age, particularly among girls (from 32.9% to 21.7%). The proportions of girls and boys who were active at all three age points were 28.0% and 55.1%, respectively. After adjustment for sex, economic status, and skin color, participants who were active at 11 and 15 years of age were 58.0% more likely to be active at 18 years of age compared with those who were inactive at 11 and 15 years of age. CONCLUSIONS Physical activity declined during adolescence and inactivity tended to track over time. Our findings reinforce the need to promote physical activity at early stages of life, because active behavior established early tends to be maintained over time. PMID:26039395

  14. Modeling and analysis of the effect of training on VO2 kinetics and anaerobic capacity.

    PubMed

    Stirling, J R; Zakynthinaki, M S; Billat, V

    2008-07-01

    In this paper, we present an application of a number of tools and concepts for modeling and analyzing raw, unaveraged, and unedited breath-by-breath oxygen uptake data. A method for calculating anaerobic capacity is used together with a model, in the form of a set of coupled nonlinear ordinary differential equations, to make predictions of the VO2 kinetics, the time to achieve a percentage of a certain constant oxygen demand, and the time limit to exhaustion at intensities other than those for which we have data. Speeded oxygen kinetics and increased time limit to exhaustion are also investigated using the eigenvalues of the fixed points of our model. We also analyze the oxygen uptake kinetics using a phase plot of dVO2(t)/dt versus VO2(t), which allows one to observe both the fixed-point solutions and the presence of speeded oxygen kinetics following training. A method of plotting the eigenvalue versus oxygen demand is also used, which allows one to observe where the maximum amplitude of the so-called slow component will be and also how training has changed the oxygen uptake kinetics by changing the strength of the attracting fixed point for a particular demand.
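The fixed-point and eigenvalue reasoning can be illustrated with the simplest first-order kinetics model, dV/dt = (demand - V)/tau, whose fixed point is V* = demand and whose eigenvalue is -1/tau. This is a didactic stand-in for the paper's coupled nonlinear system, with arbitrary parameter values:

```python
def simulate_vo2(demand, tau, v0=0.5, dt=0.01, t_end=10.0):
    """Euler integration of the first-order model dV/dt = (demand - V)/tau.
    The fixed point is V* = demand; its eigenvalue is -1/tau, so a smaller
    tau (a more strongly attracting fixed point) means faster oxygen kinetics."""
    v, t = v0, 0.0
    while t < t_end:
        v += dt * (demand - v) / tau
        t += dt
    return v

# "Training" modeled as a shorter time constant, i.e. a more negative eigenvalue.
untrained = simulate_vo2(demand=2.0, tau=2.0)
trained = simulate_vo2(demand=2.0, tau=0.5)
```

After the same elapsed time, the trained response sits closer to the demand (the fixed point), which is exactly the "speeded kinetics via a stronger attracting fixed point" argument made in the abstract.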

  15. Reviving common standards in point-count surveys for broad inference across studies

    USGS Publications Warehouse

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. 
Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.

  16. TaqMan based real time PCR assay targeting EML4-ALK fusion transcripts in NSCLC.

    PubMed

    Robesova, Blanka; Bajerova, Monika; Liskova, Kvetoslava; Skrickova, Jana; Tomiskova, Marcela; Pospisilova, Sarka; Mayer, Jiri; Dvorakova, Dana

    2014-07-01

    Lung cancer with the ALK rearrangement constitutes only a small fraction of patients with non-small cell lung cancer (NSCLC). However, in the era of molecular-targeted therapy, efficient patient selection is crucial for successful treatment. In this context, an effective method for EML4-ALK detection is necessary. We developed a new highly sensitive variant-specific TaqMan based real time PCR assay applicable to RNA from formalin-fixed paraffin-embedded tissue (FFPE). This assay was used to analyze the EML4-ALK gene in 96 non-selected NSCLC specimens and compared with two other methods (end-point PCR and break-apart FISH). EML4-ALK was detected in 33/96 (34%) specimens using variant-specific real time PCR, whereas in only 23/96 (24%) using end-point PCR. All real time PCR positive samples were confirmed with direct sequencing. A total of 46 specimens were subsequently analyzed by all three detection methods. Using variant-specific real time PCR we identified the EML4-ALK transcript in 17/46 (37%) specimens, using end-point PCR in 13/46 (28%) specimens, and positive ALK rearrangement by FISH was detected in 8/46 (17.4%) specimens. Moreover, using variant-specific real time PCR, 5 specimens showed more than one EML4-ALK variant simultaneously (in 2 cases the variants 1+3a+3b, in 2 specimens the variants 1+3a, and in 1 specimen the variant 1+3b). In one of the 96 cases, the EML4-ALK fusion gene and an EGFR mutation were detected together. All simultaneous genetic variants were confirmed using end-point PCR and direct sequencing. Our variant-specific real time PCR assay is highly sensitive, fast, financially acceptable, and applicable to FFPE, and seems to be a valuable tool for the rapid prescreening of NSCLC patients in clinical practice, so that most patients able to benefit from targeted therapy can be identified. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofschen, S.; Wolff, I.

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.
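The abstract above replaces Fourier analysis of the FDTD output with direct time-series parameter extraction. The authors' actual estimator is not reproduced here; as a toy illustration of the idea, a single complex decaying mode can have its attenuation and propagation constants read off from the ratio of consecutive samples (the mode parameters below are invented for illustration):

```python
import numpy as np

def estimate_constants(z, dt):
    """Estimate attenuation (alpha) and propagation (beta) constants from a
    single complex decaying mode z[n] = A * exp((-alpha + 1j*beta) * n * dt).
    The consecutive-sample ratio equals exp((-alpha + 1j*beta) * dt)."""
    r = np.mean(z[1:] / z[:-1])        # average the ratio over all samples
    alpha = -np.log(np.abs(r)) / dt    # attenuation constant [1/s]
    beta = np.angle(r) / dt            # propagation (phase) constant [rad/s]
    return alpha, beta

# Synthetic FDTD-like time series: one decaying mode, hypothetical parameters
dt = 1e-3
n = np.arange(200)
z = 0.7 * np.exp((-2.0 + 1j * 50.0) * n * dt)
alpha, beta = estimate_constants(z, dt)
```

For noisy multi-mode data a least-squares or Prony-type fit would be needed, but the sketch shows why a short time record can suffice where a Fourier transform would need a long one.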

  18. A wideband, high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Quirk, M. P.; Wilck, H. C.; Garyantes, M. F.; Grimm, M. J.

    1988-01-01

    A two-million-channel, 40 MHz bandwidth, digital spectrum analyzer under development at the Jet Propulsion Laboratory is described. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2^21-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple signal detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis.

  19. A wide-band high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Quirk, Maureen P.; Garyantes, Michael F.; Wilck, Helmut C.; Grimm, Michael J.

    1988-01-01

    A two-million-channel, 40 MHz bandwidth, digital spectrum analyzer under development at the Jet Propulsion Laboratory is described. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2^21-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis.

  20. A wide-band high-resolution spectrum analyzer.

    PubMed

    Quirk, M P; Garyantes, M F; Wilck, H C; Grimm, M J

    1988-12-01

    This paper describes a two-million-channel, 40-MHz-bandwidth digital spectrum analyzer under development at the Jet Propulsion Laboratory. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2^21-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple signal detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis and detection.
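The processing chain these spectrum-analyzer records describe (transform, power accumulation, gain normalization, threshold detection) can be sketched at toy scale. The transform size, gain table, and threshold below are placeholders, not the JPL design:

```python
import numpy as np

def analyze_block(x, gain, threshold):
    """One pass of the pipeline described above, at toy scale:
    FFT -> output power -> remove frequency-dependent gain -> threshold detect."""
    spectrum = np.fft.rfft(x)
    power = np.abs(spectrum) ** 2               # output power per channel
    power = power / gain                        # normalize out channel gain
    return np.flatnonzero(power > threshold)    # indices of detected channels

# Toy input: a 100 Hz tone in weak noise, sampled so the tone sits in bin 100
rng = np.random.default_rng(0)
fs, n = 1024, 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.01 * rng.standard_normal(n)
gain = np.ones(n // 2 + 1)                      # flat gain for the toy example
hits = analyze_block(x, gain, threshold=1000.0)
```

In the real instrument the power is accumulated over many transforms before detection, which is what makes weak-signal sky surveys feasible; a single block is shown here only to keep the sketch short.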

  1. Using the NASTRAN Thermal Analyzer to simulate a flight scientific instrument package

    NASA Technical Reports Server (NTRS)

    Lee, H.-P.; Jackson, C. E., Jr.

    1974-01-01

    The NASTRAN Thermal Analyzer has proven to be a unique and useful tool for thermal analyses involving large and complex structures where small, thermally induced deformations are critical. Among its major advantages are direct grid point-to-grid point compatibility with large structural models; plots of the model that may be generated for both conduction and boundary elements; versatility of applying transient thermal loads especially to repeat orbital cycles; on-line printer plotting of temperatures and rate of temperature changes as a function of time; and direct matrix input to solve linear differential equations on-line. These features provide a flexibility far beyond that available in most finite-difference thermal analysis computer programs.

  2. Values in Prime Time Alcoholic Beverage Commercials.

    ERIC Educational Resources Information Center

    Frazer, Charles F.

    Content analysis was used to study the values evident in televised beer and wine commercials. Seventy-seven prime time commercials, 7.6% of a week's total, were analyzed along value dimensions adapted from Gallup's measure of popular social values. The intensity of each value was coded on a five-point scale. None of the commercials in the beer and…

  3. Who Stays and for How Long: Examining Attrition in Canadian Graduate Programs

    ERIC Educational Resources Information Center

    DeClou, Lindsay

    2016-01-01

    Attrition from Canadian graduate programs is a point of concern on a societal, institutional, and individual level. To improve retention in graduate school, a better understanding of what leads to withdrawal needs to be reached. This paper uses logistic regression and discrete-time survival analysis with time-varying covariates to analyze data…

  4. Harmonic Fluxes and Electromagnetic Forces of Concentric Winding Brushless Permanent Magnet Motor

    NASA Astrophysics Data System (ADS)

    Ishibashi, Fuminori; Takemasa, Ryo; Matsushita, Makoto; Nishizawa, Takashi; Noda, Shinichi

    Brushless permanent magnet motors have been widely used in home appliances and industrial fields. High-efficiency and low-noise motors are now demanded from an environmental point of view. Electromagnetic noise and iron loss of the motor are produced by harmonic fluxes and electromagnetic forces. However, the order and spatial pattern of these have not been discussed in detail. In this paper, the fluxes, electromagnetic forces, and magnetomotive forces of brushless permanent magnet motors with concentric winding were analyzed analytically, experimentally, and numerically. Time-harmonic fluxes and electromagnetic forces in the air gap were measured by search coils on the inner surface of the stator teeth and analyzed by FEM. The spatial patterns of the time-harmonic fluxes and electromagnetic forces were worked out with experiments and FEM. Magnetomotive forces due to the concentric winding were analyzed with equations and checked by FEM.

  5. Cross-over assessment of serum bactericidal activity of moxifloxacin and levofloxacin versus penicillin-susceptible and penicillin-resistant Streptococcus pneumoniae in healthy volunteers.

    PubMed

    Hart, Daniel; Weinstein, Melvin P

    2007-07-01

    We compared the serum bactericidal activity (SBA) of moxifloxacin and levofloxacin against penicillin-susceptible and penicillin-resistant Streptococcus pneumoniae in 12 healthy volunteers. Each subject received 3 days of oral moxifloxacin 400 mg daily and levofloxacin 750 mg daily, respectively, with a 2- to 4-week washout period between regimens. Blood was drawn at 6 time points after the third dose of each antibiotic. Mean serum bactericidal titers (MSBTRs) for moxifloxacin were 4-fold higher than the mean titers for levofloxacin at each time point. For each drug, MSBTRs at each time point were the same or within one 2-fold dilution when analyzed according to the penicillin susceptibility of the strains or the sex of the subjects. The difference in SBA of the 2 drugs may have implications for the emergence of resistance and clinical outcome.

  6. How to use the Sun-Earth Lagrange points for fundamental physics and navigation

    NASA Astrophysics Data System (ADS)

    Tartaglia, A.; Lorenzini, E. C.; Lucchesi, D.; Pucacco, G.; Ruggiero, M. L.; Valko, P.

    2018-01-01

    We illustrate the proposal, nicknamed LAGRANGE, to use spacecraft, located at the Sun-Earth Lagrange points, as a physical reference frame. Performing time of flight measurements of electromagnetic signals traveling on closed paths between the points, we show that it would be possible: (a) to refine gravitational time delay knowledge due both to the Sun and the Earth; (b) to detect the gravito-magnetic frame dragging of the Sun, so deducing information about the interior of the star; (c) to check the possible existence of a galactic gravitomagnetic field, which would imply a revision of the properties of a dark matter halo; (d) to set up a relativistic positioning and navigation system at the scale of the inner solar system. The paper presents estimated values for the relevant quantities and discusses the feasibility of the project analyzing the behavior of the space devices close to the Lagrange points.

  7. Time-optimal aircraft pursuit-evasion with a weapon envelope constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Duke, E. L.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including nonlinear point-mass vehicle models and a realistic weapon envelope, is analyzed. Using a linear combination of flight time and the square of the vehicle acceleration as the performance index, a closed-form solution is obtained in nonlinear feedback form. Due to its modest computational requirements, this guidance law can be used for onboard real-time implementation.

  8. Projectile Motion Revisited.

    ERIC Educational Resources Information Center

    Lucie, Pierre

    1979-01-01

    Analyzes projectile motion using symmetry and simple geometry. Deduces the direction of velocity at any point, range, time of flight, maximum height, safety parabola, and maximum range for a projectile launched upon a plane inclined at any angle with respect to the horizontal. (Author/GA)
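The closed-form quantities this record mentions for a projectile launched onto an inclined plane follow from elementary kinematics. A sketch using the standard textbook formulas (not the author's geometric derivation):

```python
import math

def incline_flight(v, alpha, theta, g=9.81):
    """Projectile launched at speed v and angle alpha above the horizontal
    onto a plane inclined at angle theta (alpha > theta).
    Returns (time of flight, range measured along the incline)."""
    T = 2 * v * math.sin(alpha - theta) / (g * math.cos(theta))
    R = 2 * v**2 * math.cos(alpha) * math.sin(alpha - theta) / (g * math.cos(theta)**2)
    return T, R

def max_incline_range(v, theta, g=9.81):
    """The optimal launch angle bisects the incline and the vertical:
    alpha = theta/2 + pi/4, giving R_max = v^2 / (g * (1 + sin(theta)))."""
    alpha = theta / 2 + math.pi / 4
    return alpha, v**2 / (g * (1 + math.sin(theta)))

# theta = 0 recovers the familiar flat-ground results
T, R = incline_flight(20.0, math.radians(45), 0.0)
alpha_opt, R_max = max_incline_range(20.0, 0.0)
```

Setting theta = 0 reduces the formulas to the flat-ground range v² sin(2α)/g and time of flight 2v sin(α)/g, a quick consistency check on the incline expressions.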

  9. Real-time EEG-based detection of fatigue driving danger for accident prediction.

    PubMed

    Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun

    2015-03-01

    This paper proposes a real-time electroencephalogram (EEG)-based detection method of the potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and pulse coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect the fatigue danger, the neural mechanism of driver fatigue was analyzed. The functional brain networks were employed to track the fatigue impact on processing capacity of brain. The results show the overall functional connectivity of the subjects is weakened after long time driving tasks. The regularity is summarized as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined both the input and global synchronizations of brain together to calculate the residual amount of the information processing capacity of brain to obtain the dangerous points in real time. Finally, the danger detection system of the driver fatigue based on the neural mechanism was validated using accident EEG. The time distributions of the output danger points of the system have a good agreement with those of the real accident points.

  10. Evaluation of the i-STAT Point-of-Care Analyzer in Critically Ill Adult Patients

    PubMed Central

    Steinfelder-Visscher, Jacoline; Teerenstra, Steven; Klein Gunnewiek, Jacqueline M.T.; Weerwind, Patrick W.

    2008-01-01

    Abstract: Point-of-care analyzers may benefit therapeutic decision making by reducing turn-around-time for samples. This is especially true when biochemical parameters exceed the clinical reference range, in which acute and effective treatment is essential. We therefore evaluated the analytical performance of the i-STAT point-of-care analyzer in two critically ill adult patient populations. During a 3-month period, 48 blood samples from patients undergoing cardiac surgery with cardiopulmonary bypass (CPB) and 42 blood samples from non-cardiac patients who needed intensive care treatment were analyzed on both the i-STAT analyzer (CPB and non-CPB mode, respectively) and our laboratory analyzers (RapidLab 865/Sysmex XE-2100 instrument). The agreement analysis for quantitative data was used to compare i-STAT to RapidLab for blood gas/electrolytes and for hematocrit with the Sysmex instrument. Point-of-care electrolytes and blood gases had constant deviation, except for pH, pO2, and hematocrit. A clear linear trend in deviation of i-STAT from RapidLab was noticed for pH during CPB (r = 0.32, p = .03) and for pO2 > 10 kPa during CPB (r = −0.59, p < .0001 when 10 < pO2 <30 kPa) and in the intensive care unit (r = −0.61, p < .001 when 10 < pO2 <30 kPa). In the normal pO2 range (10.6 < pO2 <13.3 kPa), the performance of the i-STAT was comparable to the RapidLab. In contrast to hematocrit measured during CPB, hematocrit using the non-CPB mode in the non-cardiac intensive care population showed an underestimation up to 2.2% (p < .0001) in the hematocrit range below 25% (n = 11) using the i-STAT. The i-STAT analyzer is suitable for point-of-care testing of electrolytes and blood gases in critically ill patients, except for high pO2. However, the discrepancy in hematocrit bias shows that accuracy established in one patient population cannot be automatically extrapolated to other patient populations, thus stressing the need for separate evaluation. PMID:18389666
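The Bland-Altman analysis this evaluation relies on reports the mean difference (bias) between the two analyzers and the 95% limits of agreement. A minimal sketch; the paired hematocrit readings below are invented, not the study's data:

```python
import numpy as np

def bland_altman(poc, lab):
    """Agreement between point-of-care and laboratory readings:
    mean bias and 95% limits of agreement (bias +/- 1.96 * SD of differences)."""
    diff = np.asarray(poc, float) - np.asarray(lab, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                      # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired hematocrit readings (%), point-of-care vs. laboratory
poc = [22.1, 24.0, 23.5, 21.8, 25.2, 23.9]
lab = [24.0, 25.5, 25.1, 23.9, 26.8, 25.6]
bias, (lo, hi) = bland_altman(poc, lab)
```

A consistently negative bias in the low-hematocrit range is exactly the kind of population-specific underestimation the study warns about.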

  11. Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points

    PubMed Central

    Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.

    2015-01-01

    Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
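The sensitivity check described above reduces to counting minutes whose activity counts fall below a threshold while that threshold is varied. A sketch with simulated counts; only the 1660 × 10−3 m·s−2 cut-point comes from the abstract, and the activity values are invented:

```python
import numpy as np

def sedentary_minutes(counts, cut_point):
    """Minutes classified as sedentary: per-minute activity counts below the cut-point."""
    return int(np.sum(np.asarray(counts) < cut_point))

# Hypothetical one-hour trace of per-minute activity counts (x 10^-3 m s^-2)
rng = np.random.default_rng(1)
counts = np.concatenate([rng.uniform(200, 900, 40),     # sitting
                         rng.uniform(2500, 6000, 20)])  # walking
base = 1660.0  # optimal cut-point reported above
sed_min = {f: sedentary_minutes(counts, base * f) for f in (0.9, 1.0, 1.1)}
```

Because sitting and walking counts are well separated, total sedentary time is unchanged across the ±10% band, mirroring the paper's insensitivity finding; data straddling the cut-point would break this.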

  12. Application of dynamic topic models to toxicogenomics data.

    PubMed

    Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida

    2016-10-06

    All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably according to sequential time points after perturbation by environmental insults, drugs, and chemicals. Investigating the temporal behavior of molecular events is important for understanding the mechanisms governing a biological system's response to, for example, drug treatment. The intrinsic complexity of time series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points (4-, 8-, 15- and 29-day). We analyzed, with DTM, the topics (each consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in these time-series gene expression profiles. From the topic distribution for each compound-time condition, a number of drugs were successfully clustered by their shared mode of action, such as PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information, such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories when compared to traditional clustering algorithms. We demonstrated that DTM, a text mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with a probabilistic representation of their dynamic features along sequential time frames. The method offers an alternative way of uncovering hidden patterns embedded in time-series gene expression profiles to gain an enhanced understanding of the dynamic behavior of gene regulation in the biological system.

  13. Efficient characterization of inhomogeneity in contraction strain pattern.

    PubMed

    Nazzal, Christina M; Mulligan, Lawrence J; Criscione, John C

    2012-05-01

    Cardiac dyssynchrony often accompanies patients with heart failure (HF) and can lead to an increase in mortality rate. Cardiac resynchronization therapy (CRT) has been shown to provide substantial benefits to the HF population with ventricular dyssynchrony; however, there still exists a group of patients who do not respond to this treatment. In order to better understand patient response to CRT, it is necessary to quantitatively characterize both electrical and mechanical dyssynchrony. The quantification of mechanical dyssynchrony via characterization of contraction strain field inhomogeneity is the focus of this modeling investigation. Raw data from a 3D finite element (FE) model were received from Roy Kerckhoffs et al. and analyzed in MATLAB. The FE model consisted of canine left and right ventricles coupled to a closed circulation, with the effects of the pericardium acting as a pressure on the epicardial surface. For each of three simulations (normal synchronous, SYNC; right ventricular apical pacing, RVA; and left ventricular free wall pacing, LVFW), the Gauss point locations and values were used to generate lookup tables (LUTs), with each entry representing a location in the heart. In essence, we employed piecewise cubic interpolation to generate a fine point cloud (LUTs) from a coarse point cloud (Gauss points). Strain was calculated in the fiber direction and was then displayed in multiple ways to better characterize strain inhomogeneity. By plotting average strain and standard deviation over time, the point of maximum contraction and the point of maximal inhomogeneity were found for each simulation. Strain values were organized into seven strain bins to show operative strain ranges and the extent of inhomogeneity throughout the heart wall. In order to visualize strain propagation, magnitude, and inhomogeneity over time, we created 2D area maps displaying strain over the entire cardiac cycle. To visualize the spatial strain distribution at the time point of maximum inhomogeneity, a 3D point cloud was created for each simulation, and a CURE index was calculated. We found that both the RVA and LVFW simulations took longer to reach maximum contraction than the SYNC simulation, while also exhibiting larger disparities in strain values during contraction. Strain in the hoop direction was also analyzed and was found to be similar to the fiber strain results. We found that our method of analyzing contraction strain pattern yielded more detailed spatial and temporal information about fiber strain in the heart over the cardiac cycle than the more conventional CURE index method. We also observed that our method of strain binning aids in visualization of the strain fields; in particular, the separation of the mass points into separate images associated with each strain bin allows the strain pattern to be explicitly compartmentalized.

  14. Evaluation of the Nova StatSensor® XpressTM Creatinine Point-Of-Care Handheld Analyzer

    PubMed Central

    Kosack, Cara Simone; de Kieviet, Wim; Bayrak, Kubra; Milovic, Anastacija; Page, Anne Laure

    2015-01-01

    Creatinine is a parameter that is required to monitor renal function and is important to follow in patients under treatment with potentially nephrotoxic drugs, such as the anti-HIV drug Tenofovir. A point-of-care instrument to measure creatinine would be useful for patient monitoring in resource-limited settings, where more sophisticated instruments are not available. The StatSensor Xpress Creatinine (Nova Biomedical Corporation, Waltham, MA, USA) point-of-care analyzer was evaluated for its diagnostic performance in indicating drug therapy change. Creatinine was measured in parallel using the Nova StatSensor Xpress Creatinine analyzer and the Vitros 5,1FS (Ortho Clinical Diagnostics, Inc., Rochester, USA), which served as the reference standard. The precision (i.e., repeatability and reproducibility) and accuracy of the StatSensor Xpress Creatinine analyzer were calculated using a panel of specimens with normal, low pathological, and high pathological values. Two different Nova StatSensor Xpress Creatinine analyzers were used for the assessment of accuracy using repeated measurements. The coefficient of variation of the StatSensor Xpress Creatinine analyzers ranged from 2.3 to 5.9% for repeatability and from 4.2 to 9.0% for between-run reproducibility. The concordance correlation agreement was good except for high values (>600 µmol/L). The Bland-Altman analysis in high pathological specimens suggests that the Nova StatSensor Xpress Creatinine test tends to underestimate high creatinine values (i.e., >600 µmol/L). The Nova StatSensor Xpress Creatinine analyzers showed acceptable to good results in terms of repeatability, inter-device reproducibility, and between-run reproducibility over time using quality control reagents. The analyzer was found sufficiently accurate for detecting pathological values in patients (age >10 years) and can be used with a moderate risk of misclassification. PMID:25886375

  15. Evaluation of the Nova StatSensor® Xpress(TM) Creatinine point-of-care handheld analyzer.

    PubMed

    Kosack, Cara Simone; de Kieviet, Wim; Bayrak, Kubra; Milovic, Anastacija; Page, Anne Laure

    2015-01-01

    Creatinine is a parameter that is required to monitor renal function and is important to follow in patients under treatment with potentially nephrotoxic drugs, such as the anti-HIV drug Tenofovir. A point-of-care instrument to measure creatinine would be useful for patient monitoring in resource-limited settings, where more sophisticated instruments are not available. The StatSensor Xpress Creatinine (Nova Biomedical Corporation, Waltham, MA, USA) point-of-care analyzer was evaluated for its diagnostic performance in indicating drug therapy change. Creatinine was measured in parallel using the Nova StatSensor Xpress Creatinine analyzer and the Vitros 5,1FS (Ortho Clinical Diagnostics, Inc., Rochester, USA), which served as the reference standard. The precision (i.e., repeatability and reproducibility) and accuracy of the StatSensor Xpress Creatinine analyzer were calculated using a panel of specimens with normal, low pathological, and high pathological values. Two different Nova StatSensor Xpress Creatinine analyzers were used for the assessment of accuracy using repeated measurements. The coefficient of variation of the StatSensor Xpress Creatinine analyzers ranged from 2.3 to 5.9% for repeatability and from 4.2 to 9.0% for between-run reproducibility. The concordance correlation agreement was good except for high values (>600 µmol/L). The Bland-Altman analysis in high pathological specimens suggests that the Nova StatSensor Xpress Creatinine test tends to underestimate high creatinine values (i.e., >600 µmol/L). The Nova StatSensor Xpress Creatinine analyzers showed acceptable to good results in terms of repeatability, inter-device reproducibility, and between-run reproducibility over time using quality control reagents. The analyzer was found sufficiently accurate for detecting pathological values in patients (age >10 years) and can be used with a moderate risk of misclassification.
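The repeatability and reproducibility figures in these evaluations are coefficients of variation. A minimal sketch of the calculation; the repeated creatinine values below are hypothetical, not the study's data:

```python
def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100, the measure used
    above for repeatability and between-run reproducibility."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * var ** 0.5 / mean

# Hypothetical repeated measurements (umol/L) of one quality-control specimen
runs = [101.0, 99.0, 103.0, 97.0, 100.0]
cv = coefficient_of_variation(runs)
```

Here the CV is about 2.2%, which would sit at the good end of the 2.3-5.9% repeatability range reported above.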

  16. The Academic RVU: Ten Years Developing a Metric for and Financially Incenting Academic Productivity at Oregon Health & Science University.

    PubMed

    Ma, O John; Hedges, Jerris R; Newgard, Craig D

    2017-08-01

    Established metrics reward academic faculty for clinical productivity. Few data have analyzed a bonus model to measure and reward academic productivity. This study's objective was to describe development and use of a departmental academic bonus system for incenting faculty scholarly and educational productivity. This cross-sectional study analyzed a departmental bonus system among emergency medicine academic faculty at Oregon Health & Science University, including growth from 2005 to 2015. All faculty members with a primary appointment were eligible for participation. Each activity was awarded points based on a predetermined education or scholarly point scale. Faculty members accumulated points based on their activity (numerator), and the cumulative points of all faculty were the denominator. Variables were individual faculty member (deidentified), academic year, bonus system points, bonus amounts awarded, and measures of academic productivity. Data were analyzed using descriptive statistics, including measures of variance. The total annual financial bonus pool ranged from $211,622 to $274,706. The median annual per faculty academic bonus remained fairly constant over time ($3,980 in 2005-2006 vs. $4,293 in 2014-2015), with most change at the upper quartile of academic bonus (max bonus $16,920 in 2005-2006 vs. $39,207 in 2014-2015). Bonuses rose linearly among faculty in the bottom three quartiles of academic productivity, but increased exponentially in the 75th to 100th percentile. Faculty academic productivity can be measured and financially rewarded according to an objective academic bonus system. The "academic point" used to measure productivity functions as an "academic relative value unit."
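The numerator/denominator scheme described above amounts to paying out the annual pool in proportion to each faculty member's share of the department's total points. A sketch with invented point totals (the names, numbers, and pool size are placeholders, not the study's data):

```python
def allocate_bonus(points, pool):
    """Each faculty member's bonus is their accumulated points (numerator)
    divided by the cumulative points of all faculty (denominator),
    times the annual bonus pool."""
    total = sum(points.values())
    return {name: pool * p / total for name, p in points.items()}

# Hypothetical academic-point totals for four faculty members
points = {"A": 120, "B": 60, "C": 15, "D": 5}
bonuses = allocate_bonus(points, pool=250_000.0)
```

A skewed point distribution like this one reproduces the pattern the study reports: bonuses grow roughly linearly for most faculty but concentrate sharply in the top quartile of productivity.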

  17. Dynamic discrete tomography

    NASA Astrophysics Data System (ADS)

    Alpers, Andreas; Gritzmann, Peter

    2018-03-01

    We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time the current positions of points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.
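The basic tracking step in the abstract above, linking point positions across two consecutive moments, can be sketched as a min-cost matching. This brute-force version ignores the x-ray constraints that make the paper's problem hard, and the coordinates are invented:

```python
from itertools import permutations

def match_frames(prev_pts, next_pts):
    """Link each point in one frame to a point in the next frame so that the
    total squared displacement is minimal (brute-force min-cost matching;
    fine for a handful of points, exponential in general)."""
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(next_pts))):
        cost = sum((prev_pts[i][0] - next_pts[j][0]) ** 2 +
                   (prev_pts[i][1] - next_pts[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm), best_cost

frame0 = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
frame1 = [(5.2, 4.9), (0.3, 0.1), (9.8, 0.4)]  # same particles, small moves
links, cost = match_frames(frame0, frame1)
```

In practice this assignment step is solved with polynomial min-cost flow or Hungarian-type algorithms, which is where the paper's complexity results and constrained flow variants come in.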

  18. Diffusion limit of Lévy-Lorentz gas is Brownian motion

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Szczotka, Wladyslaw

    2018-07-01

    In this paper we analyze the asymptotic behaviour of a stochastic process called the Lévy-Lorentz gas. This process is a special kind of continuous-time random walk in which the walker moves in a fixed environment composed of scattering points. Upon each collision the walker performs a flight to the nearest scattering point. This type of dynamics is observed in Lévy glasses or long quenched polymers. We show that the diffusion limit of the Lévy-Lorentz gas with finite mean distance between scattering centers is standard Brownian motion. Thus, for long times the behaviour of the Lévy-Lorentz gas is close to the diffusive regime.

  19. Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.

    We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach with time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
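The starting object of the RDE-CN pipeline is the recurrence plot. A sketch of building one from a logistic-map series, as used in the abstract above; the density-based point reduction and network measures are not reproduced, and the threshold eps is an arbitrary choice:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Recurrence plot of a scalar series: R[i, j] = 1 when the states at
    times i and j are closer than eps, 0 otherwise."""
    x = np.asarray(x, float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

# Logistic map time series in the chaotic regime (r = 4)
n, r = 200, 4.0
x = np.empty(n)
x[0] = 0.4
for i in range(1, n):
    x[i] = r * x[i - 1] * (1 - x[i - 1])
R = recurrence_matrix(x, eps=0.05)
```

The matrix is symmetric with a unit diagonal by construction; RDE-CN's contribution is compressing this O(n²) object into a much smaller point set before reading it as a network.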

  20. Standard deviation of vertical two-point longitudinal velocity differences in the atmospheric boundary layer.

    NASA Technical Reports Server (NTRS)

    Fichtl, G. H.

    1971-01-01

    Statistical estimates of wind shear in the planetary boundary layer are important in the design of V/STOL aircraft, and for the design of the Space Shuttle. The data analyzed in this study consist of eleven sets of longitudinal turbulent velocity fluctuation time histories digitized at 0.2 sec intervals with approximately 18,000 data points per time history. The longitudinal velocity fluctuations were calculated with horizontal wind and direction data collected at the 18-, 30-, 60-, 90-, 120-, and 150-m levels. The data obtained confirm the result that Eulerian time spectra transformed to wave-number spectra with Taylor's frozen eddy hypothesis possess inertial-like behavior at wave-numbers well out of the inertial subrange.

  1. Early maternal language use during book sharing in families from low-income environments.

    PubMed

    Abraham, Linzy M; Crais, Elizabeth; Vernon-Feagans, Lynne

    2013-02-01

    The authors examined the language used by mothers from low-income and rural environments with their infants at ages 6 and 15 months to identify predictors of maternal language use at the 15-month time point. Maternal language use by 82 mothers with their children was documented during book-sharing interactions within the home in a prospective longitudinal study. The authors analyzed transcripts for maternal language strategies and maternal language productivity. Analyses indicated variability across mothers in their language use and revealed some stability within mothers, as maternal language use at the 6-month time point significantly predicted later maternal language. Mothers who used more language strategies at the 6-month time point were likely to use more of these language strategies at the 15-month time point, even after accounting for maternal education, family income, maternal language productivity, and children's communicative attempts. Mothers' language use with their children was highly predictive of later maternal language use, as early as age 6 months. Children's communication also influenced concurrent maternal language productivity. Thus, programs to enhance maternal language use would need to begin in infancy, promoting varied and increased maternal language use and also encouraging children's communication.

  2. Signs and stability in higher-derivative gravity

    NASA Astrophysics Data System (ADS)

    Narain, Gaurav

    2018-02-01

Perturbatively renormalizable higher-derivative gravity in four space-time dimensions with arbitrary signs of couplings is considered. A systematic analysis of the action in Lorentzian flat space-time, requiring the absence of tachyons, fixes the signs. The Feynman +iε prescription for these signs further grants the necessary convergence of the path integral, suppressing field modes with large action. This also leads to a sensible Wick rotation under which quantum computations can be performed. Running couplings for these signs of parameters render the massive tensor ghost innocuous, leading to a stable and ghost-free renormalizable theory in four space-time dimensions. The theory has a transition point, arising from the renormalization group (RG) equations, where the coefficient of R² diverges without affecting the perturbative quantum field theory (QFT). Redefining this coefficient gives a better handle on the theory around the transition point. The flow equations push the parameters across the transition point. The flow beyond the transition point is analyzed using the one-loop RG equations, which show that the regime beyond it has unphysical properties: there are tachyons, the path integral loses positive definiteness, Newton's constant G becomes negative and large, and the perturbative parameters become large. These shortcomings indicate a lack of completeness and the need for a nonperturbative treatment of the theory beyond the transition point.

  3. A robust interrupted time series model for analyzing complex health care intervention data.

    PubMed

    Cruz, Maricela; Bender, Miriam; Ombao, Hernando

    2017-12-20

    Current health policy calls for greater use of evidence-based care delivery services to improve patient quality and safety outcomes. Care delivery is complex, with interacting and interdependent components that challenge traditional statistical analytic techniques, in particular, when modeling a time series of outcomes data that might be "interrupted" by a change in a particular method of health care delivery. Interrupted time series (ITS) is a robust quasi-experimental design with the ability to infer the effectiveness of an intervention that accounts for data dependency. Current standardized methods for analyzing ITS data do not model changes in variation and correlation following the intervention. This is a key limitation since it is plausible for data variability and dependency to change because of the intervention. Moreover, present methodology either assumes a prespecified interruption time point with an instantaneous effect or removes data for which the effect of intervention is not fully realized. In this paper, we describe and develop a novel robust interrupted time series (robust-ITS) model that overcomes these omissions and limitations. The robust-ITS model formally performs inference on (1) identifying the change point; (2) differences in preintervention and postintervention correlation; (3) differences in the outcome variance preintervention and postintervention; and (4) differences in the mean preintervention and postintervention. We illustrate the proposed method by analyzing patient satisfaction data from a hospital that implemented and evaluated a new nursing care delivery model as the intervention of interest. The robust-ITS model is implemented in an R Shiny toolbox, which is freely available to the community. Copyright © 2017 John Wiley & Sons, Ltd.
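The pre/post mean structure that robust-ITS extends can be sketched with a classical segmented regression. Unlike robust-ITS, the change point here is prespecified and only the mean is modeled; the series length, change point, and effect sizes below are illustrative assumptions.

```python
import numpy as np

# Minimal classical interrupted-time-series (segmented regression) sketch.
rng = np.random.default_rng(1)
n, T0 = 100, 60                        # series length and intervention time (assumed)
t = np.arange(n)
post = (t >= T0).astype(float)
# true model: baseline 5.0, pre slope 0.02, level change 3.0, slope change 0.05
y = 5.0 + 0.02 * t + 3.0 * post + 0.05 * post * (t - T0) + rng.normal(0, 0.5, n)

# design: intercept, pre-intervention slope, level change at T0, slope change after T0
X = np.column_stack([np.ones(n), t, post, post * (t - T0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # approximately recovers [5.0, 0.02, 3.0, 0.05]
```

The third and fourth coefficients are the estimated level and slope changes attributable to the intervention; robust-ITS additionally estimates the change point itself and pre/post differences in variance and correlation.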

  4. Comparison study of global and local approaches describing critical phenomena on the Polish stock exchange market

    NASA Astrophysics Data System (ADS)

    Czarnecki, Łukasz; Grech, Dariusz; Pamuła, Grzegorz

    2008-12-01

We confront global and local methods of analyzing financial crash-like events on the Polish financial market from the critical-phenomena point of view. These methods are based on the analysis of log-periodicity and of the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), which describes the largest developing financial market in Europe, is analyzed on a daily time horizon. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the corresponding power-law-divergent price model. Predictions of the log-periodicity scenario are verified for all main crashes in WIG history. It is argued that crash predictions within the log-periodicity model depend strongly on the amount of data used in the fit and are therefore likely to contain large inaccuracies. Turning to the local fractal description, we calculate the local (time-dependent) Hurst exponent H for the WIG time series and find a relation between the behavior of the local fractal properties of the series and the appearance of crashes on the financial market. The latter method seems to work better than the global approach, for developing as well as developed markets. The situation on the market around the Fed intervention of September 2007, and immediately after it, is also analyzed from the fractional Brownian motion point of view.
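The Hurst exponent tracked locally in the study above is commonly estimated with the rescaled-range (R/S) statistic. A minimal global R/S sketch follows; the scale list and segment handling are generic illustrative choices, not the authors' procedure (which applies such an estimate in a moving window).

```python
import numpy as np

def hurst_rs(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x by the rescaled-range method."""
    rs_means = []
    for s in scales:
        rs = []
        for i in range(len(x) // s):
            seg = x[i * s:(i + 1) * s]
            z = np.cumsum(seg - seg.mean())     # mean-adjusted cumulative sum
            r = z.max() - z.min()               # range of the cumulative deviation
            sd = seg.std()
            if sd > 0:
                rs.append(r / sd)
        rs_means.append(np.mean(rs))
    # H is the slope of log(R/S) versus log(scale)
    return np.polyfit(np.log(scales), np.log(rs_means), 1)[0]

rng = np.random.default_rng(2)
H = hurst_rs(rng.standard_normal(4096))
print(H)   # near 0.5 for uncorrelated (white-noise) increments
```

Values of H drifting away from 0.5 in a moving window signal changing persistence of the series, which is the diagnostic the local fractal approach exploits near crashes.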

  5. Study protocol to examine the effects of spaceflight and a spaceflight analog on neurocognitive performance: extent, longevity, and neural bases.

    PubMed

    Koppelmans, Vincent; Erdeniz, Burak; De Dios, Yiri E; Wood, Scott J; Reuter-Lorenz, Patricia A; Kofman, Igor; Bloomberg, Jacob J; Mulavara, Ajitkumar P; Seidler, Rachael D

    2013-12-18

Long-duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor in spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions, such as cognition, is as yet largely unknown, but it is important for the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity, and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. The study includes three groups (astronauts, bed-rest subjects, and ground-based control subjects); for each group the design is single-group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts assessed at two time points before, three time points during, and four time points after a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilt bed rest. Bed-rest subjects will be assessed at two time points before, two time points during, and three time points after bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained. In flight, astronauts will complete some of the tasks that they complete pre- and post-flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance. Changes over time and associations between cognition, motor behavior, and brain structure and function will be analyzed. This study explores how spaceflight-induced brain changes affect functional performance. This understanding could aid the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight.

  6. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    NASA Astrophysics Data System (ADS)

    Bunds, M. P.

    2017-12-01

Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, their applications, as well as methods to generate, manipulate, and analyze them, warrant inclusion in the undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop the needed technical skills in laboratory and class activities. The students then apply these skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate the sUAS flight elevation, speed, and flight-path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated when planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. 
The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, whereas helping them stay on schedule to finish their projects is a challenge.
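The rasterization step above (point cloud to digital surface model) amounts to binning points into grid cells and keeping the highest return per cell. A minimal sketch; the tiny synthetic cloud and 1-m cell size are illustrative assumptions.

```python
import numpy as np

def rasterize_dsm(points, cell=1.0):
    """Grid an (n, 3) point cloud; keep the highest z in each cell (a DSM)."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    nx, ny = xy[:, 0].max() + 1, xy[:, 1].max() + 1
    dsm = np.full((nx, ny), np.nan)          # NaN marks empty cells
    for (i, j), z in zip(xy, points[:, 2]):
        if np.isnan(dsm[i, j]) or z > dsm[i, j]:
            dsm[i, j] = z                    # keep the highest return in the cell
    return dsm

pts = np.array([[0.2, 0.3, 1.0],   # two points fall in cell (0, 0) ...
                [0.8, 0.1, 2.5],
                [1.5, 0.4, 4.0]])  # ... one in the neighboring cell (1, 0)
dsm = rasterize_dsm(pts)
print(dsm)   # cell (0,0) -> 2.5, cell (1,0) -> 4.0
```

Keeping the maximum z per cell yields a surface model (canopy/structure top); taking the minimum or mean instead would approximate a terrain model, which is a common follow-on step before accuracy assessment in GIS software.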

  7. Single Cell Analysis to locate the Restriction Point with respect to E2F Expression

    NASA Astrophysics Data System (ADS)

    Pimienta, R.; Johnson, A.

    2011-12-01

The restriction point is a G1-phase checkpoint that regulates passage through the cell cycle and is misregulated in all known types of cancer. The Rb-E2F switch is thought to be one of the most relevant molecular mechanisms regulating the restriction point in mammalian cells. However, recent experiments have brought the timing of the restriction point into question. In previous studies, cells were analyzed as populations, which prevented an accurate determination of the restriction point. By creating and analyzing an E2F-GFP reporter in single cells, we can pinpoint the timing of E2F activation and determine whether it coincides with the restriction point. Using calcium phosphate and Fugene, we transfected human embryonic kidney (293T) cells with a CMV-GFP plasmid and an E2F-GFP reporter. Based on our results, calcium phosphate appears more effective than Fugene at transfecting mammalian cells: the calcium phosphate transfection produced 9.59% more fluorescent cells than Fugene. However, this result was obtained only with the CMV-GFP plasmid and not with the E2F-GFP reporter, which was not properly expressed in 293T cells. We will continue troubleshooting this reporter as we proceed with our research. Once the reporter is properly cloned, we will transfect it into retinal pigmented epithelial (RPE1-hTERT) cells using the calcium phosphate method. RPE1-hTERT cells are immortalized with telomerase and are closer to normal cells than tumor-derived cell lines. Through this research we will better understand commitment to the mammalian cell cycle.

  8. Composite analysis for Escherichia coli at coastal beaches

    USGS Publications Warehouse

    Bertke, E.E.

    2007-01-01

    At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often used to compare to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield equally accurate data as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
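Under ideal mixing, the equal-aliquot compositing described above yields a concentration equal to the arithmetic average of the point concentrations, which is why the two approaches can be expected to agree. A minimal sketch with illustrative concentrations (not the study's data):

```python
import numpy as np

# Three point samples across a stratified beach (E. coli, CFU per 100 mL; assumed values)
point_conc = np.array([120.0, 450.0, 80.0])
aliquot = 100.0                                  # equal aliquot volume in mL

# total organisms divided by total volume equals the mean of the concentrations
composite = (point_conc * aliquot).sum() / (aliquot * len(point_conc))
print(composite, point_conc.mean())              # identical under ideal mixing
```

In practice the equivalence holds only approximately (mixing, subsampling, and plating variability intervene), which is what the study's paired comparison of 116 composite versus averaged samples tested.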

  9. Human dynamics of spending: Longitudinal study of a coalition loyalty program

    NASA Astrophysics Data System (ADS)

    Yi, Il Gu; Jeong, Hyang Min; Choi, Woosuk; Jang, Seungkwon; Lee, Heejin; Kim, Beom Jun

    2014-09-01

    Large-scale data of a coalition loyalty program is analyzed in terms of the temporal dynamics of customers' behaviors. We report that the two main activities of a loyalty program, earning and redemption of points, exhibit very different behaviors. It is also found that as customers become older from their early 20's, both male and female customers increase their earning and redemption activities until they arrive at the turning points, beyond which both activities decrease. The positions of turning points as well as the maximum earned and redeemed points are found to differ for males and females. On top of these temporal behaviors, we identify that there exists a learning effect and customers learn how to earn and redeem points as their experiences accumulate in time.

  10. Dynamical topology and statistical properties of spatiotemporal chaos.

    PubMed

    Zhuang, Quntao; Gao, Xun; Ouyang, Qi; Wang, Hongli

    2012-12-01

For spatiotemporal chaos described by partial differential equations, there are generally locations where the dynamical variable achieves its local extremum or where the time partial derivative of the variable vanishes instantaneously. To a large extent, the location and movement of these topologically special points determine the qualitative structure of the disordered states. We analyze numerically the statistical properties of the topologically special points in one-dimensional spatiotemporal chaos. The probability distribution functions for the number of points, their lifespan, and the distance covered during their lifetime are obtained from numerical simulations. Mathematically, we establish a probabilistic model to describe the dynamics of these topologically special points. Although these special points are defined differently in different spatiotemporal chaotic systems, their dynamics can be described in a uniform approach.
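Locating the two kinds of special points in a discretized 1D field reduces to sign-change detection. A sketch follows; the smooth test field u(x, t) = sin(3x − t) is an illustrative assumption, not the chaotic PDE solutions analyzed in the paper.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 600, endpoint=False)
t, dt = 0.0, 1e-3
u_now = np.sin(3 * x - t)
u_next = np.sin(3 * x - (t + dt))

# local spatial extrema of u: sign change of the forward spatial difference
du = np.diff(u_now)
extrema = np.where(np.sign(du[:-1]) != np.sign(du[1:]))[0] + 1

# points where du/dt vanishes instantaneously: sign change of the time difference
dudt = (u_next - u_now) / dt
zeros = np.where(np.sign(dudt[:-1]) != np.sign(dudt[1:]))[0]

print(len(extrema), len(zeros))   # 6 extrema and 6 zero-crossings for sin(3x - t)
```

In a full simulation one would repeat this at each time step and link nearby detections between steps to track each special point's birth, motion, and death, from which lifespan and travel-distance distributions follow.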

  11. Distant memories: a prospective study of vantage point of trauma memories.

    PubMed

    Kenny, Lucy M; Bryant, Richard A; Silove, Derrick; Creamer, Mark; O'Donnell, Meaghan; McFarlane, Alexander C

    2009-09-01

    Adopting an observer perspective to recall trauma memories may function as a form of avoidance that maintains posttraumatic stress disorder (PTSD). We conducted a prospective study to analyze the relationship between memory vantage point and PTSD symptoms. Participants (N= 947) identified the vantage point of their trauma memory and reported PTSD symptoms within 4 weeks of the trauma; 730 participants repeated this process 12 months later. Initially recalling the trauma from an observer vantage point was related to more severe PTSD symptoms at that time and 12 months later. Shifting from a field to an observer perspective a year after trauma was associated with greater PTSD severity at 12 months. These results suggest that remembering trauma from an observer vantage point is related to both immediate and ongoing PTSD symptoms.

  12. The impact of science notebook writing on ELL and low-SES students' science language development and conceptual understanding

    NASA Astrophysics Data System (ADS)

    Huerta, Margarita

    This quantitative study explored the impact of literacy integration in a science inquiry classroom involving the use of science notebooks on the academic language development and conceptual understanding of students from diverse (i.e., English Language Learners, or ELLs) and low socio-economic status (low-SES) backgrounds. The study derived from a randomized, longitudinal, field-based NSF funded research project (NSF Award No. DRL - 0822343) targeting ELL and non-ELL students from low-SES backgrounds in a large urban school district in Southeast Texas. The study used a scoring rubric (modified and tested for validity and reliability) to analyze fifth-grade school students' science notebook entries. Scores for academic language quality (or, for brevity, language ) were used to compare language growth over time across three time points (i.e., beginning, middle, and end of the school year) and to compare students across categories (ELL, former ELL, non-ELL, and gender) using descriptive statistics and mixed between-within subjects analysis of variance (ANOVA). Scores for conceptual understanding (or, for brevity, concept) were used to compare students across categories (ELL, former ELL, non-ELL, and gender) in three domains using descriptive statistics and ANOVA. A correlational analysis was conducted to explore the relationship, if any, between language scores and concept scores for each group. Students demonstrated statistically significant growth over time in their academic language as reflected by science notebook scores. While ELL students scored lower than former ELL and non-ELL students at the first two time points, they caught up to their peers by the third time point. Similarly, females outperformed males in language scores in the first two time points, but males caught up to females in the third time point. 
In analyzing conceptual scores, ELLs had statistically significantly lower scores than former-ELL and non-ELL students, and females outperformed males in the first two domains. These differences, however, were not statistically significant in the last domain. Last, correlations between language and concept scores were, overall, positive, large, and significant across domains and groups. The study presents a rubric useful for quantifying diverse students' science notebook entries, and the findings add to the sparse research on the impact of writing on diverse students' language development and conceptual understanding in science.

  13. Application of wireless sensor network technology in logistics information system

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen

    2017-04-01

This paper introduces the basic concepts of active RFID based on wireless sensor networks (WSN-ARFID) and analyzes the shortcomings of existing RFID-based logistics monitoring systems. Integrating wireless sensor network technology with RFID technology, a new real-time logistics detection system based on WSN and RFID is described, a model of a logistics system based on WSN-ARFID is proposed, and the feasibility of applying this technology to the logistics field is analyzed.

  14. On a fourth order accurate implicit finite difference scheme for hyperbolic conservation laws. II - Five-point schemes

    NASA Technical Reports Server (NTRS)

    Harten, A.; Tal-Ezer, H.

    1981-01-01

This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
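As context, the second-order Crank-Nicolson scheme that the 4-4 family generalizes can be sketched for linear advection with periodic boundaries; the grid size, wave speed, and time step below are illustrative assumptions. Because the centered-difference operator is skew-symmetric, the scheme is nondissipative: discrete mass and the L2 norm are both conserved.

```python
import numpy as np

# Crank-Nicolson for u_t + a u_x = 0 on a periodic grid (dense solve for clarity).
n, a, dx, dt = 64, 1.0, 1.0 / 64, 0.01
lam = a * dt / dx                    # Courant number (no stability limit here)

# centered-difference circulant D: (D u)_j = (u_{j+1} - u_{j-1}) / 2
D = np.zeros((n, n))
for j in range(n):
    D[j, (j + 1) % n] = 0.5
    D[j, (j - 1) % n] = -0.5

I = np.eye(n)
A = I + 0.5 * lam * D                # implicit (left) operator
B = I - 0.5 * lam * D                # explicit (right) operator

x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)
mass0, norm0 = u.sum(), np.linalg.norm(u)
for _ in range(100):
    u = np.linalg.solve(A, B @ u)    # one implicit time step

print(abs(u.sum() - mass0), np.linalg.norm(u) / norm0)   # both conservation errors tiny
```

The 4-4 schemes of the paper replace this three-point, second-order spatial operator with a five-point, fourth-order one and a matching fourth-order time discretization, while keeping the same nondissipative, unconditionally stable character.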

  15. Ion beam probing of electrostatic fields

    NASA Technical Reports Server (NTRS)

    Persson, H.

    1979-01-01

    The determination of a cylindrically symmetric, time-independent electrostatic potential V in a magnetic field B with the same symmetry by measurements of the deflection of a primary beam of ions is analyzed and substantiated by examples. Special attention is given to the requirements on canonical angular momentum and total energy set by an arbitrary, nonmonotone V, to scaling laws obtained by normalization, and to the analogy with ionospheric sounding. The inversion procedure with the Abel analysis of an equivalent problem with a one-dimensional fictitious potential is used in a numerical experiment with application to the NASA Lewis Modified Penning Discharge. The determination of V from a study of secondary beams of ions with increased charge produced by hot plasma electrons is also analyzed, both from a general point of view and with application to the NASA Lewis SUMMA experiment. Simple formulas and geometrical constructions are given for the minimum energy necessary to reach the axis, the whole plasma, and any point in the magnetic field. The common, simplifying assumption that V is a small perturbation is critically and constructively analyzed; an iteration scheme for successively correcting the orbits and points of ionization for the electrostatic potential is suggested.
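The Abel analysis mentioned above rests on the forward Abel transform, the line-of-sight projection F(y) of a cylindrically symmetric profile f(r). A minimal numerical sketch; the substitution r = sqrt(y² + s²) removes the integrable singularity at r = y, and the uniform test profile is an illustrative assumption.

```python
import numpy as np

def abel_forward(f, y, R, ns=2000):
    """F(y) = 2 * integral_y^R f(r) r / sqrt(r^2 - y^2) dr, computed as
    2 * integral_0^sqrt(R^2 - y^2) f(sqrt(y^2 + s^2)) ds (singularity removed)."""
    smax = np.sqrt(max(R * R - y * y, 0.0))
    s = np.linspace(0.0, smax, ns)
    vals = f(np.sqrt(y * y + s * s))
    ds = s[1] - s[0]
    return 2.0 * ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule

R = 1.0
f_uniform = lambda r: np.where(r <= R, 1.0, 0.0)   # known analytic test pair
y = 0.6
F = abel_forward(f_uniform, y, R)
print(F, 2 * np.sqrt(R * R - y * y))   # both 1.6 for the uniform profile
```

The inversion used in the paper runs this relationship backward, recovering f(r) from measured projections F(y), for which the forward transform above is the standard validation check.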

  16. An Expansion of Glider Observation STrategies to Systematically Transmit and Analyze Preferred Waypoints of Underwater Gliders

    DTIC Science & Technology

    2015-01-01

Glider pilots do not use the temporal waypoint estimate from GOST in their control of the glider; waypoints are simply treated as a sequence of points. Glider pilots' interpretation of waypoints as only sequential, rather than temporal, will eliminate this as a hindrance to the system's use.

  17. Artificial neural networks applied to forecasting time series.

    PubMed

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved useful in time series forecasting, and also a standard procedure for the practical application of ANN to this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by each of the four neural network models analyzed is less than 10%. According to the interpretation criteria for this performance, it can be concluded that the neural network models show a close fit in their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP; the GRNN model performs worst. Finally, we analyze the advantages and limitations of ANN, possible solutions to these limitations, and provide an orientation towards future research.
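A minimal pure-numpy version of the simplest model family compared above, a one-hidden-layer MLP trained for one-step-ahead forecasting from lagged values. The lag window, architecture, and learning rate are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
series = np.sin(np.linspace(0, 8 * np.pi, 400))   # synthetic stand-in series
lags = 4                                          # predict y[t] from y[t-4..t-1]
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

nh = 8                                            # hidden units (assumed)
W1 = rng.normal(0, 0.5, (lags, nh)); b1 = np.zeros(nh)
W2 = rng.normal(0, 0.5, (nh, 1));    b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(500):                              # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(np.mean(err ** 2))
    g = 2.0 * err[:, None] / len(y)               # dL/dpred for MSE loss
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)                # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])   # training error decreases substantially
```

Iterating the trained one-step map on its own predictions gives multi-step forecasts; the study's comparison scores such forecasts (for MLP, RBF, GRNN, RNN) against held-out points of the 244-point series.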

  18. Analyzing Impact Factors of Airport Taxiing Delay Based on Ads-B Data

    NASA Astrophysics Data System (ADS)

    Li, J.; Wang, X.; Xu, Y.; Li, Q.; He, C.; Li, Y.

    2017-09-01

Identifying the factors that cause taxiing delay at airports is a prerequisite for optimizing aircraft taxiing schemes and helps improve the efficiency of the taxiing system. Few current studies have quantified the potential influencing factors and further investigated their intrinsic relationships. In view of these problems, this paper uses ADS-B data to calculate taxiing delay time by restoring the taxiing route and identifying key status points, and further analyzes the impact factors of airport taxiing delay by investigating the relationship between delay time and environmental data such as weather, wind, and visibility. A case study at Guangzhou Baiyun Airport validates the effectiveness of the proposed method.

  19. Systematic identification of an integrative network module during senescence from time-series gene expression.

    PubMed

    Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul

    2017-03-15

Cellular senescence irreversibly arrests the growth of human diploid cells. Recent studies have indicated that senescence is a multi-step evolving process related to important complex biological processes. Most studies analyzed only the genes, and their functions, representing each senescence phase, without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism, inferred from the affected genes and their interactions, underlying the senescence process. We suggest a novel computational approach to identify an integrative network that profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected for each time point based on a proposed scoring measure termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct time-point-specific networks. From these constructed networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, it was confirmed that the difference in average perturbation scores of the common network between the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a high association with phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network plays an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on which topological structures change over time. Conversely, we focused on structure that is conserved while its context changes in the course of time, and showed that it can explain the phenotypic changes. 
We expect that the proposed method will help to elucidate biological mechanisms unrevealed by existing approaches.
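The select-then-intersect idea above (score per-gene perturbation at each time point, induce a subnetwork on known interactions, keep the edges conserved across time points) can be sketched with toy data. The gene names, scores, interactions, and cutoff below are illustrative assumptions, not the paper's perturbation-score definition.

```python
# Known protein-protein interactions (undirected, stored as ordered pairs)
interactions = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("D", "E")}

# Perturbation scores per time point (higher = more perturbed; assumed values)
scores = {
    "t1": {"A": 2.1, "B": 1.8, "C": 0.2, "D": 1.5, "E": 0.1},
    "t2": {"A": 1.9, "B": 1.7, "C": 1.6, "D": 1.4, "E": 0.2},
}

def induced_edges(sc, cutoff=1.0):
    """Edges of the interaction network induced by genes scoring >= cutoff."""
    genes = {g for g, v in sc.items() if v >= cutoff}
    return {e for e in interactions if e[0] in genes and e[1] in genes}

# The common network: edges conserved across both time-point-specific networks
common = induced_edges(scores["t1"]) & induced_edges(scores["t2"])
print(sorted(common))
```

Here gene C is perturbed only at t2, so edges through C appear in one time-point network but drop out of the common network, leaving the conserved A-B / A-D module as the candidate signature.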

  20. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

Nowadays, using satellites in space to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever greater demands on space technology because of satellites' wide coverage, timeliness, and freedom from area and national boundaries. At the same time, because of the wide use of satellites, sensors, repeater satellites, and ground receiving stations, ground control systems now face great challenges. Therefore, how to obtain the best value from satellite resources and make full use of them has become an important problem for ground control systems. Satellite scheduling distributes resources to all tasks without conflict so as to complete as many tasks as possible and meet users' requirements, subject to the constraints of satellites, sensors, and ground receiving stations. Considering task size, tasks can be divided into point tasks and area tasks; this paper considers only point targets. The paper first describes the satellite scheduling problem and briefly introduces the theory of satellite scheduling. We also analyze the resource and task constraints in satellite scheduling, and outline the input and output flow of the scheduling process. On the basis of these analyses, we put forward a scheduling model, a multi-variable optimization model for multi-satellite, point-target tasks in swinging mode. In this model the scheduling problem is transformed into a parametric optimization problem; the parameter to be optimized is the swinging angle of every time window. 
In view of efficiency and accuracy, several important subproblems of satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of the model are put forward. To solve the model, we introduce the concept of an activity sequence map, by which the choice of activity and the start time of the activity can be separated. We also introduce three neighborhood operators to search the solution space; the front-movement and back-movement remaining times are used to analyze the feasibility of generating solutions from these neighborhood operators. Finally, an algorithm based on a genetic algorithm is put forward to solve the problem and model; population initialization, the crossover operator, the mutation operator, individual evaluation, the collision-decrease operator, the selection operator, and the collision-elimination operator are designed in the paper. The scheduling results and a simulation of a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15, and 25 degrees. The results show that the model and algorithm are more effective than those without swinging mode.
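The conflict-free task-selection core of the problem above can be sketched with a toy genetic algorithm. The observation time windows, GA parameters, and penalty weight are illustrative assumptions; the paper's full model (swinging angles, multi-satellite resources, collision operators) is not reproduced here.

```python
import random

random.seed(4)
# Observation time windows for six point-target tasks (assumed); adjacent
# windows overlap, so at most three tasks can be scheduled conflict-free.
windows = [(0, 10), (5, 15), (12, 20), (18, 30), (25, 35), (33, 40)]

def fitness(bits):
    """Reward scheduled tasks, heavily punish overlapping (conflicting) pairs."""
    chosen = [w for w, b in zip(windows, bits) if b]
    conflicts = sum(1 for i in range(len(chosen)) for j in range(i + 1, len(chosen))
                    if chosen[i][1] > chosen[j][0] and chosen[j][1] > chosen[i][0])
    return sum(bits) - 10 * conflicts

def evolve(pop=40, gens=60, pmut=0.1):
    P = [[random.randint(0, 1) for _ in windows] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        nxt = P[:2]                             # elitism: keep the two best
        while len(nxt) < pop:
            a, b = random.sample(P[:10], 2)     # truncation selection
            cut = random.randrange(1, len(windows))
            child = a[:cut] + b[cut:]           # one-point crossover
            nxt.append([1 - g if random.random() < pmut else g for g in child])
        P = nxt
    return max(P, key=fitness)

best = evolve()
print(best, fitness(best))
```

The paper's algorithm additionally encodes a swinging angle per time window and uses dedicated collision-decrease and collision-elimination operators instead of a flat penalty, but the select-evaluate-recombine loop has the same shape.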

  1. Women's HIV Disclosure to Family and Friends

    PubMed Central

    Craft, Shonda M.; Reed, Sandra J.

    2012-01-01

Previous researchers have documented rates of HIV disclosure to family at discrete time periods, yet none have taken a dynamic approach to this phenomenon. The purpose of this study is to take the next step and provide a retrospective comparison of rates of women's HIV disclosure to family and friends over a 15-year time span. Of particular interest are the possible influences of social network and relationship characteristics on the time-to-disclosure of serostatus. Time-to-disclosure was analyzed from data provided by 125 HIV-positive women. Participants were primarily married or dating (42%), unemployed (79.2%), African American (68%) women with a high school diploma or less (54.4%). Length of time since diagnosis ranged from 1 month to over 19 years (M=7.1 years). Results pointed to statistically significant differences in time-to-disclosure between family, friends, and sexual partners. Additionally, females and persons with whom the participant had more frequent contact were more likely to be disclosed to, regardless of the type of relationship. The results of this study underscore possible challenges with existing studies which have employed point prevalence designs, and point to new methods which could be helpful in family research. PMID:22313348
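The time-to-disclosure analysis above is a survival-analysis problem: some participants had not yet disclosed at interview, so their times are right-censored. A minimal Kaplan-Meier sketch in pure Python; the durations below are illustrative assumptions, not the study's data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. times: follow-up (e.g. years);
    events: 1 = disclosed (event observed), 0 = censored."""
    data = sorted(zip(times, events))
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)       # events at time t
        n_t = sum(1 for tt, _ in data if tt >= t)     # still at risk at t
        if d > 0:
            surv *= 1.0 - d / n_t                     # product-limit update
            curve.append((t, surv))
        i += sum(1 for tt, _ in data if tt == t)      # skip past ties
    return curve

times = [1, 2, 2, 3, 4, 5]          # illustrative follow-up times
events = [1, 1, 0, 1, 0, 1]          # two participants censored
km = kaplan_meier(times, events)
print(km)   # survival drops only at observed disclosure times
```

Estimating the curve this way uses the censored observations instead of discarding them, which is the advantage over the point-prevalence designs the abstract critiques; group curves (family vs. friends vs. partners) would then be compared with a log-rank test.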

  2. Actuator-Assisted Calibration of Freehand 3D Ultrasound System.

    PubMed

    Koo, Terry K; Silvia, Nathaniel

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.

  4. Quantum point contact displacement transducer for a mechanical resonator at sub-Kelvin temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okazaki, Yuma; Mahboob, Imran; Onomitsu, Koji

Highly sensitive displacement transduction of a 1.67 MHz mechanical resonator with a quantum point contact (QPC) formed in a GaAs heterostructure is demonstrated. By positioning the QPC at the point of maximum mechanical strain on the resonator and operating at 80 mK, a displacement responsivity of 3.81 A/m is measured, which represents a two-order-of-magnitude improvement on previous QPC-based devices. By further analyzing the QPC transport characteristics, a sub-Poisson-noise-limited displacement sensitivity of 25 fm/Hz^(1/2) is determined, which corresponds to a position resolution that is 23 times the standard quantum limit.

  5. Film characteristics pertinent to coherent optical data processing systems.

    PubMed

    Thomas, C E

    1972-08-01

Photographic film is studied quantitatively as the input mechanism for coherent optical data recording and processing systems. The two important film characteristics are the amplitude transmission vs. exposure (T(A) - E) curve and the film noise power spectral density. Both functions are measured as a function of the type of film, the type of developer, development time and temperature, and the exposing and readout light wavelengths. A detailed analysis of a coherent optical spatial frequency analyzer reveals that the optimum dc bias point for 649-F film is an amplitude transmission of about 70%. This operating point yields minimum harmonic and intermodulation distortion, whereas the 50% amplitude transmission bias point recommended by holographers yields maximum diffraction efficiency. It is also shown that the effective ac gain or contrast of the film is nearly independent of the development conditions for a given film. Finally, the linear dynamic range of one particular coherent optical spatial frequency analyzer is shown to be about 40-50 dB.

  6. Message survival and decision dynamics in a class of reactive complex systems subject to external fields

    NASA Astrophysics Data System (ADS)

    Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.

    2014-07-01

    In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
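The spectral extinction condition mentioned above (sufficient conditions in terms of convincement probabilities and the maximum eigenvalue of the connectivity matrix) can be sketched for an epidemic-like special case. The threshold form beta/delta < 1/lambda_max is a standard spectral condition for discrete-time spreading on networks; the adjacency matrix and parameter names below are illustrative, not the paper's exact model:

```python
import numpy as np

# Toy 4-node undirected network. For a spreading process with per-contact
# convincement probability beta and per-step forgetting probability delta,
# a standard sufficient condition for extinction of the opinion is
#   beta / delta < 1 / lambda_max(A),
# where lambda_max is the largest eigenvalue of the adjacency matrix A.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

lam_max = max(np.linalg.eigvalsh(A))   # symmetric matrix -> real spectrum

def dies_out(beta, delta):
    """Sufficient condition for stability of the extinction state."""
    return beta / delta < 1.0 / lam_max

print(round(lam_max, 3))
print(dies_out(beta=0.1, delta=0.8))   # weak spreading: condition satisfied
```

Denser connectivity raises lambda_max and therefore shrinks the parameter region in which the opinion is guaranteed to die out.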

  7. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2, and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers acting as point sensors that can be placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research Systems (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks time-of-flight (TOF) analyzer. The non-commercial systems include a compact double-focusing sector (CDFMS) developed at the University of Minnesota, and a quadrupole ion trap (UF-IT) developed at the University of Florida.

  8. Improved detection limits for electrospray ionization on a magnetic sector mass spectrometer by using an array detector.

    PubMed

    Cody, R B; Tamura, J; Finch, J W; Musselman, B D

    1994-03-01

    Array detection was compared with point detection for solutions of hen egg-white lysozyme, equine myoglobin, and ubiquitin analyzed by electrospray ionization with a magnetic sector mass spectrometer. The detection limits for samples analyzed by using the array detector system were at least 10 times lower than could be achieved by using a point detector on the same mass spectrometer. The minimum detectable quantity of protein corresponded to a signal-to-background ratio of approximately 2∶1 for a 500 amol/μL solution of hen egg-white lysozyme. However, the ultimate practical sample concentrations appeared to be in the 10-100 fmol/μL range for the analysis of dilute solutions of relatively pure proteins or simple mixtures.

  9. Effects of topical hyaluronic acid on corneal wound healing in dogs: a pilot study.

    PubMed

    Gronkiewicz, Kristina M; Giuliano, Elizabeth A; Sharma, Ajay; Mohan, Rajiv R

    2017-03-01

To investigate the efficacy of topical 0.2% hyaluronic acid in canine corneal ulcers in vivo. Six purpose-bred beagles were randomly assigned into two groups (three dogs/group): group A received experimental product (Optimend™, containing 0.2% hyaluronic acid, KineticVet™); group B received control product (Optimend™ without 0.2% hyaluronic acid and supplemented with carboxymethylcellulose). The clinical scorer was masked to product content and subject assignment. Under sedation and topical anesthesia, 6-mm axial corneal epithelial debridements were performed in the left eye. Wounded corneas received standard ulcer treatment and topical product (group A) or control product (group B) three times a day (TID) until ulcers were healed. Slit-lamp biomicroscopy was performed 6 h after wounding and then every 12 h; findings were graded according to the modified McDonald-Shadduck scoring system; extraocular photography was performed after fluorescein stain application at all examination time points. Images were analyzed using NIH ImageJ software to quantify the rate of corneal epithelialization. Gelatin zymography was used to analyze matrix metalloproteinase (MMP) 2 and 9 protein expression in tears collected at set time points during the study period. No statistical differences in clinical ophthalmic examination scores, rate of corneal epithelialization, or MMP2 or MMP9 protein expression were found between groups at any tested time point. The application of 0.2% hyaluronic acid to standard ulcer medical management is well tolerated. Topical addition of the viscoelastic did not accelerate corneal wound healing compared to a topical control with similar viscosity in this study. © 2016 American College of Veterinary Ophthalmologists.

  10. Single-shot polarimetry imaging of multicore fiber.

    PubMed

    Sivankutty, Siddharth; Andresen, Esben Ravn; Bouwmans, Géraud; Brown, Thomas G; Alonso, Miguel A; Rigneault, Hervé

    2016-05-01

    We report an experimental test of single-shot polarimetry applied to the problem of real-time monitoring of the output polarization states in each core within a multicore fiber bundle. The technique uses a stress-engineered optical element, together with an analyzer, and provides a point spread function whose shape unambiguously reveals the polarization state of a point source. We implement this technique to monitor, simultaneously and in real time, the output polarization states of up to 180 single-mode fiber cores in both conventional and polarization-maintaining fiber bundles. We demonstrate also that the technique can be used to fully characterize the polarization properties of each individual fiber core, including eigen-polarization states, phase delay, and diattenuation.

  11. Stability and Hopf Bifurcation for Two Advertising Systems, Coupled with Delay

    NASA Astrophysics Data System (ADS)

    Sterpu, Mihaela; Rocşoreanu, Carmen

    2007-09-01

    Two advertising systems were linearly coupled via the first variable, with time delay. The stability and the Hopf bifurcation corresponding to the symmetric equilibrium point (the origin) in the 4D system are analyzed. Different types of oscillations corresponding to the limit cycles are compared.

  12. School Climate Reports from Norwegian Teachers: A Methodological and Substantive Study.

    ERIC Educational Resources Information Center

    Kallestad, Jan Helge; Olweus, Dan; Alsaker, Francoise

    1998-01-01

    Explores methodological and substantive issues relating to school climate, using a dataset derived from 42 Norwegian schools at two points of time and a standard definition of organizational climate. Identifies and analyzes four school-climate dimensions. Three dimensions (collegial communication, orientation to change, and teacher influence over…

  13. Post-harvest and post-milling changes in wheat grain and flour quality characteristics

    USDA-ARS?s Scientific Manuscript database

    Soft red winter (SRW) wheat grain immediately after harvest and flour after milling were stored for 26 weeks and analyzed for comprehensive milling and baking quality characteristics at different time points to examine the consistency of the quality test results. Increases in falling number (FN) of ...

  14. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
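As a toy illustration of the Box-Jenkins estimation stage, an AR(1) model (the simplest member of the ARIMA family) can be fit by conditional least squares. The data here are simulated and the setup is a sketch, not the study's GPA analysis; a real analysis would precede this with identification via ACF/PACF plots and follow it with residual diagnostics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) process: x_t = c + phi * x_{t-1} + e_t
phi_true, c_true = 0.7, 1.0
x = np.zeros(500)
for t in range(1, 500):
    x[t] = c_true + phi_true * x[t - 1] + rng.normal(scale=0.5)

# Estimation stage: regress x_t on [1, x_{t-1}] (conditional least squares).
X = np.column_stack([np.ones(499), x[:-1]])
c_hat, phi_hat = np.linalg.lstsq(X, x[1:], rcond=None)[0]
print(round(phi_hat, 2))  # should be near the true value 0.7
```

Differencing (the "I" in ARIMA) and moving-average terms extend this same regression idea to non-stationary and serially smoothed series.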

  15. Relation between Video Game Addiction and Interfamily Relationships on Primary School Students

    ERIC Educational Resources Information Center

    Zorbaz, Selen Demirtas; Ulas, Ozlem; Kizildag, Seval

    2015-01-01

    This study seeks to analyze whether or not the following three variables of "Discouraging Family Relations," "Supportive Family Relations," "Total Time Spent on the Computer," and "Grade Point Average (GPA)" predict elementary school students' video game addiction rates, and whether or not there exists a…

  16. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from earth resources satellites. The technique eliminates expensive high-resolution cameras. The system might also be adaptable to monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or x-ray images.

  17. Balloon launched decelerator test program: Post-flight test report, BLDT vehicle AV-2, Viking 1975 project

    NASA Technical Reports Server (NTRS)

    Dickinson, D.; Hicks, F.; Schlemmer, J.; Michel, F.; Moog, R. D.

    1972-01-01

The pertinent events concerned with the launch, float, and flight of balloon launched decelerator test vehicle AV-2 are discussed. The performance of the decelerator system is analyzed. Data on the flight trajectory and decelerator test points at the time of decelerator deployment are provided. A description of the time history of vehicle events and anomalies encountered during the mission is included.

  18. Balloon launched decelerator test program: Post-flight test report, BLDT vehicle AV-3, Viking 1975 project

    NASA Technical Reports Server (NTRS)

    Dickinson, D.; Hicks, F.; Schlemmer, J.; Michel, F.; Moog, R. D.

    1973-01-01

The pertinent events concerned with the launch, float, and flight of balloon launched decelerator test vehicle AV-3 are discussed. The performance of the decelerator system is analyzed. Data on the flight trajectory and decelerator test points at the time of decelerator deployment are provided. A description of the time history of vehicle events and anomalies encountered during the mission is included.

  19. Analyzing latent state-trait and multiple-indicator latent growth curve models as multilevel structural equation models

    PubMed Central

    Geiser, Christian; Bishop, Jacob; Lockhart, Ginger; Shiffman, Saul; Grenard, Jerry L.

    2013-01-01

    Latent state-trait (LST) and latent growth curve (LGC) models are frequently used in the analysis of longitudinal data. Although it is well-known that standard single-indicator LGC models can be analyzed within either the structural equation modeling (SEM) or multilevel (ML; hierarchical linear modeling) frameworks, few researchers realize that LST and multivariate LGC models, which use multiple indicators at each time point, can also be specified as ML models. In the present paper, we demonstrate that using the ML-SEM rather than the SL-SEM framework to estimate the parameters of these models can be practical when the study involves (1) a large number of time points, (2) individually-varying times of observation, (3) unequally spaced time intervals, and/or (4) incomplete data. Despite the practical advantages of the ML-SEM approach under these circumstances, there are also some limitations that researchers should consider. We present an application to an ecological momentary assessment study (N = 158 youths with an average of 23.49 observations of positive mood per person) using the software Mplus (Muthén and Muthén, 1998–2012) and discuss advantages and disadvantages of using the ML-SEM approach to estimate the parameters of LST and multiple-indicator LGC models. PMID:24416023

  20. Which skills and factors better predict winning and losing in high-level men's volleyball?

    PubMed

    Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria

    2013-09-01

    The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.

  1. Identification of Location Specific Feature Points in a Cardiac Cycle Using a Novel Seismocardiogram Spectrum System.

    PubMed

    Lin, Wen-Yen; Chou, Wen-Cheng; Chang, Po-Cheng; Chou, Chung-Chuan; Wen, Ming-Shien; Ho, Ming-Yun; Lee, Wen-Chen; Hsieh, Ming-Jer; Lin, Chung-Chih; Tsai, Tsai-Hsuan; Lee, Ming-Yih

    2018-03-01

Seismocardiogram (SCG), or mechanocardiography, is a noninvasive cardiac diagnostic method; however, previous studies used only a single sensor to detect cardiac mechanical activities, which cannot identify location-specific feature points in a cardiac cycle corresponding to the four valvular auscultation locations. In this study, a multichannel SCG spectrum measurement system was proposed and examined for cardiac activity monitoring to overcome problems such as position dependency, time delay, and signal attenuation that occur in traditional single-channel SCG systems. ECG and multichannel SCG signals were simultaneously recorded in 25 healthy subjects. Cardiac echocardiography was conducted at the same time. SCG traces were analyzed and compared with echocardiographic images for feature point identification. Fifteen feature points were identified in the corresponding SCG traces. Among them, six feature points, including left ventricular lateral wall contraction peak velocity, septal wall contraction peak velocity, transaortic peak flow, transpulmonary peak flow, transmitral ventricular relaxation flow, and transmitral atrial contraction flow, were identified. These new feature points were not observed in previous studies because single-channel SCG could not detect the location-specific signals from other locations due to time delay and signal attenuation. As a result, the multichannel SCG spectrum measurement system can record the corresponding cardiac mechanical activities with location-specific SCG signals, and six new feature points were identified with the system. This new modality may help clinical diagnoses of valvular heart diseases and heart failure in the future.

  2. Cryptozoology Society

    NASA Astrophysics Data System (ADS)

    Richman, Barbara T.

Reports of Loch Ness monsters, Bigfoot, and the Yeti spring up from time to time, sparking scientific controversy about the veracity of these observations. Now an organization has been established to help cull, analyze, and disseminate information on the alleged creatures. The International Society of Cryptozoology, formed at a January meeting at the U.S. National Museum of Natural History of the Smithsonian Institution, will serve as the focal point for the investigation, analysis, publication, and discussion of animals of unexpected form or size, or of unexpected occurrence in time or space.

  3. High precision determination of the melting points of water TIP4P/2005 and water TIP4P/Ice models by the direct coexistence technique

    NASA Astrophysics Data System (ADS)

    Conde, M. M.; Rovere, M.; Gallo, P.

    2017-12-01

    An exhaustive study by molecular dynamics has been performed to analyze the factors that enhance the precision of the technique of direct coexistence for a system of ice and liquid water. The factors analyzed are the stochastic nature of the method, the finite size effects, and the influence of the initial ice configuration used. The results obtained show that the precision of estimates obtained through the technique of direct coexistence is markedly affected by the effects of finite size, requiring systems with a large number of molecules to reduce the error bar of the melting point. This increase in size causes an increase in the simulation time, but the estimate of the melting point with a great accuracy is important, for example, in studies on the ice surface. We also verified that the choice of the initial ice Ih configuration with different proton arrangements does not significantly affect the estimate of the melting point. Importantly this study leads us to estimate the melting point at ambient pressure of two of the most popular models of water, TIP4P/2005 and TIP4P/Ice, with the greatest precision to date.

  4. Analysis of the release process of phenylpropanolamine hydrochloride from ethylcellulose matrix granules III. Effects of the dissolution condition on the release process.

    PubMed

    Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu

    2006-08-01

In the pharmaceutical development of a controlled-release drug, it is important and necessary to understand the complete release properties. As a first step, dissolution tests under various conditions are selected for in vitro testing, and the results are usually analyzed following Drug Approval and Licensing Procedures. In this test, three time points, corresponding to release ratios of 0.2-0.4, 0.4-0.6, and over 0.7, respectively, should be selected in advance, and the measured values are judged to be inside or outside the prescribed limits at each time point. This method is simple and useful, but the details of the release properties cannot be clarified or confirmed. The validity of analyzing the dissolution test with a combination of the square-root time law and cube-root law equations to capture the full drug release properties was confirmed by comparing simulated values with those measured in the previous papers. Dissolution tests under various conditions affecting drug release in the human body were then examined, and the results were analyzed by both methods to identify their strengths and weaknesses. This should make control of the pharmaceutical preparation and manufacturing process, and understanding of the drug release properties, more efficient. Analysis using the combination of the square-root time law and cube-root law equations is considered very useful and efficient, and the accuracy of predicting drug release properties in the human body was improved and clarified.
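The two classical release laws combined above can be sketched numerically. The data and fitted constants below are hypothetical, and the specific forms used (Higuchi-type square-root law, Q(t) = kH·√t, and a Hixson-Crowell-type cube-root law, 1 − (1 − Q(t))^(1/3) = kC·t, with Q the released fraction) are standard textbook versions, not necessarily the paper's exact equations:

```python
import numpy as np

# Hypothetical dissolution profile: time (hours) and released fraction.
t = np.array([0.5, 1, 2, 4, 6, 8])
Q = np.array([0.14, 0.20, 0.28, 0.40, 0.49, 0.56])

# Square-root time law: Q = kH * sqrt(t); least-squares slope through origin.
kH = np.sum(np.sqrt(t) * Q) / np.sum(t)

# Cube-root law: 1 - (1 - Q)**(1/3) = kC * t; same through-origin fit
# on the cube-root-transformed data.
y = 1 - (1 - Q) ** (1 / 3)
kC = np.sum(t * y) / np.sum(t ** 2)

print(round(kH, 3), round(kC, 4))
```

Comparing the residuals of the two fits (or of their combination) over the full profile is what distinguishes diffusion-dominated from erosion-dominated release, which a three-point pass/fail test cannot do.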

  5. Inequality measures for wealth distribution: Population vs individuals perspective

    NASA Astrophysics Data System (ADS)

    Pascoal, R.; Rocha, H.

    2018-02-01

Economic inequality is nowadays frequently perceived as following a growing trend, with impact on political and religious agendas. However, there is a wide range of inequality measures, each of which points to a possibly different degree of inequality. Furthermore, regardless of the measure used, it only acknowledges the momentary population inequality, failing to capture the individuals' evolution over time. In this paper, several inequality measures were analyzed in order to compare the typical single-time-instant degree of wealth inequality (population perspective) to the one obtained from the individuals' wealth mean over several time instants (individuals perspective). The proposed generalization of a simple additive model for the limited-time average of an individual's wealth allows us to verify that the inequality measures typically used for a given snapshot of the population significantly overestimate the individuals' wealth inequality over time. Moreover, this is more extreme for the ratios than for the indices analyzed.
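The population-vs-individuals contrast can be illustrated with a small simulation. The setup is a deliberately extreme hypothetical (wealth fluctuates independently over time with no persistent individual differences, which maximizes the gap), not the paper's model:

```python
import random

random.seed(1)

def gini(w):
    """Gini coefficient via the mean absolute difference (O(n^2), fine for small n)."""
    n, mu = len(w), sum(w) / len(w)
    mad = sum(abs(a - b) for a in w for b in w) / (n * n)
    return mad / (2 * mu)

# Each individual's wealth is drawn anew each period (pure fluctuation).
T, N = 50, 200
histories = [[random.lognormvariate(0, 1) for _ in range(T)] for _ in range(N)]

snapshot = [h[0] for h in histories]        # population perspective: one instant
time_avg = [sum(h) / T for h in histories]  # individuals perspective: time mean

snap_gini = gini(snapshot)
avg_gini = gini(time_avg)
print(round(snap_gini, 2), round(avg_gini, 2), snap_gini > avg_gini)
```

Because averaging over time smooths out transient fluctuations, the snapshot Gini exceeds the Gini of time-averaged wealth here, which is the overestimation effect the paper quantifies.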

  6. Time-series analysis of foreign exchange rates using time-dependent pattern entropy

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryuji; Inoue, Masayoshi

    2013-08-01

    Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
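The method described above can be sketched directly: binarize the daily changes, then compute the Shannon entropy of short symbol patterns inside a sliding window. This is an illustrative reconstruction with hypothetical word/window lengths and normalization, not necessarily the authors' exact definition:

```python
import math
import random

def pattern_entropy(series, word=3, window=50):
    """Binarize changes (up=1, down/flat=0), then return the Shannon entropy
    (bits) of `word`-length symbol patterns within each sliding window."""
    sym = [1 if b > a else 0 for a, b in zip(series, series[1:])]
    out = []
    for i in range(len(sym) - window + 1):
        seg = sym[i:i + window]
        counts = {}
        for j in range(window - word + 1):
            pat = tuple(seg[j:j + word])
            counts[pat] = counts.get(pat, 0) + 1
        total = window - word + 1
        out.append(-sum(c / total * math.log2(c / total) for c in counts.values()))
    return out

random.seed(0)
trend = list(range(60))                        # monotone series: one pattern only
noise = [random.random() for _ in range(60)]   # erratic series: many patterns

e_trend = pattern_entropy(trend)
e_noise = pattern_entropy(noise)
print(max(e_trend), round(max(e_noise), 2))
```

A steadily trending rate yields entropy near zero, while erratic variation pushes the entropy toward its maximum (3 bits for length-3 binary words); spikes in this quantity are what the authors associate with turning points of the yen.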

  7. Comparison of the effects of 7.2% hypertonic saline and 20% mannitol on whole blood coagulation and platelet function in dogs with suspected intracranial hypertension - a pilot study.

    PubMed

    Yozova, Ivayla D; Howard, Judith; Henke, Diana; Dirkmann, Daniel; Adamik, Katja N

    2017-06-19

Hyperosmolar therapy with either mannitol or hypertonic saline (HTS) is commonly used in the treatment of intracranial hypertension (ICH). In vitro data indicate that both mannitol and HTS affect coagulation and platelet function in dogs. The aim of this study was to compare the effects of 20% mannitol and 7.2% HTS on whole blood coagulation using rotational thromboelastometry (ROTEM®) and platelet function using a platelet function analyzer (PFA®) in dogs with suspected ICH. Thirty client-owned dogs with suspected ICH needing osmotherapy were randomized to receive either 20% mannitol (5 ml/kg IV over 15 min) or 7.2% HTS (4 ml/kg IV over 5 min). ROTEM® (EXTEM® and FIBTEM® assays) and PFA® analyses (collagen/ADP cartridges) were performed before (T0), as well as 5 (T5), 60 (T60) and 120 (T120) minutes after administration of HTS or mannitol. Data at T5, T60 and T120 were analyzed as a percentage of values at T0 for comparison between groups, and as absolute values for comparison between time points, respectively. No significant difference was found between the groups for the percentage change of any parameter at any time point except for FIBTEM® clotting time. Within each group, no significant difference was found between time points for any parameter except for FIBTEM® clotting time in the HTS group, and EXTEM® and FIBTEM® maximum clot firmness in the mannitol group. Median ROTEM® values lay within institutional reference intervals in both groups at all time points, whereas median PFA® values were above the reference intervals at T5 (both groups) and T60 (HTS group). Using currently recommended doses, mannitol and HTS do not differ in their effects on whole blood coagulation and platelet function in dogs with suspected ICH. Moreover, no relevant impairment of whole blood coagulation was found following treatment with either solution, whereas a short-lived impairment of platelet function was found after both solutions.

  8. Temporal Dependence of Chromosomal Aberration on Radiation Quality and Cellular Genetic Background

    NASA Technical Reports Server (NTRS)

    Lu, Tao; Zhang, Ye; Krieger, Stephanie; Yeshitla, Samrawit; Goss, Rosalin; Bowler, Deborah; Kadhim, Munira; Wilson, Bobby; Wu, Honglu

    2017-01-01

    Radiation-induced cancer risks are driven by genetic instability. It is not well understood how different radiation sources induce genetic instability in cells with different genetic backgrounds. Here we report our studies of genetic instability, particularly chromosome instability assayed by fluorescence in situ hybridization (FISH), in human primary lymphocytes, normal human fibroblasts, and transformed human mammary epithelial cells in a temporal manner after exposure to high-energy protons and Fe ions. Chromosome spreads were prepared 48 hours, 1 week, 2 weeks, and 1 month after radiation exposure. Chromosome aberrations were analyzed with whole-chromosome-specific probes (chr. 3 and chr. 6). After exposure to protons and Fe ions of similar cumulative energy, Fe ions induced more chromosomal aberrations at the early time point (48 hours) in all three types of cells. Over time (after 1 month), more chromosome aberrations were observed in cells exposed to Fe ions than in the same type of cells exposed to protons. While the mammary epithelial cells have higher intrinsic genetic instability and a higher rate of initial chromosome aberrations than the fibroblasts, the fibroblasts retained more chromosomal aberrations after long-term cell culture (1 month) relative to their initial frequency of chromosome aberrations. In lymphocytes, the chromosome aberration frequency 1 month after exposure to Fe ions was close to the unexposed background, whereas the frequency 1 month after exposure to protons was much higher. In addition to human cells, mouse bone marrow cells isolated from strains CBA/CaH and C57BL/6 were irradiated with protons or Fe ions and analyzed for chromosome aberrations at different time points. Cells from CBA mice showed similar frequencies of chromosome aberrations at early and late time points, while cells from C57 mice showed very different rates at early and late time points. Our results suggest that the relative biological effectiveness (RBE) of radiation differs for different radiation sources, for different cell types, and for the same cell type with different genetic backgrounds at different times after radiation exposure. Caution must be taken when using RBE values to estimate biological effects of radiation exposure.

  9. Local pulse wave velocity estimated from small vibrations measured ultrasonically at multiple points on the arterial wall

    NASA Astrophysics Data System (ADS)

    Ito, Mika; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    Pulse wave velocity (PWV) is used as a diagnostic criterion for arteriosclerosis, a major cause of heart disease and cerebrovascular disease. However, there are several problems with conventional PWV measurement techniques. One is that the pulse wave is assumed to have only an incident component propagating at a constant speed from the heart to the femoral artery; another is that PWV is determined only from a characteristic time such as the rise time of the blood pressure waveform. In this study, we noninvasively measured the velocity waveform of small vibrations at multiple points on the carotid arterial wall using ultrasound. Local PWV was determined by analyzing the phase component of the velocity waveform by the least squares method. This method allowed measurement of the time change of the PWV around the arrival time of the pulse wave, making it possible to identify the period during which the waveform is not contaminated by the reflected component.
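    The least-squares approach described above can be illustrated as fitting pulse arrival time against measurement position along the wall. This is a minimal sketch assuming a cross-correlation delay estimate and synthetic Gaussian pulse waveforms; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_pwv(positions_m, waveforms, fs_hz):
    """Estimate local pulse wave velocity from waveforms recorded at
    several positions along the vessel wall.

    The delay of each waveform relative to the first is estimated by
    cross-correlation; arrival time is then regressed on position by
    least squares, and PWV is the reciprocal of the fitted slope.
    """
    ref = waveforms[0]
    n = len(ref)
    delays = []
    for w in waveforms:
        xc = np.correlate(w, ref, mode="full")
        lag = np.argmax(xc) - (n - 1)   # integer-sample delay vs. reference
        delays.append(lag / fs_hz)      # seconds
    slope, _ = np.polyfit(positions_m, delays, 1)  # time = slope * pos + c
    return 1.0 / slope                  # m/s

# synthetic check: a Gaussian pulse travelling at 6 m/s past 4 points 2 mm apart
fs = 100e3                              # sampling rate, Hz
t = np.arange(0, 0.05, 1 / fs)
positions = np.array([0.0, 0.002, 0.004, 0.006])   # m
true_pwv = 6.0
waves = np.array([np.exp(-((t - 0.01 - x / true_pwv) ** 2) / (2 * 0.001 ** 2))
                  for x in positions])
pwv = estimate_pwv(positions, waves, fs)
print(round(pwv, 1))                    # approximately 6.0
```

    With delays regressed on position, the reciprocal of the fitted slope gives the local PWV; a finer sampling rate reduces the quantization error of the integer-lag estimate.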

  10. Study protocol to examine the effects of spaceflight and a spaceflight analog on neurocognitive performance: extent, longevity, and neural bases

    PubMed Central

    2013-01-01

    Background Long-duration spaceflight (i.e., 22 days or longer) has been associated with changes in sensorimotor systems, resulting in difficulties that astronauts experience with posture control, locomotion, and manual control. The microgravity environment is an important causal factor for spaceflight-induced sensorimotor changes. Whether spaceflight also affects other central nervous system functions such as cognition is as yet largely unknown, but is important for the health and performance of crewmembers both in- and post-flight. We are therefore conducting a controlled prospective longitudinal study to investigate the effects of spaceflight on the extent, longevity, and neural bases of sensorimotor and cognitive performance changes. Here we present the protocol of our study. Methods/design This study includes three groups (astronauts, bed rest subjects, ground-based control subjects), for each of which the design is a single group with repeated measures. The effects of spaceflight on the brain will be investigated in astronauts who will be assessed at two time points pre-, three time points during-, and four time points following a spaceflight mission of six months. To parse out the effect of microgravity from the overall effects of spaceflight, we investigate the effects of seventy days of head-down-tilt bed rest. Bed rest subjects will be assessed at two time points before-, two time points during-, and three time points post-bed rest. A third group of ground-based controls will be measured at four time points to assess the reliability of our measures over time. For all participants and at all time points, except in flight, measures of neurocognitive performance, fine motor control, gait, balance, structural MRI (T1, DTI), task fMRI, and functional connectivity MRI will be obtained. In flight, astronauts will complete some of the tasks that they complete pre- and post-flight, including tasks measuring spatial working memory, sensorimotor adaptation, and fine motor performance. Potential changes over time and associations between cognition, motor behavior, and brain structure and function will be analyzed. Discussion This study explores how spaceflight-induced brain changes impact functional performance. This understanding could aid in the design of targeted countermeasures to mitigate the negative effects of long-duration spaceflight. PMID:24350728

  11. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (Visible/Infrared Sensor Trades, Analyses, and Simulations), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems, including imaging, tracking, and point-target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring two-dimensional array sensors, which can be used for either imaging or point-source detection.

  12. Travelling Randomly on the Poincaré Half-Plane with a Pythagorean Compass

    NASA Astrophysics Data System (ADS)

    Cammarota, V.; Orsingher, E.

    2008-02-01

    A random motion on the Poincaré half-plane is studied. A particle runs on the geodesic lines changing direction at Poisson-paced times. The hyperbolic distance is analyzed, also in the case where returns to the starting point are admitted. The main results concern the mean hyperbolic distance (and also the conditional mean distance) in all versions of the motion envisaged. Also an analogous motion on orthogonal circles of the sphere is examined and the evolution of the mean distance from the starting point is investigated.

  13. Accuracy of a continuous noninvasive hemoglobin monitor in intensive care unit patients.

    PubMed

    Frasca, Denis; Dahyot-Fizelier, Claire; Catherine, Karen; Levrat, Quentin; Debaene, Bertrand; Mimoz, Olivier

    2011-10-01

    To determine whether noninvasive hemoglobin measurement by Pulse CO-Oximetry could provide clinically acceptable absolute and trend accuracy in critically ill patients, compared with other invasive methods of hemoglobin assessment available at the bedside and with the gold standard, the laboratory analyzer. Prospective study. Surgical intensive care unit of a university teaching hospital. Sixty-two patients continuously monitored with Pulse CO-Oximetry (Masimo Radical-7). None. Four hundred seventy-one blood samples were analyzed by a point-of-care device (HemoCue 301), a satellite lab CO-Oximeter (Siemens RapidPoint 405), and a laboratory hematology analyzer (Sysmex XT-2000i), which was considered the reference device. Hemoglobin values reported from the invasive methods were compared with the values reported by the Pulse CO-Oximeter at the time of blood draw. When the case-to-case variation was assessed, the bias and limits of agreement were 0.0±1.0 g/dL for the Pulse CO-Oximeter, 0.3±1.3 g/dL for the point-of-care device, and 0.9±0.6 g/dL for the satellite lab CO-Oximeter compared with the reference method. Pulse CO-Oximetry showed trend accuracy similar to that of satellite lab CO-Oximetry, whereas the point-of-care device did not follow the trend of the laboratory analyzer as well as the other test devices. When compared with laboratory reference values, hemoglobin measurement with Pulse CO-Oximetry has absolute and trending accuracy similar to widely used, invasive methods of hemoglobin measurement at the bedside. It has the additional advantage of providing continuous, noninvasive measurements, which may facilitate hemoglobin monitoring in the intensive care unit.
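    The bias and limits of agreement quoted above are conventionally obtained Bland-Altman style: the bias is the mean of the paired differences and the 95% limits are bias ± 1.96 SD. A minimal sketch with made-up hemoglobin pairs, not the study's data:

```python
import numpy as np

def bland_altman(test, reference):
    """Bland-Altman agreement statistics for paired measurements:
    bias (mean difference) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = np.asarray(test, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative paired hemoglobin values in g/dL (made up, not the study's data)
reference = [10.2, 11.5, 9.8, 12.0, 13.1, 10.9]
device    = [10.4, 11.3, 9.9, 12.3, 12.9, 11.0]
bias, (lower, upper) = bland_altman(device, reference)
print(round(bias, 2), round(lower, 2), round(upper, 2))
```

    Reporting bias together with the limits of agreement, rather than a correlation coefficient, is what lets readers judge whether two devices are clinically interchangeable.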

  14. Non-destructive scanning for applied stress by the continuous magnetic Barkhausen noise method

    NASA Astrophysics Data System (ADS)

    Franco Grijalba, Freddy A.; Padovese, L. R.

    2018-01-01

    This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect applied stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of parameters such as pickup coil type, scanning speed, applied magnetic field, and analyzed frequency band on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect applied stress profiles.

  15. MISSE-7 MESA Miniaturized Electrostatic Analyzer - Ion Spectra Analysis Preliminary Results

    NASA Astrophysics Data System (ADS)

    Enloe, C. L.; Balthazor, R. L.; McHarg, M. G.; Clark, A. L.; Waite, D.; Wallerstein, A. J.; Wilson, K. A.

    2011-12-01

    The 7th Materials on the International Space Station Experiment (MISSE-7) was launched in November 2009 and retrieved on STS-134 in April 2011. One of the onboard experiments, the Miniaturized Electrostatic Analyzer (MESA), is a small, low-cost, low-size/weight/power ion and electron spectrometer that was pointed into the ram direction for the majority of its time onboard. Over 800 Mb of data were obtained by taking spectra every three minutes on-orbit. The data have been analyzed with a novel "parameterizing the parameters" method suitable for on-orbit data analysis using low-cost microcontrollers. Preliminary results are shown.

  16. Moire technique utilization for detection and measurement of scoliosis

    NASA Astrophysics Data System (ADS)

    Zawieska, Dorota; Podlasiak, Piotr

    1993-02-01

    The moire projection method enables non-contact measurement of the shape or deformation of different surfaces and constructions by fringe pattern analysis. Fringe map acquisition over the whole surface of the object under test is one of the main advantages compared with point-by-point methods. The computer analyzes the shape of the whole surface, and the user can then select different points or cross-sections of the object map. In this paper, a few typical examples of the application of the moire technique to different medical problems are presented. We also present the equipment; the moire pattern analysis is performed in real time using the phase-stepping method with a CCD camera.

  17. Particle dynamics in the original Schwarzschild metric

    NASA Astrophysics Data System (ADS)

    Fimin, N. N.; Chechetkin, V. M.

    2016-04-01

    The properties of the original Schwarzschild metric for a point gravitating mass are considered. The laws of motion in the corresponding space-time are established, and the transition from the Schwarzschild metric to the metric of a "dusty universe" is studied. The dynamics of a system of particles in the post-Newtonian approximation are analyzed.

  18. Temporal and spatial genetic variability among tarnished plant bug, Lygus lineolaris (Hemiptera: Miridae), populations in a small geographic area

    USDA-ARS?s Scientific Manuscript database

    Lygus lineolaris (Palisot de Beauvois) populations were sampled from five locations near Stoneville, MS, USA at three time points in May, July, and September 2006. Genotype data obtained from 1418 insects using 13 microsatellite markers were analyzed using standard methods to obtain population gene...

  19. Readerbench: Automated Evaluation of Collaboration Based on Cohesion and Dialogism

    ERIC Educational Resources Information Center

    Dascalu, Mihai; Trausan-Matu, Stefan; McNamara, Danielle S.; Dessus, Philippe

    2015-01-01

    As Computer-Supported Collaborative Learning (CSCL) gains a broader usage, the need for automated tools capable of supporting tutors in the time-consuming process of analyzing conversations becomes more pressing. Moreover, collaboration, which presumes the intertwining of ideas or points of view among participants, is a central element of dialogue…

  20. Using high-content imaging data from ToxCast to analyze toxicological tipping points (TDS)

    EPA Science Inventory

    Translating results obtained from high-throughput screening to risk assessment is vital for reducing dependence on animal testing. We studied the effects of 976 chemicals (ToxCast Phase I and II) in HepG2 cells using high-content imaging (HCI) to measure dose and time-depende...

  1. Differential games.

    NASA Technical Reports Server (NTRS)

    Varaiya, P. P.

    1972-01-01

    General discussion of the theory of differential games with two players and zero sum. Games starting at a fixed initial state and ending at a fixed final time are analyzed. Strategies for the games are defined. The existence of saddle values and saddle points is considered. A stochastic version of a differential game is used to examine the synthesis problem.

  2. GC/MS analysis of pesticides in the Ferrara area (Italy) surface water: a chemometric study.

    PubMed

    Pasti, Luisa; Nava, Elisabetta; Morelli, Marco; Bignami, Silvia; Dondi, Francesco

    2007-01-01

    The development of a network to monitor surface waters is a critical element in the assessment, restoration, and protection of water quality. In this study, concentrations of 42 pesticides--determined by GC-MS on samples from 11 points along the rivers of the Ferrara area--were analyzed by chemometric tools. The data were collected over a three-year period (2002-2004). Principal component analysis of the detected pesticides was carried out in order to define the best spatial locations for the sampling points. The results obtained have been interpreted in view of agricultural land use. Time series of pesticide contents in surface waters were analyzed using the autocorrelation function. This chemometric tool reveals seasonal trends and makes it possible to optimize the sampling frequency needed to detect the effective maximum pesticide content.
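    The autocorrelation function used above to expose seasonal trends can be sketched directly; the series below is synthetic with an assumed 12-month cycle, not the Ferrara monitoring data:

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation function for lags 0..max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# synthetic monthly concentrations with a 12-month seasonal cycle plus noise
rng = np.random.default_rng(0)
months = np.arange(36)
series = 5.0 + 3.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.3, 36)
acf = autocorr(series, 12)
print(acf[12] > 0.5)   # pronounced correlation at the seasonal lag
```

    A peak in the ACF at the seasonal lag (here 12 months) is the signature that would justify matching the sampling frequency to the seasonal cycle.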

  3. Analytical and pre-analytical performance characteristics of a novel cartridge-type blood gas analyzer for point-of-care and laboratory testing.

    PubMed

    Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique

    2018-03-01

    Point-of-care blood gas test results may benefit therapeutic decision making by their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. The data obtained were compared against preset quality specifications. Interference of potential pre-analytical confounders with CO-oximetry and electrolyte measurements was studied. The analytical performance was acceptable for all parameters tested. The method comparison demonstrated good agreement with the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate, and tHb; i-STAT CG8+: pO2, Na+, iCa2+, and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue with CO-oximetry results was found. In contrast, significant interference of benzalkonium and hemolysis with electrolyte measurements was found, for which the user is notified by an interferent-specific flag. Identification of sample errors from pre-analytical sources, such as interferences, and automatic corrective actions, along with the analytical performance, ease of use, and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  4. Long-term hematopoietic stem cell damage in a murine model of the hematopoietic syndrome of the acute radiation syndrome.

    PubMed

    Chua, Hui Lin; Plett, P Artur; Sampson, Carol H; Joshi, Mandar; Tabbey, Rebeka; Katz, Barry P; MacVittie, Thomas J; Orschell, Christie M

    2012-10-01

    Residual bone marrow damage (RBMD) persists for years following exposure to radiation and is believed to be due to decreased self-renewal potential of radiation-damaged hematopoietic stem cells (HSC). Current literature has examined primarily sublethal doses of radiation and time points within a few months of exposure. In this study, the authors examined RBMD in mice surviving lethal doses of total body ionizing irradiation (TBI) in a murine model of the Hematopoietic Syndrome of the Acute Radiation Syndrome (H-ARS). Survivors were analyzed at various time points up to 19 mo post-TBI for hematopoietic function. The competitive bone marrow (BM) repopulating potential of 150 purified c-Kit+ Sca-1+ lineage- CD150+ cells (KSLCD150+) remained severely deficient throughout the study compared to KSLCD150+ cells from non-TBI age-matched controls. The minimal engraftment from these TBI HSCs is predominantly myeloid, with minimal production of lymphocytes both in vitro and in vivo. All classes of blood cells as well as BM cellularity were significantly decreased in TBI mice, especially at later time points as mice aged. Primitive BM hematopoietic cells (KSLCD150+) displayed significantly increased cell cycling in TBI mice at all time points, which may be a physiological attempt to maintain HSC numbers in the post-irradiation state. Taken together, these data suggest that the increased cycling among primitive hematopoietic cells in survivors of lethal radiation may contribute to long-term HSC exhaustion and subsequent RBMD, exacerbated by the added insult of aging at later time points.

  5. BATS: a Bayesian user-friendly software for analyzing time series microarray experiments.

    PubMed

    Angelini, Claudia; Cutillo, Luisa; De Canditiis, Daniela; Mutarelli, Margherita; Pensky, Marianna

    2008-10-06

    Gene expression levels in a given cell can be influenced by different factors, such as pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or biological process of interest. From a methodological and computational point of view, analyzing high-dimensional time course microarray data requires a very specific set of tools which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach which allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them, and to estimate their expression profiles. The method is based on explicit expressions for calculations and is therefore computationally very efficient. The software package BATS (Bayesian Analysis of Time Series) presented here implements this methodology. It allows a user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5-6 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties which arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals, and replicated or missing data. BATS is free, user-friendly software for the analysis of both simulated and real microarray time course experiments. The software, the user manual, and a brief illustrative example are freely available online at the BATS website: http://www.na.iac.cnr.it/bats.

  6. Effects of tranexamic acid on coagulation indexes of patients undergoing heart valve replacement surgery under cardiopulmonary bypass

    PubMed Central

    Liu, Fei; Xu, Dong; Zhang, Kefeng; Zhang, Jian

    2016-01-01

    This study explores the effects of tranexamic acid on the coagulation indexes of patients undergoing heart valve replacement surgery under cardiopulmonary bypass (CPB). One hundred patients who met the inclusion criteria were selected and divided into a tranexamic acid group and a non-tranexamic acid group. All underwent heart valve replacement surgery under CPB. Patients in the tranexamic acid group were intravenously injected with 1 g of tranexamic acid (100 mL) after anesthesia induction and before skin incision, and again after the neutralization of heparin. Patients in the non-tranexamic acid group were given 100 mL of normal saline at the corresponding time points. The coagulation indexes of the two groups were then analyzed. The activated clotting time (ACT) of the two groups was within the normal range before CPB, while four coagulation indexes, prothrombin time (PT), activated partial thromboplastin time (APTT), international normalized ratio (INR), and fibrinogen (FIB), increased significantly after surgery; PT and INR in the tranexamic acid group declined markedly after surgery. These findings suggest that the application of tranexamic acid in heart valve replacement surgery under CPB can effectively reduce intraoperative and postoperative blood loss. PMID:27694613

  7. Point-of-care blood gases, electrolytes, chemistries, hemoglobin, and hematocrit measurement in venous samples from pet rabbits.

    PubMed

    Selleri, Paolo; Di Girolamo, Nicola

    2014-01-01

    Point-of-care testing is an attractive option in rabbit medicine because it permits rapid analysis of a panel of electrolytes, chemistries, blood gases, hemoglobin, and hematocrit, requiring only 65 μL of blood. The purpose of this study was to evaluate the performance of a portable clinical analyzer (PCA) for measurement of pH, partial pressure of CO2, sodium, chloride, potassium, blood urea nitrogen, glucose, hematocrit, and hemoglobin in healthy and diseased rabbits. Blood samples obtained from 30 pet rabbits were analyzed immediately after collection by the PCA and immediately thereafter (time <20 sec) by a reference analyzer. Bland-Altman plots and Passing-Bablok regression analysis were used to compare the results. Limits of agreement were wide for all the variables studied, with the exception of pH. Most variables presented significant proportional and/or constant bias. The current study provides sufficient evidence that the PCA is reliable for pH, although its low agreement with the reference analyzer for the other variables does not support their interchangeability. The limits of agreement provided for each variable allow researchers to evaluate whether the PCA is reliable enough for their purposes. To the authors' knowledge, this is the first report evaluating a PCA in the rabbit.

  8. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional (3D) changes after treatment by superimposition became possible. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3D images. To find factors influencing the superimposition error of cephalometric landmarks by the 4 point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion, and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a 3D anatomical reference coordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4 point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
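    The reported 23% of explained variance corresponds to the coefficient of determination (R²) of an ordinary least-squares fit. A minimal sketch of computing R² on illustrative predictors; the data and variable names are assumptions, not the study's landmark measurements:

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination of an ordinary least-squares fit of y on X."""
    A = np.column_stack([np.ones(len(X)), X])     # add an intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# illustrative predictors standing in for distance-to-axis and locating error
rng = np.random.default_rng(42)
n = 200
predictors = rng.normal(size=(n, 3))
# weak dependence on the first predictor -> a modest R^2, as in the study
error = 0.5 * predictors[:, 0] + rng.normal(0, 1.0, n)
r2 = r_squared(predictors, error)
print(0.0 <= r2 <= 1.0)
```

    An R² near 0.23 means the three positional factors account for about a quarter of the reorientation-error variance, leaving most of it to unmodeled sources.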

  9. Statistical Characteristics of Wrong-Way Driving Crashes on Illinois Freeways.

    PubMed

    Zhou, Huaguo; Zhao, Jiguang; Pour-Rouholamin, Mahdi; Tobias, Priscilla A

    2015-01-01

    Driving the wrong way on freeways, namely wrong-way driving (WWD), has been a major concern for more than 6 decades. The purpose of this study was to identify the characteristics of this type of crash and to rank locations/interchanges according to their vulnerability to WWD entries. WWD crash data on Illinois freeways were statistically analyzed for a 6-year period (2004 to 2009) from 3 aspects: crash, vehicle, and person. The temporal distributions, geographical distributions, roadway characteristics, and crash locations were analyzed for WWD crashes. Driver demographic information, physical condition, and injury severity were analyzed for wrong-way drivers. Vehicle characteristics, vehicle operation, and collision results were analyzed for WWD vehicles. A method was developed to identify wrong-way entry points, which was then used in a relative-importance technique to rank different interchange types in terms of potential WWD incidents. The findings revealed that a large proportion of WWD crashes occurred during the weekend from midnight to 5 a.m. Approximately 80% of WWD crashes were located in urban areas, and nearly 70% of wrong-way vehicles were passenger cars. Approximately 58% of wrong-way drivers were driving under the influence (DUI). Of those, nearly 50% were confirmed to be impaired by alcohol, about 4% were impaired by drugs, and more than 3% had been drinking. The interchange ranking analysis found that compressed diamond interchanges, single point diamond interchanges (SPDIs), partial cloverleaf interchanges, and freeway feeders had the highest wrong-way crash rates (wrong-way crashes per 100 interchanges per year). The findings of this study call for more attention to WWD crashes from different aspects such as driver age group, time of day, day of week, and DUI drivers. Based on the analysis of WWD distances, the study explains why a 5-mile radius around a WWD crash location should be studied for fatal WWD crashes with unknown entry points.

  10. Economic Feasibility of Wireless Sensor Network-Based Service Provision in a Duopoly Setting with a Monopolist Operator.

    PubMed

    Sanchis-Cano, Angel; Romero, Julián; Sacoto-Cabrera, Erwin J; Guijarro, Luis

    2017-11-25

    We analyze the feasibility of providing Wireless Sensor Network-data-based services in an Internet of Things scenario from an economic point of view. The scenario has two competing service providers with their own private sensor networks, a network operator, and final users. The scenario is analyzed as two games using game theory. In the first game, sensors decide whether to subscribe to the network operator to upload the collected sensing data, based on a utility function related to the mean service time and the price charged by the operator. In the second game, users decide whether to subscribe to the sensor-data-based service of the service providers based on a Logit discrete choice model related to the quality of the collected data and the subscription price. The sink and user subscription stages are analyzed using population games and discrete choice models, while the network operator's and service providers' pricing stages are analyzed using optimization and Nash equilibrium concepts, respectively. The model is shown to be feasible from an economic point of view for all the actors if there are enough interested final users, and it opens the possibility of developing more efficient models with different types of services.
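    The Logit discrete choice model mentioned above assigns each alternative a probability proportional to the exponential of its utility. A minimal sketch with illustrative quality/price utilities and an opt-out alternative; the numbers are assumptions, not the paper's calibration:

```python
import math

def logit_shares(utilities):
    """Multinomial logit choice probabilities:
    P_i = exp(u_i) / sum_j exp(u_j)."""
    m = max(utilities)                  # subtract the max for numerical stability
    weights = [math.exp(u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# illustrative utilities: perceived data quality minus subscription price,
# plus a no-subscription option with utility 0 (numbers are assumptions)
u_provider1 = 2.0 - 1.2
u_provider2 = 1.5 - 0.8
u_none = 0.0
shares = logit_shares([u_provider1, u_provider2, u_none])
print([round(s, 3) for s in shares])
```

    In a duopoly analysis, each provider's subscriber share then feeds back into its pricing problem, which is where the Nash equilibrium concepts apply.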

  11. Numerical and In Vitro Experimental Investigation of the Hemolytic Performance at the Off-Design Point of an Axial Ventricular Assist Pump.

    PubMed

    Liu, Guang-Mao; Jin, Dong-Hai; Jiang, Xi-Hang; Zhou, Jian-Ye; Zhang, Yan; Chen, Hai-Bo; Hu, Sheng-Shou; Gui, Xing-Min

    Ventricular assist pumps do not always function at the design point; instead, they may operate at unfavorable off-design points. For example, the axial ventricular assist pump FW-2, whose design point is a 5 L/min flow rate against a 100 mm Hg pressure increase at 8,000 rpm, sometimes works at off-design flow rates of 1 to 4 L/min. The hemolytic performance of the FW-2 at both the design point and at off-design points was estimated numerically and tested in vitro. Flow characteristics in the pump were numerically simulated and analyzed, with special attention paid to the scalar shear stress and exposure time. An in vitro hemolysis test was conducted to verify the numerical results. The simulation results showed that the scalar shear stress in the rotor region at the 1 L/min off-design point was 70% greater than at the 5 L/min design point. The hemolysis index at the 1 L/min off-design point was 3.6 times greater than at the 5 L/min design point. The in vitro results showed that the normalized index of hemolysis increased from 0.017 g/100 L at the 5 L/min design point to 0.162 g/100 L at the 1 L/min off-design point. The hemolysis comparison between the different blood pump flow rates will be helpful for future pump design point selection and will guide the usage of ventricular assist pumps. The hemolytic performance of the blood pump at the clinical working point should receive more attention.

  12. Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data

    PubMed Central

    Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J. Antonio; Economos, Jeannie; Flocks, Joan; McCauley, Linda

    2017-01-01

    Affordable measurement of core body temperature, Tc, in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining Tc data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared to describing Tc at a single time point or summaries of the time course into an indicator function (e.g., did Tc ever exceed 38°C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher Tc at some point during the workday compared to those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. PMID:27756853

  13. Novel Analytic Methods Needed for Real-Time Continuous Core Body Temperature Data.

    PubMed

    Hertzberg, Vicki; Mac, Valerie; Elon, Lisa; Mutic, Nathan; Mutic, Abby; Peterman, Katherine; Tovar-Aguilar, J Antonio; Economos, Eugenia; Flocks, Joan; McCauley, Linda

    2016-10-18

    Affordable measurement of core body temperature (T c ) in a continuous, real-time fashion is now possible. With this advance comes a new data analysis paradigm for occupational epidemiology. We characterize issues arising after obtaining T c data over 188 workdays for 83 participating farmworkers, a population vulnerable to effects of rising temperatures due to climate change. We describe a novel approach to these data using smoothing and functional data analysis. This approach highlights different data aspects compared with describing T c at a single time point or summaries of the time course into an indicator function (e.g., did T c ever exceed 38 °C, the threshold limit value for occupational heat exposure). Participants working in ferneries had significantly higher T c at some point during the workday compared with those working in nurseries, despite a shorter workday for fernery participants. Our results typify the challenges and opportunities in analyzing big data streams from real-time physiologic monitoring. © The Author(s) 2016.

  14. Sediment storage time in a simulated meandering river's floodplain, comparisons of point bar and overbank deposits

    NASA Astrophysics Data System (ADS)

    Ackerman, T. R.; Pizzuto, J. E.

    2016-12-01

    Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank), representing the youngest 50% of eroded sediment. Then, until an age of 12 ky, representing the next 48% of eroding point-bar sediment and 45% of eroding overbank sediment, the distributions are well fit by heavy-tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions becomes exponential (light-tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
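
    The survivor-function fits described above can be checked directly against a sample of storage times. This sketch builds an empirical survivor function and estimates a power-law slope in log-log space; synthetic Pareto ages with S(t) = t^-1 (echoing the point-bar slope) stand in for the model's output:

```python
import math
import random

def survivor(ages):
    """Empirical survivor function S(t) = P(storage time > t)."""
    xs = sorted(ages)
    n = len(xs)
    return [(t, 1.0 - (i + 1) / n) for i, t in enumerate(xs)]

def loglog_slope(points):
    """Least-squares slope in log-log space; a power-law tail S(t) ~ t^b
    appears as a straight line of slope b."""
    pts = [(math.log(t), math.log(s)) for t, s in points if t > 0 and s > 0]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

# Synthetic Pareto-distributed storage times with S(t) = 1/t (illustrative):
random.seed(1)
ages = [1.0 / (1.0 - random.random()) for _ in range(20000)]
s = survivor(ages)
slope = loglog_slope([p for p in s if 2.0 <= p[0] <= 100.0])
```

    Restricting the fit window (here 2 to 100) mirrors the paper's practice of fitting the power law only over the midsection between the exponential head and the truncated tail.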

  15. Residual settlements detection of ocean reclaimed lands with multi-platform SAR time series and SBAS technique: a case study of Shanghai Pudong International Airport

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Yang, Tianliang; Zhao, Qing; Pepe, Antonio; Dong, Hongbin; Sun, Zhibin

    2017-09-01

    Shanghai Pudong International Airport is one of the three major international airports in China. The airport is located at the Yangtze estuary, a sensitive belt of sea-land interaction. The majority of the buildings and facilities in the airport are built on ocean-reclaimed land and silt tidal flat, so residual ground settlement may occur after completion of the airport's construction. The current status of the ground settlement of the airport, and whether it is within a safe range, therefore needs to be investigated. In order to continuously monitor the ground settlement of the airport, two Synthetic Aperture Radar (SAR) time series, acquired by X-band TerraSAR-X (TSX) and TanDEM-X (TDX) sensors from December 2009 to December 2010 and from April 2013 to July 2015, were analyzed with the SBAS technique. We first obtained a ground deformation measurement for each SAR subset. Both measurements show that obvious ground subsidence occurred at the airport, especially in the second runway, the second terminal, the sixth cargo plane and the eighth apron. The maximum vertical ground deformation rates of both SAR subset measurements were greater than -30 mm/year, while the cumulative ground deformations reached -30 mm and -35 mm, respectively. After generating the SBAS-retrieved ground deformation for each SAR subset, we performed a joint analysis to combine the time series of each common coherent point by applying a geotechnical model. The results show that three centralized areas of ground deformation existed in the airport, mainly distributed in the sixth cargo plane, the fifth apron and the fourth apron. The maximum vertical cumulative ground subsidence was more than -70 mm. In addition, by analyzing the combined time series of four selected points, we found that the ground deformation rates of the points located at the second runway, the third runway, and the second terminal decreased over time, indicating that the foundations around these points were gradually stabilizing.

  16. [Comparison and Discussion of National/Military Standards Related to Flow Measurement of Medical Injection Pump].

    PubMed

    Zhang, Nan; Zhou, Juan; Yu, Jinlai; Hua, Ziyu; Li, Yongxue; Wu, Jiangang

    2018-05-30

    The medical injection pump is a commonly used, high-risk piece of clinical equipment. Accurate detection of flow is an important aspect of ensuring its reliable operation. In this paper, we carefully studied and analyzed the flow detection methods of the three standards currently used for medical injection pump testing in China. The three standards were compared in terms of standard device, flow test point selection, length of test time and accuracy judgment. The advantages and disadvantages of these standards were analyzed and suggestions for improvement were put forward.

  17. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
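
    A minimal sketch of the asynchronous scheme, using a toy sphere objective and Python threads in place of the paper's G5 cluster and wing MDO problem: each particle is moved and resubmitted as soon as its own evaluation returns, against whatever global best is current, with no iteration barrier.

```python
import concurrent.futures as cf
import random

def sphere(x):  # stand-in objective; the paper's is a wing MDO analysis
    return sum(v * v for v in x)

def async_pso(f, dim=4, n_particles=8, evals=400, workers=4, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [float("inf")] * n_particles
    gbest, gbest_f = pos[0][:], float("inf")

    def step(i):
        # Move particle i against the *current* global best (no barrier).
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                         + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]

    with cf.ThreadPoolExecutor(max_workers=workers) as ex:
        pending = {ex.submit(f, pos[i][:]): i for i in range(n_particles)}
        done = 0
        while done < evals:
            finished, _ = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
            for fut in finished:
                i = pending.pop(fut)
                fx = fut.result()
                if fx < pbest_f[i]:
                    pbest_f[i], pbest[i] = fx, pos[i][:]
                if fx < gbest_f:
                    gbest_f, gbest = fx, pos[i][:]
                step(i)  # update and resubmit immediately
                pending[ex.submit(f, pos[i][:])] = i
                done += 1
    return gbest_f

best = async_pso(sphere)
```

    Because no particle waits for the slowest evaluation of its iteration, a heterogeneous pool stays busy; the cost is that some updates use a slightly stale global best.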

  18. New clinical insights for transiently evoked otoacoustic emission protocols.

    PubMed

    Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw

    2009-08-01

    The objective of the study was to optimize the area of a time-frequency (TF) analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the optimized TF region were higher than the traditional indices in the selected time window, and the difference was statistically significant. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.

  19. Dynamics of f(R) gravity models and asymmetry of time

    NASA Astrophysics Data System (ADS)

    Verma, Murli Manohar; Yadav, Bal Krishna

    We solve the field equations of modified gravity for f(R) model in metric formalism. Further, we obtain the fixed points of the dynamical system in phase-space analysis of f(R) models, both with and without the effects of radiation. The stability of these points is studied against the perturbations in a smooth spatial background by applying the conditions on the eigenvalues of the matrix obtained in the linearized first-order differential equations. Following this, these fixed points are used for analyzing the dynamics of the system during the radiation, matter and acceleration-dominated phases of the universe. Certain linear and quadratic forms of f(R) are determined from the geometrical and physical considerations and the behavior of the scale factor is found for those forms. Further, we also determine the Hubble parameter H(t), the Ricci scalar R and the scale factor a(t) for these cosmic phases. We show the emergence of an asymmetry of time from the dynamics of the scalar field exclusively owing to the f(R) gravity in the Einstein frame that may lead to an arrow of time at a classical level.

  20. User and technical documentation

    NASA Astrophysics Data System (ADS)

    1988-09-01

    The program LIBRATE calculates velocities for trajectories from low earth orbit (LEO) to four of the five libration points (L2, L3, L4, and L5), and from low lunar orbit (LLO) to libration points L1 and L2. The flight to be analyzed departs from a circular orbit of any altitude and inclination about the Earth or Moon and finishes in a circular orbit about the Earth at the desired libration point within a specified flight time. The program produces a matrix of the delta V's needed to complete the desired flight. The user specifies the departure orbit and the maximum flight time. A matrix is then developed with 10 inclinations, ranging from 0 to 90 degrees, forming the columns, and 19 possible flight times, ranging from the input flight time down to 36 hours less than the input value in decrements of 2 hours, forming the rows. This matrix is presented in three different reports: the total delta V's and each of the two delta V components discussed. The input required from the user to define the flight is discussed, the contents of the three output reports are described, and the instructions needed to execute the program are included.
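
    The report-matrix layout described above (10 inclination columns by 19 flight-time rows) can be sketched directly; `delta_v` below is a placeholder callable standing in for LIBRATE's actual trajectory solution:

```python
def flight_time_rows(max_flight_time_hr):
    """19 flight times: the input value down to input - 36 h, in 2 h steps."""
    return [max_flight_time_hr - 2 * k for k in range(19)]

def inclination_cols():
    """10 inclinations spanning 0 to 90 degrees."""
    return [10 * k for k in range(10)]

def build_matrix(max_flight_time_hr, delta_v):
    """delta_v(inclination_deg, flight_time_hr) is a placeholder for the
    trajectory solution; any callable with that signature will do."""
    return [[delta_v(inc, t) for inc in inclination_cols()]
            for t in flight_time_rows(max_flight_time_hr)]

# Toy delta-V surface (invented), just to exercise the grid:
m = build_matrix(120, lambda inc, t: round(3.0 + inc / 100 + 24 / t, 3))
```

    One such grid would be produced per report: total delta-V and each of the two components.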

  1. Re/Os constraint on the time variability of the fine-structure constant.

    PubMed

    Fujii, Yasunori; Iwamoto, Akira

    2003-12-31

    We argue that the accuracy by which the isochron parameters of the decay 187Re-->187Os are determined by dating iron meteorites may constrain the possible time dependence of the decay rate and hence of the fine-structure constant alpha, not directly but only in a model-dependent manner. From this point of view, some of the attempts to analyze the Oklo constraint and the results of the quasistellar-object absorption lines are reexamined.

  2. Context-aware pattern discovery for moving object trajectories

    NASA Astrophysics Data System (ADS)

    Sharif, Mohammad; Asghar Alesheikh, Ali; Kaffash Charandabi, Neda

    2018-05-01

    Movement of point objects is highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns while accounting for the contextual information helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering moving object patterns is analyzing the similarities of their trajectories. This article, therefore, contextualizes the similarity measure of trajectories by not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories. Then, the results of similarity searches are utilized in discovering the relative movement patterns of the moving point objects. Several experiments are conducted on real datasets obtained from commercial airplanes and the weather information during the flights. The results demonstrated the robustness of the DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80% accuracy. Moreover, the results revealed the importance of exploiting contextual information, which can both enhance and restrict movements.
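
    For reference, a minimal scalar DTW; the study's multi-dimensional version would replace the pointwise distance with one that also compares contextual attributes (the distance function below is the usual absolute difference, an assumption for illustration):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a) * len(b)) dynamic time warping distance.
    For multi-dimensional trajectories, `dist` would compare
    (position, context) tuples instead of scalars."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Two trajectories with the same shape but different sampling rates align
# at zero cost, which is exactly why DTW suits trajectory comparison:
t1 = [0, 1, 2, 3, 2, 1, 0]
t2 = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
d_same = dtw(t1, t2)
d_diff = dtw(t1, [5, 5, 5, 5, 5, 5, 5])
```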

  3. Dissipative gravitational bouncer on a vibrating surface

    NASA Astrophysics Data System (ADS)

    Espinoza Ortiz, J. S.; Lagos, R. E.

    2017-12-01

    We study the dynamical behavior of a particle flying under the influence of a gravitational field, with dissipation constant λ (Stokes-like), colliding successive times against a rigid surface vibrating harmonically with restitution coefficient α. We define re-scaled dimensionless dynamical variables, such as the relative particle velocity Ω with respect to the surface’s velocity; and the real parameter τ accounting for the temporal evolution of the system. At the particle-surface contact point and for the k‧th collision, we construct the mapping described by (τk ; Ω k ) in order to analyze the system’s nonlinear dynamical behavior. From the dynamical mapping, the fixed point trajectory is computed and its stability is analyzed. We find the dynamical behavior of the fixed point trajectory to be stable or unstable, depending on the values of the re-scaled vibrating surface amplitude Γ, the restitution coefficient α and the damping constant λ. Other important dynamical aspects such as the phase space volume and the one cycle vibrating surface (decomposed into absorbing and transmitting regions) are also discussed. Furthermore, the model rescues well known results in the limit λ = 0.

  4. Multi-Dimensional Pattern Discovery of Trajectories Using Contextual Information

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Alesheikh, A. A.

    2017-10-01

    Movement of point objects are highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns, while accounting the contextual information, helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering moving objects patterns is analyzing the similarities of their trajectories. This article, therefore, contextualizes the similarity measure of trajectories by not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories. Then, the results of similarity searches are utilized in discovering the relative movement patterns of the moving point objects. Several experiments are conducted on real datasets that were obtained from commercial airplanes and the weather information during the flights. The results yielded the robustness of DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80 % accuracy. Moreover, the results revealed the importance of exploiting contextual information because it can enhance and restrict movements.

  5. Effect of Heat on Space-Time Correlations in Jets

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2006-01-01

    Measurements of space-time correlations of velocity, acquired in jets from acoustic Mach number 0.5 to 1.5 and static temperature ratios up to 2.7 are presented and analyzed. Previous reports of these experiments concentrated on the experimental technique and on validating the data. In the present paper the dataset is analyzed to address the question of how space-time correlations of velocity are different in cold and hot jets. The analysis shows that turbulent kinetic energy intensities, lengthscales, and timescales are impacted by the addition of heat, but by relatively small amounts. This contradicts the models and assumptions of recent aeroacoustic theory trying to predict the noise of hot jets. Once the change in jet potential core length has been factored out, most one- and two-point statistics collapse for all hot and cold jets.
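
    The two-point statistics involved can be illustrated with a toy normalized space-time correlation; the convected synthetic signal below (pure delay, no decorrelation) is an idealization, not the measured jet data, where correlations also decay with separation:

```python
import math
import random

def correlate(u1, u2, lag):
    """Normalized two-point correlation R(lag) = <u1'(t) u2'(t+lag)> / (s1 s2),
    with the lag in samples and primes denoting mean-removed fluctuations."""
    m1, m2 = sum(u1) / len(u1), sum(u2) / len(u2)
    a = [v - m1 for v in u1]
    b = [v - m2 for v in u2]
    pairs = [(a[t], b[t + lag]) for t in range(len(a)) if 0 <= t + lag < len(b)]
    num = sum(x * y for x, y in pairs) / len(pairs)
    s1 = math.sqrt(sum(x * x for x in a) / len(a))
    s2 = math.sqrt(sum(y * y for y in b) / len(b))
    return num / (s1 * s2)

# Synthetic probe pair: the downstream signal is the upstream one delayed
# by 5 samples (frozen convection):
random.seed(3)
up = [random.gauss(0.0, 1.0) for _ in range(2000)]
down = [0.0] * 5 + up[:-5]

r_peak = correlate(up, down, 5)
r_zero = correlate(up, down, 0)
```

    The lag of the correlation peak divided by the probe separation gives the convection velocity; the decay of the peak with separation, absent in this idealization, carries the timescale information the abstract discusses.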

  6. Stability of BDNF in Human Samples Stored Up to 6 Months and Correlations of Serum and EDTA-Plasma Concentrations.

    PubMed

    Polyakova, Maryna; Schlögl, Haiko; Sacher, Julia; Schmidt-Kassow, Maren; Kaiser, Jochen; Stumvoll, Michael; Kratzsch, Jürgen; Schroeter, Matthias L

    2017-06-03

    Brain-derived neurotrophic factor (BDNF), an important neural growth factor, has gained growing interest in neuroscience, but many influencing physiological and analytical aspects still remain unclear. In this study we assessed the impact of storage time at room temperature, repeated freeze/thaw cycles, and storage at -80 °C up to 6 months on serum and ethylenediaminetetraacetic acid (EDTA)-plasma BDNF. Furthermore, we assessed correlations of serum and plasma BDNF concentrations in two independent sets of samples. Coefficients of variation (CVs) for serum BDNF concentrations were significantly lower than CVs of plasma concentrations (n = 245, p = 0.006). Mean serum and plasma concentrations at all analyzed time points remained within the acceptable change limit of the inter-assay precision as declared by the manufacturer. Serum and plasma BDNF concentrations correlated positively in both sets of samples and at all analyzed time points of the stability assessment (r = 0.455 to r_s = 0.596; p < 0.004). In summary, when considering the acceptable change limit, BDNF was stable in serum and in EDTA-plasma up to 6 months. Due to a higher reliability, we suggest favoring serum over EDTA-plasma for future experiments assessing peripheral BDNF concentrations.
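
    The two statistics used above, the coefficient of variation and the correlation between matched serum/plasma measurements, can be sketched as follows; all concentration values are invented for illustration and do not reflect the study's data:

```python
import math

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return 100.0 * sd / m

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

# Repeated measurements of one pooled sample (ng/mL; numbers invented):
serum = [22.1, 21.8, 22.5, 21.9, 22.3]
plasma = [1.10, 1.45, 0.95, 1.30, 1.21]
```

    A lower CV for the serum series is what the study's reliability argument rests on: the same analyte measured with less relative scatter.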

  7. Photometric Analysis and Transit Times of TRAPPIST-1 B and C

    NASA Astrophysics Data System (ADS)

    Morris, Brett M.; Agol, Eric; Hawley, Suzanne L.

    2018-01-01

    TRAPPIST-1 hosts seven Earth-sized planets transiting an M8 star. We observed mid-transit times of each of the inner two planets with the Astrophysical Research Consortium (ARC) 3.5 m Telescope at Apache Point Observatory (APO) to help constrain the planet masses with transit timing variations, and we outline a procedure for analyzing transit observations of late-M stars with APO. The transit times of TRAPPIST-1 b and c are $\mathrm{BJD}_{\mathrm{TDB}} = 2457580.87634^{+0.00034}_{-0.00034}$ and $2457558.89477^{+0.00080}_{-0.00085}$, respectively, which will help constrain the planet masses.

  8. Real-time seam tracking control system based on line laser visions

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi

    2018-07-01

    A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams, and the feature point tracking method and the adaptive fuzzy control algorithm used in the welding process were studied and analyzed. A laser vision sensor and its measuring principle were designed and studied. Before welding, the initial coordinate values of the feature points were obtained using morphological methods. During welding, a target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of that deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a series of smooth robot bias voltages. Three groups of experiments were conducted on different types of curved welds in a strong arc and splash noise environment, using a 120 A short-circuit Metal Active Gas (MAG) welding current. The tracking error was less than 0.32 mm and the sensor's measurement frequency reached 20 Hz. The torch tip moved smoothly during welding, and the weld trajectory was tracked accurately, satisfying the requirements of welding applications.

  9. Diffusion-weighted MR imaging of upper abdominal organs at different time points: Apparent diffusion coefficient normalization using a reference organ.

    PubMed

    Song, Ji Soo; Kwak, Hyo Sung; Byon, Jung Hee; Jin, Gong Yong

    2017-05-01

    To compare the apparent diffusion coefficient (ADC) of upper abdominal organs acquired at different time points, and to investigate the usefulness of normalization. We retrospectively evaluated 58 patients who underwent three rounds of magnetic resonance (MR) imaging including diffusion-weighted imaging of the upper abdomen. MR examinations were performed using three different 3.0 Tesla (T) systems and one 1.5T system, with variable b value combinations and respiratory motion compensation techniques. The ADC values of the upper abdominal organs from three different time points were analyzed, using the ADC values of the paraspinal muscle (ADC_psm) and spleen (ADC_spleen) for normalization. Intraclass correlation coefficients (ICC) and comparison of dependent ICCs were used for statistical analysis. The ICCs of the original ADC and ADC_psm showed fair to substantial agreement, while ADC_spleen showed substantial to almost perfect agreement. The ICC of ADC_spleen of all anatomical regions showed less variability compared with that of the original ADC (P < 0.005). Normalized ADC using the spleen as a reference organ significantly decreased variability in measurement of the upper abdominal organs in different MR systems at different time points and could be regarded as an imaging biomarker for future multicenter, longitudinal studies. Level of Evidence: 5. J. Magn. Reson. Imaging 2017;45:1494-1501. © 2016 International Society for Magnetic Resonance in Medicine.
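
    The normalization step is a simple ratio to the same-examination reference organ. The sketch below shows, with invented ADC values, how a multiplicative inter-scanner offset cancels out of the normalized quantity:

```python
def normalize_adc(adc_organ, adc_reference):
    """Normalized ADC: organ ADC divided by the same-examination reference
    (spleen) ADC, cancelling multiplicative scanner/protocol offsets."""
    return adc_organ / adc_reference

# Illustrative ADCs in x10^-3 mm^2/s (invented): the second system reads
# systematically ~20% higher, mimicking an inter-scanner offset.
liver_a, spleen_a = 1.10, 0.85
liver_b, spleen_b = 1.32, 1.02

raw_shift = abs(liver_b - liver_a) / liver_a
norm_a = normalize_adc(liver_a, spleen_a)
norm_b = normalize_adc(liver_b, spleen_b)
norm_shift = abs(norm_b - norm_a) / norm_a
```

    A purely multiplicative offset vanishes entirely under the ratio; real inter-scanner differences are only partly multiplicative, which is why the study reports reduced rather than eliminated variability.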

  10. Conditional heteroscedasticity as a leading indicator of ecological regime shifts.

    PubMed

    Seekell, David A; Carpenter, Stephen R; Pace, Michael L

    2011-10-01

    Regime shifts are massive, often irreversible, rearrangements of nonlinear ecological processes that occur when systems pass critical transition points. Ecological regime shifts sometimes have severe consequences for human well-being, including eutrophication in lakes, desertification, and species extinctions. Theoretical and laboratory evidence suggests that statistical anomalies may be detectable leading indicators of regime shifts in ecological time series, making it possible to foresee and potentially avert incipient regime shifts. Conditional heteroscedasticity is persistent variance characteristic of time series with clustered volatility. Here, we analyze conditional heteroscedasticity as a potential leading indicator of regime shifts in ecological time series. We evaluate conditional heteroscedasticity by using ecological models with and without four types of critical transition. On approaching transition points, all time series contain significant conditional heteroscedasticity. This signal is detected hundreds of time steps in advance of the regime shift. Time series without regime shifts do not have significant conditional heteroscedasticity. Because probability values are easily associated with tests for conditional heteroscedasticity, detection of false positives in time series without regime shifts is minimized. This property reduces the need for a reference system to compare with the perturbed system.
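
    Conditional heteroscedasticity is commonly quantified with Engle's ARCH LM test, which the probability values mentioned above come from. A one-lag sketch, applied to a synthetic homoscedastic series and a synthetic ARCH(1) series with clustered volatility (the study's ecological models are not reproduced here):

```python
import math
import random

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def arch_lm_1(e):
    """One-lag version of Engle's ARCH LM test: regress e_t^2 on e_{t-1}^2.
    n*R^2 is asymptotically chi-square(1); values far above 3.84 flag
    conditional heteroscedasticity at the 5% level."""
    sq = [v * v for v in e]
    r = pearson(sq[:-1], sq[1:])
    return (len(sq) - 1) * r * r

random.seed(7)
calm = [random.gauss(0.0, 1.0) for _ in range(3000)]  # homoscedastic control

arch, prev = [], 0.0  # ARCH(1): today's variance depends on yesterday's shock
for _ in range(3000):
    prev = random.gauss(0.0, math.sqrt(0.5 + 0.5 * prev * prev))
    arch.append(prev)

stat_calm = arch_lm_1(calm)
stat_arch = arch_lm_1(arch)
```

    Because the statistic has a known null distribution, a p-value accompanies every test, which is the property the abstract credits with suppressing false positives in series without regime shifts.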

  11. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time than the time-marching method and would appear to provide accurate results for problems with nonequilibrium chemistry, free from the effect of local time differences on the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of numerical method is, therefore, problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.

  12. What You Can--and Can't--Do with Three-Wave Panel Data

    ERIC Educational Resources Information Center

    Vaisey, Stephen; Miles, Andrew

    2017-01-01

    The recent change in the general social survey (GSS) to a rotating panel design is a landmark development for social scientists. Sociological methodologists have argued that fixed-effects (FE) models are generally the best starting point for analyzing panel data because they allow analysts to control for unobserved time-constant heterogeneity. We…

  13. Growth of Expressive Syntax in Children with Fragile X Syndrome

    ERIC Educational Resources Information Center

    Komesidou, Rouzana; Brady, Nancy C.; Fleming, Kandace; Esplund, Amy; Warren, Steven F.

    2017-01-01

    Purpose: This research explored syntactic growth in children with fragile X syndrome (FXS) over a 5-year period, and variability in growth in relation to autism symptoms, nonverbal cognition, maternal responsivity, and gender. Method: Language samples at 4 time points from 39 children with FXS, 31 boys and 8 girls, were analyzed using the Index of…

  14. The development and implementation of a method using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    EPA Science Inventory

    It is estimated that protozoan parasites still account for greater than one third of waterborne disease outbreaks reported. Methods used to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminan...

  15. Predictors of Cheating and Cheating Attributions: Does Classroom Context Influence Cheating and Blame for Cheating?

    ERIC Educational Resources Information Center

    Murdock, Tamera B.; Beauchamp, Anne S.; Hinton, Amber M.

    2008-01-01

    The frequency of cheating in today's classrooms undermines educators' efforts and threatens students' learning. Data from 444 high school students in 48 math and science classrooms at two time points were analyzed to examine the classroom and individual influences on students' attributions of blame for cheating and to examine the relationship…

  16. The Conscious System for the Movement Technique: An Ontological and Holistic Alternative for (Spanish) Physical Education in Troubled Times

    ERIC Educational Resources Information Center

    Fernández-Balboa, Juan-Miguel; Prados Megías, Esther

    2014-01-01

    After critically analyzing some of the Spanish physical education's (PE) main justifications and pointing out their problematic effects on teachers and students, the paper proposes an alternative orientation to teaching and learning PE that fosters consciousness, centeredness and transformation. Drawing from arguments and examples taken from…

  17. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-12/31/2015 Column headings are: 1—Contaminant: The name of the contaminant to be analyzed. 2—CAS..., for List 2 Screening Survey, or List 3 Pre-Screen Testing, during the time frame indicated in column 6... paragraph (a)(3) of this section. Samples must be collected at each sample point that is specified in column...

  18. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-12/31/2015 Column headings are: 1—Contaminant: The name of the contaminant to be analyzed. 2—CAS..., for List 2 Screening Survey, or List 3 Pre-Screen Testing, during the time frame indicated in column 6... paragraph (a)(3) of this section. Samples must be collected at each sample point that is specified in column...

  19. 40 CFR 141.40 - Monitoring requirements for unregulated contaminants.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-12/31/2015 Column headings are: 1—Contaminant: The name of the contaminant to be analyzed. 2—CAS..., for List 2 Screening Survey, or List 3 Pre-Screen Testing, during the time frame indicated in column 6... paragraph (a)(3) of this section. Samples must be collected at each sample point that is specified in column...

  20. Wage Differentials by Field of Study--The Case of German University Graduates

    ERIC Educational Resources Information Center

    Grave, Barbara S.; Goerlitz, Katja

    2012-01-01

    Using data on German university graduates, this paper analyzes wage differentials by field of study at labor market entry and five to six years later. At both points in time, graduates from arts/humanities have lower average monthly wages compared to other fields. Blinder-Oaxaca decompositions show that these wage differentials can be explained…

  1. Analysis of high-resolution foreign exchange data of USD-JPY for 13 years

    NASA Astrophysics Data System (ADS)

    Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki

    2003-06-01

    We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY over 13 years to report firm statistical laws in the distributions and correlations of exchange rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.
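
    The kind of conditional statistic the abstract refers to can be sketched in a few lines: estimate, from a price series, how often a move at a given tick scale continues in the direction of the previous move (0.5 means no trend following). This is a hedged illustration on synthetic random-walk data, not the authors' USD-JPY analysis; the `tick` subsampling and the statistic itself are simplifications.

```python
import numpy as np

def trend_following_stat(prices, tick=1):
    """Fraction of moves that continue in the direction of the previous
    move at a given tick scale (0.5 = no trend following)."""
    p = np.asarray(prices, float)[::tick]
    moves = np.sign(np.diff(p))
    moves = moves[moves != 0]              # drop zero moves
    same = moves[1:] == moves[:-1]
    return same.mean()

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(0, 1, 20_000)) + 120.0   # synthetic series
print(round(float(trend_following_stat(walk, tick=8)), 3))
```

    For an uncorrelated random walk the statistic stays near 0.5; a value clearly above 0.5 at some tick scale is the signature of trend-following movement.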

  2. [Visual field progression in glaucoma: cluster analysis].

    PubMed

    Bresson-Dumont, H; Hatton, J; Foucher, J; Fonteneau, M

    2012-11-01

    Visual field progression analysis is one of the key points in glaucoma monitoring, but distinction between true progression and random fluctuation is sometimes difficult. There are several different algorithms but no real consensus for detecting visual field progression. The trend analysis of global indices (MD, sLV) may miss localized deficits or be affected by media opacities. Conversely, point-by-point analysis makes progression difficult to differentiate from physiological variability, particularly when the sensitivity of a point is already low. The goal of our study was to analyse visual field progression with the EyeSuite™ Octopus Perimetry Clusters algorithm in patients with no significant changes in global indices or worsening of the analysis of pointwise linear regression. We analyzed the visual fields of 162 eyes (100 patients - 58 women, 42 men, average age 66.8 ± 10.91) with ocular hypertension or glaucoma. For inclusion, at least six reliable visual fields per eye were required, and the trend analysis (EyeSuite™ Perimetry) of visual field global indices (MD and sLV) could show no significant progression. The analysis of changes in cluster mode was then performed. In a second step, eyes with statistically significant worsening of at least one of their clusters were analyzed point-by-point with the Octopus Field Analysis (OFA). Fifty-four eyes (33.33%) had a significant worsening in some clusters, while their global indices remained stable over time. In this group of patients, glaucoma was more advanced than in the stable group (MD 6.41 dB vs. 2.87 dB); 64.82% (35/54) of the eyes in which the clusters progressed, however, had no statistically significant change in the trend analysis by pointwise linear regression. Most software algorithms for analyzing visual field progression are essentially trend analyses of global indices or point-by-point linear regressions. This study shows the potential role of cluster trend analysis. However, for best results, it is preferable to combine the analyses of several tests with a morphologic examination. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  3. Whole-body tissue distribution of total radioactivity in rats after oral administration of [¹⁴C]-bilastine.

    PubMed

    Lucero, María Luisa; Patterson, Andrew B

    2012-06-01

    This study evaluated the tissue distribution of total radioactivity in male albino, male pigmented, and time-mated female albino rats after oral administration of a single dose of [¹⁴C]-bilastine (20 mg/kg). Although only 1 animal was analyzed at each time point, there were apparent differences in bilastine distribution. Radioactivity was distributed to only a few tissues at low levels in male rats, whereas distribution was more extensive and at higher levels in female rats. This may be a simple sex-related difference. In each group and at each time point, concentrations of radioactivity were high in the liver and kidney, reflecting the role of these organs in the elimination process. In male albino rats, no radioactivity was measurable by 72 hours postdose. In male pigmented rats, only the eye and uveal tract had measurable levels of radioactivity at 24 hours. Measurable levels of radioactivity were retained in these tissues at the final sampling time point (336 hours postdose), indicating a degree of melanin-associated binding. In time-mated female rats, but not in albino or pigmented male rats, there was evidence of low-level passage of radioactivity across the placental barrier into fetal tissues as well as low-level transfer of radioactivity into the brain.

  4. A method for analyzing temporal patterns of variability of a time series from Poincare plots.

    PubMed

    Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E

    2012-07-01

    The Poincaré plot is a popular two-dimensional time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variability. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis: it can be applied more generally (including to Poincaré plots with multiple clusters) and more consistently than the conventional measures, and it can address questions regarding potential structure underlying the variability of a data set.
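
    A minimal sketch of the Poincaré-plot machinery the abstract builds on: construct delay-τ return points (x_i, x_{i+τ}) and the conventional SD1/SD2 dispersion descriptors, evaluated across multiple delays as TPV advocates. This is not the authors' TPV implementation; the synthetic R-R series and the descriptor choices are assumptions.

```python
import numpy as np

def poincare_points(x, tau=1):
    """Delay-tau Poincare plot points (x[i], x[i+tau])."""
    x = np.asarray(x, dtype=float)
    return np.column_stack((x[:-tau], x[tau:]))

def sd1_sd2(x, tau=1):
    """Conventional dispersion descriptors across/along the identity line."""
    p = poincare_points(x, tau)
    d = (p[:, 1] - p[:, 0]) / np.sqrt(2)   # spread across the identity line
    s = (p[:, 1] + p[:, 0]) / np.sqrt(2)   # spread along the identity line
    return d.std(ddof=1), s.std(ddof=1)

# synthetic R-R-like series: slow drift plus beat-to-beat jitter (ms)
rng = np.random.default_rng(0)
t = np.arange(600)
rr = 800 + 50 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 10, t.size)

for tau in (1, 5, 20):                     # multiple delays, as TPV suggests
    sd1, sd2 = sd1_sd2(rr, tau)
    print(tau, round(float(sd1), 1), round(float(sd2), 1))
```

    On such data SD1 tracks short-term jitter while SD2 absorbs the slow drift; how the pair evolves with τ is the kind of multi-delay information the abstract argues the single circle-return plot hides.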

  5. Modification of the G-phonon mode of graphene by nitrogen doping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukashev, Pavel V., E-mail: pavel.lukashev@uni.edu; Hurley, Noah; Zhao, Liuyan

    2016-01-25

    The effect of nitrogen doping on the phonon spectra of graphene is analyzed. In particular, we employ first-principles calculations and scanning Raman analysis to investigate the dependence of phonon frequencies in graphene on the concentration of nitrogen dopants. We demonstrate that the G phonon frequency shows oscillatory behavior as a function of nitrogen concentration. We analyze different mechanisms which could potentially be responsible for this behavior, such as Friedel charge oscillations around the localized nitrogen impurity atom, the bond-length change between the nitrogen impurity and its nearest-neighbor carbon atoms, and the long-range interactions of the nitrogen point defects. We show that the bond-length change and the long-range interaction of point defects are possible mechanisms responsible for the oscillatory behavior of the G frequency as a function of nitrogen concentration. At the same time, Friedel charge oscillations are unlikely to contribute to this behavior.

  6. Insulation bonding test system

    NASA Technical Reports Server (NTRS)

    Beggs, J. M.; Johnston, G. D.; Coleman, A. D.; Portwood, J. N.; Saunders, J. M.; Redmon, J. W.; Porter, A. C. (Inventor)

    1984-01-01

    A method and a system for testing the bonding of foam insulation attached to metal is described. The system involves the use of an impacter which has a calibrated load cell mounted on a plunger and a hammer head mounted on the end of the plunger. When the impacter strikes the insulation at a point to be tested, the load cell measures the force of the impact and the precise time interval during which the hammer head is in contact with the insulation. This information is transmitted as an electrical signal to a load cell amplifier where the signal is conditioned and then transmitted to a fast Fourier transform (FFT) analyzer. The FFT analyzer produces energy spectral density curves which are displayed on a video screen. The termination frequency of the energy spectral density curve may be compared with a predetermined empirical scale to determine whether a high-quality bond, good bond, or debond is present at the point of impact.
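
    The FFT step described above can be sketched as follows: compute a one-sided energy spectral density for a transient force signal and read off a "termination frequency" where the ESD drops below a fraction of its peak. The sample rate, pulse shape, and threshold below are hypothetical; the patent's predetermined empirical scale is not reproduced.

```python
import numpy as np

def energy_spectral_density(signal, fs):
    """One-sided energy spectral density |X(f)|^2 of a transient,
    with X scaled to approximate the continuous-time Fourier transform."""
    n = len(signal)
    X = np.fft.rfft(signal) / fs            # dt scaling
    return np.fft.rfftfreq(n, d=1.0 / fs), np.abs(X) ** 2

def termination_frequency(freqs, esd, rel_threshold=1e-3):
    """Highest frequency at which the ESD still exceeds a fraction of
    its peak -- a simple stand-in for the curve's termination point."""
    above = np.nonzero(esd >= rel_threshold * esd.max())[0]
    return freqs[above[-1]]

fs = 10_000.0                                # assumed sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
pulse = np.exp(-t / 0.002) * np.sin(2 * np.pi * 500 * t)  # synthetic impact
f, esd = energy_spectral_density(pulse, fs)
print(round(float(termination_frequency(f, esd)), 1))
```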

  7. Contaminant transport from point source on water surface in open channel flow with bed absorption

    NASA Astrophysics Data System (ADS)

    Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian

    2018-06-01

    Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for a most typical case, a point-source release on the free water surface in a channel flow with bed absorption, is presented by means of Chatwin's long-time asymptotic technique. Five basic characteristics of Taylor dispersion and the vertical mean concentration distribution with skewness and kurtosis modifications are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution shows an asymptotic Gaussian distribution at large time which is unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are essential for practical applications with strict environmental standards.

  8. A Proof of the Occupancy Principle and the Mean-Transit-Time Theorem for Compartmental Models

    PubMed Central

    RAMAKRISHNAN, RAJASEKHAR; LEONARD, EDWARD F.; DELL, RALPH B.

    2012-01-01

    The occupancy principle and the mean-transit-time theorem are derived for the passage of a tracer through a system that can be described by a general pool model. It is proved, using matrix theory, that if (and only if) tracer entering the system labels equally all tracee fluxes into the system, then the integral of the tracer concentration is the same in all the pools. It is also proved that if, in addition, all flow out of the system is through the observation point, the first moment of the tracer concentration at the observation point can be used to calculate the total amount of tracer in the system. The necessity of this condition is analyzed. Examples are given of models in which the occupancy principle and the mean-transit-time theorem hold or do not hold. PMID:22328793
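
    For a linear pool model dq/dt = A q, the quantity such theorems manipulate is the time-integral of the tracer curves, which has the closed form -A^{-1} q(0). A minimal numeric check of that identity on an assumed two-pool catenary model (pool volumes, which convert amounts into the concentrations the theorems speak of, are omitted):

```python
import numpy as np

def integral_of_tracer(A, q0, t_end=100.0, dt=0.01):
    """Integrate dq/dt = A q, q(0) = q0 with Heun's method and
    accumulate the time-integral of each pool (trapezoid rule)."""
    q = np.array(q0, float)
    total = np.zeros_like(q)
    for _ in range(int(t_end / dt)):
        k1 = A @ q
        q_new = q + 0.5 * dt * (k1 + A @ (q + dt * k1))
        total += 0.5 * dt * (q + q_new)
        q = q_new
    return total

# assumed two-pool catenary model: pool 1 -> pool 2 -> out, bolus into pool 1
A = np.array([[-0.5, 0.0],
              [0.5, -0.25]])
q0 = np.array([1.0, 0.0])
print(integral_of_tracer(A, q0).round(4))   # numerical integral per pool
print((-np.linalg.inv(A) @ q0).round(4))    # closed form -A^{-1} q(0)
```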

  9. The analysis of quantum qutrit entanglements in a qutrit based hyper-sphere in terms of gluing and combining products

    NASA Astrophysics Data System (ADS)

    Duran, Volkan; Gençten, Azmi

    2016-03-01

    In this research the aim is to analyze quantum qutrit entanglements from a new perspective, in terms of the reflection of an n-dimensional sphere, which can be depicted as the set of points equidistant from a fixed central point in three-dimensional Euclidean space with both real and imaginary dimensions, and which can likewise be depicted as two unit spheres sharing the same centre in a dome-shaped projection. In order to analyze quantum qutrit entanglements: (i) a new type of n-dimensional hyper-sphere, an extended version of the Bloch sphere in hyper-space, is defined; (ii) new operators and products in this space, such as the rotation operator and the combining and gluing products, are defined; (iii) the entangled states are analyzed in terms of those products in order to reach a general formula depicting qutrit entanglements, and some new patterns between spheres for analyzing entanglement along different routes in a simpler way in a four-dimensional, time-independent hypersphere.

  10. The analysis of quantum qutrit entanglements in a qutrit based hyper-sphere in terms of gluing and combining products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Volkan, E-mail: volkan.duran8@gmail.com; Gençten, Azmi, E-mail: gencten@omu.edu.tr

    In this research the aim is to analyze quantum qutrit entanglements from a new perspective, in terms of the reflection of an n-dimensional sphere, which can be depicted as the set of points equidistant from a fixed central point in three-dimensional Euclidean space with both real and imaginary dimensions, and which can likewise be depicted as two unit spheres sharing the same centre in a dome-shaped projection. In order to analyze quantum qutrit entanglements: (i) a new type of n-dimensional hyper-sphere, an extended version of the Bloch sphere in hyper-space, is defined; (ii) new operators and products in this space, such as the rotation operator and the combining and gluing products, are defined; (iii) the entangled states are analyzed in terms of those products in order to reach a general formula depicting qutrit entanglements, and some new patterns between spheres for analyzing entanglement along different routes in a simpler way in a four-dimensional, time-independent hypersphere.

  11. Behavioral Effects of a Locomotor-Based Physical Activity Intervention in Preschoolers.

    PubMed

    Burkart, Sarah; Roberts, Jasmin; Davidson, Matthew C; Alhassan, Sofiya

    2018-01-01

    Poor adaptive learning behaviors (ie, distractibility, inattention, and disruption) are associated with behavior problems and underachievement in school, as well as indicating potential attention-deficit hyperactivity disorder. Strategies are needed to limit these behaviors. Physical activity (PA) has been suggested to improve behavior in school-aged children, but little is known about this relationship in preschoolers. This study examined the effects of a PA intervention on classroom behaviors in preschool-aged children. Eight preschool classrooms (n = 71 children; age = 3.8 ± 0.7 y) with children from low socioeconomic environments were randomized to a locomotor-based PA (LB-PA) or unstructured free playtime (UF-PA) group. Both interventions were implemented by classroom teachers and delivered for 30 minutes per day, 5 days per week for 6 months. Classroom behavior was measured in both groups at 3 time points, whereas PA was assessed at 2 time points over a 6-month period and analyzed with hierarchical linear modeling. Linear growth models showed significant changes in hyperactivity (LB-PA: -2.58 points, P = .001; UF-PA: 2.33 points, P = .03), aggression (LB-PA: -2.87 points, P = .01; UF-PA: 0.97 points, P = .38), and inattention (LB-PA: 1.59 points, P < .001; UF-PA: 3.91 points, P < .001). This research provides promising evidence for the efficacy of LB-PA as a strategy to improve classroom behavior in preschoolers.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrennikov, Andrei; Volovich, Yaroslav

    We analyze dynamical consequences of the conjecture that there exists a fundamental (indivisible) quantum of time. In particular, we study the problem of the discrete energy levels of the hydrogen atom. We are able to reconstruct the potential which, in the discrete-time formalism, leads to the energy levels of the unperturbed hydrogen atom. We also consider the linear energy levels of the quantum harmonic oscillator and show how they are produced in the discrete-time formalism. More generally, we show that in the discrete-time formalism finite motion in a central potential leads to a discrete energy spectrum, a property that is characteristic of quantum mechanical theory. Thus deterministic (but discrete-time!) dynamics is compatible with discrete energy levels.

  13. Elastohydrodynamic lubrication theory

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Dowson, D.

    1982-01-01

    The isothermal elastohydrodynamic lubrication (EHL) of a point contact was analyzed numerically by simultaneously solving the elasticity and Reynolds equations. In the elasticity analysis the contact zone was divided into equal rectangular areas, and it was assumed that a uniform pressure was applied over each area. In the numerical analysis of the Reynolds equation, a phi analysis (where phi is equal to the pressure times the film thickness to the 3/2 power) was used to help the relaxation process. The EHL point contact analysis is applicable for the entire range of elliptical parameters and is valid for any combination of rolling and sliding within the contact.

  14. Evolution of Motor Control: From Reflexes and Motor Programs to the Equilibrium-Point Hypothesis.

    PubMed

    Latash, Mark L

    2008-01-01

    This brief review analyzes the evolution of motor control theories along two lines that emphasize active (motor programs) and reactive (reflexes) features of voluntary movements. It suggests that the only contemporary hypothesis that integrates both approaches in a fruitful way is the equilibrium-point (EP) hypothesis. Physical, physiological, and behavioral foundations of the EP-hypothesis are considered, as well as relations between the EP-hypothesis and recent developments of the notion of motor synergies. The paper ends with a brief review of the criticisms of the EP-hypothesis and the challenges that the hypothesis faces at this time.

  15. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds; however, computing such bounds numerically is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation decreases the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound from their collisions.
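
    The flavor of a harmonic balance computation, shown on the standard Duffing example rather than the paper's Gröbner-base machinery: with the one-harmonic ansatz x = A cos(ωt) for x'' + x + εx³ = F cos(ωt), balancing the cos(ωt) terms leaves the cubic (3/4)εA³ + (1-ω²)A - F = 0, which can be solved directly.

```python
import numpy as np

def hb_amplitudes(eps, omega, F):
    """Real roots A of the one-harmonic balance equation
       (3/4)*eps*A^3 + (1 - omega^2)*A - F = 0
    for the Duffing oscillator x'' + x + eps*x^3 = F*cos(omega*t)."""
    coeffs = [0.75 * eps, 0.0, 1.0 - omega**2, -F]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Far from resonance a single response amplitude exists;
# near resonance the cubic can yield three (the jump phenomenon).
print(hb_amplitudes(0.1, 0.5, 1.0))
print(hb_amplitudes(0.1, 1.1, 0.1))
```

    The truncation error the abstract refers to comes from discarding the cos(3ωt) term generated by the cubic nonlinearity; bounding its effect is what the Gröbner-base representation is for.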

  16. Ground-state fidelity and bipartite entanglement in the Bose-Hubbard model.

    PubMed

    Buonsante, P; Vezzani, A

    2007-03-16

    We analyze the quantum phase transition in the Bose-Hubbard model borrowing two tools from quantum-information theory, i.e., the ground-state fidelity and entanglement measures. We consider systems at unitary filling comprising up to 50 sites and show for the first time that a finite-size scaling analysis of these quantities provides excellent estimates for the quantum critical point. We conclude that fidelity is particularly suited for revealing a quantum phase transition and pinning down the critical point thereof, while the success of entanglement measures depends on the mechanisms governing the transition.
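
    A toy version of the fidelity diagnostic: exact diagonalization of a two-site, two-boson Bose-Hubbard model and the overlap between ground states at nearby interaction strengths. Two sites have no true phase transition, so this only illustrates the quantity whose finite-size scaling the paper studies; the values of J, U, and dU are arbitrary.

```python
import numpy as np

def ground_state(U, J=1.0):
    """Ground state of the 2-site, 2-boson Bose-Hubbard model,
    basis ordered (|2,0>, |1,1>, |0,2>)."""
    h = np.sqrt(2.0) * J                 # hopping matrix element
    H = np.array([[U,  -h,  0.0],
                  [-h, 0.0, -h],
                  [0.0, -h,  U]])
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                    # lowest eigenvector

def fidelity(U, dU=0.05):
    """Overlap |<psi0(U)|psi0(U+dU)>| between nearby ground states."""
    return abs(ground_state(U) @ ground_state(U + dU))

for U in (0.5, 2.0, 8.0):
    print(U, round(float(fidelity(U)), 6))
```

    A dip of the fidelity below its background value as U is swept is the signature used to pin down a critical point in the full many-site problem.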

  17. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural-network, and time-series-analysis methods; however, all of them have shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probabilities of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal parameter values. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to perform well: it obtains the optimal parameter values faster and has higher prediction accuracy, so it can be used for real-time traffic flow prediction. PMID:27872637
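
    The flow-equation idea can be sketched as a linear prediction: target flow at t+1 ≈ Σ_i p_i · (upstream flow i at t), with the transfer probabilities p_i fitted from history. For brevity this sketch fits p by ordinary least squares rather than the paper's Newton interior-point method, and all data are synthetic.

```python
import numpy as np

def estimate_transfer_probs(upstream_hist, target_hist):
    """Least-squares estimate of transfer probabilities p solving
    upstream_hist @ p ~= target_hist, clipped to [0, 1]."""
    p, *_ = np.linalg.lstsq(upstream_hist, target_hist, rcond=None)
    return np.clip(p, 0.0, 1.0)

def predict_next(upstream_now, p):
    """Predicted target-road flow at the next time step (flow equation)."""
    return float(upstream_now @ p)

rng = np.random.default_rng(1)
true_p = np.array([0.6, 0.3])                 # hypothetical split ratios
up = rng.uniform(100, 500, size=(50, 2))      # 50 past intervals, 2 roads
tgt = up @ true_p + rng.normal(0, 5, 50)      # noisy observed target flow
p_hat = estimate_transfer_probs(up, tgt)
print(p_hat.round(2), round(predict_next(np.array([300.0, 200.0]), p_hat), 1))
```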

  18. A longitudinal investigation of depressive symptoms in undergraduate students of pharmacy in Syria.

    PubMed

    Gonçalves, Vânia; Zidan, Amani; Issa, Mona; Barah, Faraj

    2012-05-01

    This prospective longitudinal study investigated depressive symptoms and their association with students' demographic, academic, and health factors in undergraduate students of pharmacy in Syria. Students attending any year (1st to 5th year) were assessed in the first semester (time 1) and in the second semester (time 2). An academic year comprises two semesters of 16 weeks each. Data for 450 students were analyzed at time 1, and 262 students were assessed at both time points. Our results showed that most of the students experienced depressive symptoms, with a substantial percentage presenting moderate to severe levels of symptoms (35% of the 450 students at time 1; 23% of the 262 students at time 2). Across the two semesters, a significant decrease in depressive symptoms was observed for students with complete data at the two time points. Depressive symptoms at time 2 increased significantly with increasing depressive scores at time 1 and decreasing students' expectations about their academic performance. Our results support the clear need for dynamic, full-time, and accessible psychological services at the university to promote and assess mental health and to deliver psychological interventions to students in need.

  19. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
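
    The criterion can be stated compactly in code. The hedged sketch below evaluates the cardinality-weighted cost for a candidate partition and finds the exact optimum by exhaustive search on a tiny instance (feasible only for small n; the paper's contribution is a polynomial-time 2-approximation, which is not reproduced here).

```python
import numpy as np
from itertools import combinations

def cost(points, idx2, c1):
    """Criterion value: sum over both clusters of cardinality-weighted
    sums of squared distances to the cluster center. Cluster 1 uses the
    given center c1; cluster 2 uses its own mean."""
    pts = np.asarray(points, float)
    mask = np.zeros(len(pts), bool)
    mask[list(idx2)] = True
    c2_pts, c1_pts = pts[mask], pts[~mask]
    w1 = len(c1_pts) * ((c1_pts - c1) ** 2).sum()
    w2 = len(c2_pts) * ((c2_pts - c2_pts.mean(axis=0)) ** 2).sum()
    return w1 + w2

def best_partition(points, m, c1):
    """Exhaustive optimum for a given cardinality m of cluster 2."""
    n = len(points)
    return min(combinations(range(n), m), key=lambda s: cost(points, s, c1))

pts = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
print(best_partition(pts, 3, np.zeros(2)))   # indices of cluster 2
```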

  20. Adolescent health-risk behavior and community disorder.

    PubMed

    Wiehe, Sarah E; Kwan, Mei-Po; Wilson, Jeff; Fortenberry, J Dennis

    2013-01-01

    Various forms of community disorder are associated with health outcomes, but little is known about how the dynamic context where an adolescent spends time relates to her health-related behaviors. Assess whether exposure to contexts associated with crime (as a marker of community disorder) correlates with self-reported health-related behaviors among adolescent girls. Girls (N = 52), aged 14-17, were recruited from a single geographic urban area and monitored for 1 week using a GPS-enabled cell phone. Adolescents completed an audio computer-assisted self-administered interview survey on substance use (cigarette, alcohol, or marijuana use) and sexual intercourse in the last 30 days. In addition to recorded home and school addresses, phones transmitted location data every 5 minutes (path points). Using ArcGIS, we defined community disorder as aggregated point-level Unified Crime Report data within a 200-meter Euclidian buffer from home, school, and each path point. Using Stata, we analyzed how exposures to areas of higher crime prevalence differed among girls who reported each behavior or not. Participants lived and spent time in areas with variable crime prevalence within 200 meters of their home, school, and path points. Significant differences in exposure occurred based on home location among girls who reported any substance use or not (p = 0.04) and sexual intercourse or not (p = 0.01). Differences in exposure by school and path points were only significant among girls reporting any substance use or not (p = 0.03 and p = 0.02, respectively). Exposure also varied by school/non-school day as well as time of day. Adolescent travel patterns are not random. Furthermore, the crime context where an adolescent spends time relates to her health-related behavior. These data may guide policy relating to crime control and inform time- and space-specific interventions to improve adolescent health.

  1. Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noise.

    DTIC Science & Technology

    1983-05-01

    ...proximity of the turbulent jet flow to the shielding surface, the edge will interact... The Spectral Dynamics DSP 360 is a two-channel real-time analyzer incor[porating]... However, this is achieved with a very long shield, of length equal to 190 times the slit... unorthodox configurations. The emphasis is placed on the concept... 16 dB/dec... With this solid-gaseous combination, a shield of length 14... with a burner attached to the trailing edge

  2. The effects of fundus photography on the multifocal electroretinogram.

    PubMed

    Suresh, Sandip; Tienor, Brian J; Smith, Scott D; Lee, Michael S

    2016-02-01

    To determine the effect of flash fundus photography (FFP) on the multifocal electroretinogram (mfERG). Ten subjects underwent mfERG testing on three separate dates. Subjects received either mfERG without FFP, mfERG at 5 and 15 min after FFP, or mfERG at 30 and 45 min after FFP on each date. The FFP groups received 10 fundus photographs followed by mfERG testing, first of the right eye then of the left eye 10 min later. Data were averaged and analyzed in six concentric rings at each time point. Average amplitude and implicit times of the N1, P1, and N2 peaks for each concentric ring at each time point after FFP were compared to baseline. Flash fundus photography did not lead to a significant change of amplitude or implicit times of N1, P1, or N2 at 5 min after light exposure. These findings suggest that it is acceptable to perform mfERG testing without delay after performance of FFP.

  3. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
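
    For contrast with the paper's singularity-free construction, the classical feasible-direction flow it improves upon uses the orthogonal projection P = I - Jᵀ(JJᵀ)⁻¹J onto the constraint's tangent space, which requires J to have full row rank (the regularity assumption the paper is designed to avoid). A sketch on a linear equality constraint:

```python
import numpy as np

def projected_flow(x0, grad_f, J, steps=2000, dt=1e-2):
    """Euler-integrate dx/dt = -(I - J^T (J J^T)^{-1} J) grad_f(x),
    the classical feasible-direction flow for the linear constraint
    J x = b. Assumes J has full row rank."""
    x = np.asarray(x0, float)
    P = np.eye(len(x)) - J.T @ np.linalg.inv(J @ J.T) @ J
    for _ in range(steps):
        x = x - dt * (P @ grad_f(x))     # move only along the constraint
    return x

# minimize f(x, y) = (x-2)^2 + y^2 subject to x + y = 1
J = np.array([[1.0, 1.0]])
grad_f = lambda v: np.array([2 * (v[0] - 2), 2 * v[1]])
x = projected_flow(np.array([0.0, 1.0]), grad_f, J)
print(x.round(3))
```

    The projection keeps x + y constant along the trajectory, so a feasible start stays feasible; when the rows of J become dependent, JJᵀ is singular and this update breaks down, which is the case the abstract addresses.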

  4. Fast, axis-agnostic, dynamically summarized storage and retrieval for mass spectrometry data.

    PubMed

    Handy, Kyle; Rosen, Jebediah; Gillan, André; Smith, Rob

    2017-01-01

    Mass spectrometry, a popular technique for elucidating the molecular contents of experimental samples, creates data sets comprising millions of three-dimensional (m/z, retention time, intensity) data points that correspond to the types and quantities of analyzed molecules. Open and commercial MS data formats are arranged by retention time, creating latency when accessing data across multiple m/z. Existing MS storage and retrieval methods have been developed to overcome the limitations of retention-time-based data formats, but do not provide certain features such as dynamic summarization and storage and retrieval of point metadata (such as signal cluster membership), precluding efficient viewing applications and certain data-processing approaches. This manuscript describes MzTree, a spatial database designed to provide real-time storage and retrieval of dynamically summarized standard and augmented MS data with fast performance in both m/z and RT directions. Performance is reported on real data, with comparisons against related published retrieval systems.
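
    A toy illustration of why axis-agnostic spatial indexing matters for (m/z, RT) data: a uniform grid of buckets answers window queries efficiently in either direction, unlike a purely RT-ordered file. This is not MzTree's actual structure (which adds dynamic summarization and a hierarchical index); the cell sizes and data here are arbitrary.

```python
from collections import defaultdict

class GridIndex:
    """Toy spatial bucket index over (mz, rt, intensity) points."""
    def __init__(self, cell_mz=10.0, cell_rt=30.0):
        self.cell = (cell_mz, cell_rt)
        self.buckets = defaultdict(list)

    def insert(self, mz, rt, intensity):
        key = (int(mz // self.cell[0]), int(rt // self.cell[1]))
        self.buckets[key].append((mz, rt, intensity))

    def query(self, mz_lo, mz_hi, rt_lo, rt_hi):
        """All points inside the window, scanning only touched cells."""
        out = []
        for i in range(int(mz_lo // self.cell[0]), int(mz_hi // self.cell[0]) + 1):
            for j in range(int(rt_lo // self.cell[1]), int(rt_hi // self.cell[1]) + 1):
                for mz, rt, inten in self.buckets.get((i, j), []):
                    if mz_lo <= mz <= mz_hi and rt_lo <= rt <= rt_hi:
                        out.append((mz, rt, inten))
        return out

idx = GridIndex()
for k in range(1000):
    idx.insert(mz=100 + (k % 97) * 7.3, rt=(k % 53) * 11.0, intensity=k)
print(len(idx.query(150, 200, 100, 200)))
```

    Because buckets are keyed on both coordinates, an m/z-narrow query costs roughly the same as an RT-narrow one, which is the access symmetry the abstract is after.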

  5. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
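
    The registration-free idea can be sketched directly: baseline lengths between feature points are invariant under rigid motion, so comparing the two epochs' length matrices flags moved points without any alignment step. The point sets, tolerance, and the simulated 5 cm displacement below are hypothetical.

```python
import numpy as np

def baseline_lengths(points):
    """Pairwise distances (baselines) between feature points in one scan."""
    p = np.asarray(points, float)
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def changed_baselines(scan_a, scan_b, tol=0.01):
    """Index pairs whose baseline length differs by more than tol
    (lengths are invariant under rigid motion, so no registration)."""
    da, db = baseline_lengths(scan_a), baseline_lengths(scan_b)
    i, j = np.where(np.triu(np.abs(da - db) > tol, k=1))
    return list(zip(i.tolist(), j.tolist()))

# epoch B = rigidly moved copy of A, except point 2 shifted by 5 cm
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], float)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
B = A @ R.T + np.array([0.5, -0.2, 0.1])
B[2] += [0.05, 0, 0]
print(changed_baselines(A, B))
```

    Every flagged pair involves the displaced point; pairs among the unmoved points survive the rigid transform untouched.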

  6. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  7. Effects of tranexamic acid on coagulation indexes of patients undergoing heart valve replacement surgery under cardiopulmonary bypass.

    PubMed

    Liu, Fei; Xu, Dong; Zhang, Kefeng; Zhang, Jian

    2016-12-01

    This study aims to explore the effects of tranexamic acid on the coagulation indexes of patients undergoing heart valve replacement surgery under cardiopulmonary bypass (CPB). One hundred patients who met the inclusion criteria were selected and divided into a tranexamic acid group and a non-tranexamic acid group. They all underwent heart valve replacement surgery under CPB. Patients in the tranexamic acid group were intravenously injected with 1 g of tranexamic acid (100 mL) after anesthesia induction but before skin incision, and again after the neutralization of heparin. Patients in the non-tranexamic acid group were given 100 mL of normal saline at the corresponding time points. The coagulation indexes of the two groups were then analyzed. The activated clotting time (ACT) of the two groups was within the normal range before CPB, while four coagulation indexes, prothrombin time (PT), activated partial thromboplastin time (APTT), international normalized ratio (INR), and fibrinogen (FIB), increased significantly after surgery; the PT and INR of the tranexamic acid group declined markedly after surgery. All the findings suggest that the application of tranexamic acid in heart valve replacement surgery under CPB can effectively reduce intraoperative and postoperative blood loss. © The Author(s) 2016.

  8. Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.

    PubMed

    Sharma, Shefali; Calixte, Rose; Finamore, Peter S

    2016-01-01

    To determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. A retrospective cohort study to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy (Canadian Task Force II-2). A university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from a patient database that has been maintained since the inception of the robotics program at Winthrop University Hospital and electronic medical records. Based on additional procedures performed, 4 groups of patients were created (A-D). Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve that revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. 
When the data for operative time was fit to an inverse curve, a learning rate of 11 cases was appreciated. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which is reflective of anesthesia and nursing staff, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.

  9. Talking about Twisters: Relations between Mothers' and Children's Contributions to Conversations about a Devastating Tornado

    ERIC Educational Resources Information Center

    Bauer, Patricia J.; Burch, Melissa M.; Van Abbema, Dana L.; Ackil, Jennifer K.

    2007-01-01

    Mother-child dyads who experienced a devastating tornado talked about the storm and about two affectively more positive or neutral events at each of two time points: 4 months and 10 months after the storm. The conversations were analyzed to determine whether mothers' and/or children's contributions differed as a function of event type and whether…

  10. Health-Related Variables and Academic Performance among First-Year College Students: Implications for Sleep and Other Behaviors.

    ERIC Educational Resources Information Center

    Trockel, Mickey T.; Barnes, Michael D.; Egget, Dennis L.

    2000-01-01

    Analyzed the effect of several health behaviors and health-related variables on college freshmen's grade point averages (GPAs). Survey data indicated that sleep habits, particularly wake-up time, accounted for the most variance in GPAs. Higher GPAs related to strength training and study of spiritually oriented material. Lower GPAs related to…

  11. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay.

    PubMed

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter; Troldborg, Carl Peder; Fodgaard, Henrik; van Os, Sylvana; Pedersen, Christian

    2017-08-01

    High-sensitivity cardiac troponin assay development enables determination of biological variation in healthy populations, more accurate interpretation of clinical results, and points towards earlier diagnosis and rule-out of acute myocardial infarction. In this paper, we report on preliminary tests of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by a factor of 5 for the standard troponin I assay and by a factor of 3 for the research high-sensitivity troponin I assay, compared to flash lamp excitation. The obtained limit of detection was 0.22 ng/L measured on plasma with the research high-sensitivity troponin I assay and 1.9 ng/L measured on tris-saline-azide buffer containing bovine serum albumin with the standard troponin I assay. We discuss the optimization of time-resolved detection of lanthanide fluorescence based on the time constants of the system and analyze the background and noise sources in a heterogeneous fluoroimmunoassay. We determine the limiting factors and their impact on the measurement performance. The suggested model can be generally applied to fluoroimmunoassays employing the dry-cup concept.

  12. Impact of the Brain Injury Family Intervention (BIFI) training on rehabilitation providers: A mixed methods study.

    PubMed

    Meixner, Cara; O'Donoghue, Cynthia R; Hart, Vesna

    2017-01-01

    The psychological impact of TBI is vast, leading to adverse effects on survivors and their caregivers. Unhealthy family functioning may be mitigated by therapeutic strategies, particularly interdisciplinary family systems approaches like the well-documented Brain Injury Family Intervention (BIFI). Little is known about the experience of providers who offer such interventions. This mixed methods study aims to demonstrate that a structured three-day training on the BIFI protocol improves providers' knowledge and confidence in working with survivors and families, and that this outcome is sustainable. Participants were 34 providers who took part in an intensive training and completed a web-based survey at four time points. Quantitative data were analyzed via Wilcoxon signed-rank tests and a binomial test of proportions. Qualitative data were analyzed according to rigorous coding procedures. Providers' knowledge of brain injury and their ability to conceptualize treatment models for survivors and their families increased significantly and largely remained consistent over time. Qualitative data point to additional gains, such as an understanding of family systems. Past studies establish the BIFI as an evidence-based intervention. This study supports the effectiveness of the training and is the first to demonstrate its short- and long-term benefit for providers.

  13. A model of cloud application assignments in software-defined storages

    NASA Astrophysics Data System (ADS)

    Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; E Shukhman, Alexander

    2017-01-01

    The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimize their placement in storage systems. In this paper, we describe a generalized model of cloud applications including three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The novelty of the model lies in describing, at the same time, the placement of application data and the state of the virtual environment, taking the network topology into account. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and improves the performance of user request processing. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.

  14. Physical activity, sedentary behavior, and academic performance in Finnish children.

    PubMed

    Syväoja, Heidi J; Kantomaa, Marko T; Ahonen, Timo; Hakonen, Harto; Kankaanpää, Anna; Tammelin, Tuija H

    2013-11-01

    This study aimed to determine the relationships between objectively measured and self-reported physical activity, sedentary behavior, and academic performance in Finnish children. Two hundred and seventy-seven children from five schools in the Jyväskylä school district in Finland (58% of the 475 eligible students, mean age = 12.2 yr, 56% girls) participated in the study in the spring of 2011. Self-reported physical activity and screen time were evaluated with questions used in the WHO Health Behavior in School-Aged Children study. Children's physical activity and sedentary time were measured objectively by using an ActiGraph GT1M/GT3X accelerometer for seven consecutive days. A cutoff value of 2296 counts per minute was used for moderate-to-vigorous physical activity (MVPA) and 100 counts per minute for sedentary time. Grade point averages were provided by the education services of the city of Jyväskylä. ANOVA and linear regression analysis were used to analyze the relationships among physical activity, sedentary behavior, and academic performance. Objectively measured MVPA (P = 0.955) and sedentary time (P = 0.285) were not associated with grade point average. However, self-reported MVPA had an inverse U-shaped curvilinear association with grade point average (P = 0.001), and screen time had a linear negative association with grade point average (P = 0.002), after adjusting for sex, children's learning difficulties, highest level of parental education, and amount of sleep. In this study, self-reported physical activity was directly, and screen time inversely, associated with academic achievement. Objectively measured physical activity and sedentary time were not associated with academic achievement. Objective and subjective measures may reflect different constructs and contexts of physical activity and sedentary behavior in association with academic outcomes.
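    The inverse-U (curvilinear) association reported above is typically tested by adding a squared physical-activity term to the regression; a significantly negative quadratic coefficient indicates that GPA peaks at moderate activity. The data below are simulated for illustration only; the variable names and effect sizes are assumptions, not the study's estimates.

```python
import numpy as np

# Simulated data: self-reported MVPA (h/week) and grade point average,
# constructed so that moderate activity coincides with the highest GPA.
rng = np.random.default_rng(1)
mvpa = rng.uniform(0, 14, 300)
gpa = 7.0 + 0.4 * mvpa - 0.03 * mvpa**2 + rng.normal(0, 0.3, 300)

# Fitting a quadratic captures the inverse-U shape; a negative squared
# term is the usual evidence for curvilinearity.
c2, c1, c0 = np.polyfit(mvpa, gpa, 2)
peak = -c1 / (2 * c2)   # vertex of the parabola: MVPA with highest predicted GPA
print(f"quadratic term {c2:+.3f} (inverse U if negative), peak at {peak:.1f} h/week")
```

    In the study itself the model was additionally adjusted for sex, learning difficulties, parental education, and sleep; the sketch omits those covariates.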

  15. Techniques used for the analysis of oculometer eye-scanning data obtained from an air traffic control display

    NASA Technical Reports Server (NTRS)

    Crawford, Daniel J.; Burdette, Daniel W.; Capron, William R.

    1993-01-01

    The methodology and techniques used to collect and analyze look-point position data from a real-time ATC display-format comparison experiment are documented. That study compared the delivery precision and controller workload of three final approach spacing aid display formats. Using an oculometer, controller lookpoint position data were collected, associated with gaze objects (e.g., moving aircraft) on the ATC display, and analyzed to determine eye-scan behavior. The equipment involved and algorithms for saving, synchronizing with the ATC simulation output, and filtering the data are described. Target (gaze object) and cross-check scanning identification algorithms are also presented. Data tables are provided of total dwell times, average dwell times, and cross-check scans. Flow charts, block diagrams, file record descriptors, and source code are included. The techniques and data presented are intended to benefit researchers in other studies that incorporate non-stationary gaze objects and oculometer equipment.

  16. [Preemptive analgesia with loxoprofen sodium orally in extraction of impacted teeth].

    PubMed

    Meng, T; Zhang, Z Y; Zhang, X; Chen, Y H; Li, J Q; Chen, Q; Liu, W S; Gao, W

    2018-02-18

    To investigate the effectiveness of preemptive analgesia with orally administered loxoprofen sodium, a non-steroidal anti-inflammatory drug, in extractions of impacted mandibular third molars. Questionnaires about postoperative pain were given to patients whose impacted mandibular third molars were extracted from July 2017 to August 2017 in the First Clinical Division of Peking University School and Hospital of Stomatology. All the patients underwent routine clinical and imaging examinations. After their impacted mandibular third molars were extracted, the questionnaires were sent to them, filled in by the patients on their own, and returned one week later. Of the 120 questionnaires sent, 105 were returned, of which 98 were filled in completely. According to the inclusion and exclusion criteria, 66 questionnaires were selected for this study. According to the time when the patients first took loxoprofen sodium orally, the patients were divided into 3 groups. The first group was for patients who did not take loxoprofen sodium during their extractions (non-medicine group). The second group was for patients who took 60 mg of loxoprofen sodium 30 min before their extractions (preoperative group). The third group was for patients who took 60 mg of loxoprofen sodium 30 min after their extractions (postoperative group). The operation time among the 3 groups was analyzed by the Kruskal-Wallis method. The postoperative time points were 2, 4, 12, 24 and 48 h after operation. The visual analogue scale (VAS) scores for postoperative pain in each group at the different postoperative time points were analyzed by the Friedman method. At each postoperative time point, VAS scores in the different groups were analyzed by the Kruskal-Wallis method. The numbers of patients taking loxoprofen sodium home and drug adverse reactions were also analyzed. 
    The operation time of the 3 groups was 15.0 (5.0, 30.0) min, with no significant differences (P=0.848). VAS scores of the non-medicine group 2, 4, 12, 24 and 48 h after operation were 1.75 (0.1, 10.0), 6.25 (1.5, 10.0), 2.00 (0.1, 8.0), 2.00 (0.1, 6.0) and 0.5 (0.1, 5.5) respectively, with significant differences (P<0.001). The VAS score 4 h after operation was higher than the VAS scores at the other postoperative time points (P<0.005). Four hours after the operations, the VAS scores of the preoperative group [2.0 (0.1, 10.0)] and the postoperative group [2.0 (0.1, 5.0)] were significantly lower than those of the non-medicine group [6.25 (1.5, 10.0)] (P<0.001). The numbers of patients taking loxoprofen sodium home were 9 (40.9%) in the non-medicine group, 5 (21.8%) in the preoperative group and 7 (33.3%) in the postoperative group. Fewer patients had drug adverse reactions in the preoperative group (n=3, 13.0%) and the postoperative group (n=4, 19.0%) than in the non-medicine group (n=8, 36.4%). There were two protocols of preemptive analgesia with loxoprofen sodium taken orally in extractions of impacted mandibular third molars: taking 60 mg of loxoprofen sodium orally 30 min before the extractions, and taking 60 mg orally 30 min after the extractions. Both preemptive analgesia protocols significantly decreased postoperative pain.
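    Both nonparametric tests used above are available in SciPy: the Friedman test compares repeated VAS measurements across time points within one group, and the Kruskal-Wallis test compares independent groups at a single time point. The scores below are simulated with a 4-h pain peak, loosely mimicking the pattern reported for the non-medicine group; they are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated VAS pain scores (0-10) at 2, 4, 12, 24 and 48 h for 20 patients.
vas = np.clip(np.column_stack([
    rng.normal(2.0, 1.0, 20),   # 2 h
    rng.normal(6.0, 1.5, 20),   # 4 h  (pain peak)
    rng.normal(2.5, 1.0, 20),   # 12 h
    rng.normal(2.0, 1.0, 20),   # 24 h
    rng.normal(0.8, 0.6, 20),   # 48 h
]), 0, 10)

# Friedman test: repeated measures on the same patients across time points.
fried = stats.friedmanchisquare(*vas.T)

# Kruskal-Wallis test: independent groups compared at one time point --
# here three simulated samples standing in for the three medication groups at 4 h.
kw = stats.kruskal(rng.normal(2, 1, 22), rng.normal(2, 1, 23), rng.normal(6, 1.5, 22))
print(f"Friedman p = {fried.pvalue:.2g}, Kruskal-Wallis p = {kw.pvalue:.2g}")
```
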

  17. The Propagation of Movement Variability in Time: A Methodological Approach for Discrete Movements with Multiple Degrees of Freedom.

    PubMed

    Krüger, Melanie; Straube, Andreas; Eggert, Thomas

    2017-01-01

    In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method make it possible to detect and interpret changes in movement coordination due to, e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas for cyclical movements (e.g., locomotion) mathematical approaches exist to investigate the propagation of movement variability in time (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied for discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis makes it possible to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple-DoF effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, as well as altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is shown that, by using canonical correlation analysis, alterations in movement coordination that indicate changes in the control strategy concerning the use of motor redundancy can be detected, which represents an important methodological advance in the context of neuromechanics.

  18. Mineral content changes in bone associated with damage induced by the electron beam.

    PubMed

    Bloebaum, Roy D; Holmes, Jennifer L; Skedros, John G

    2005-01-01

    Energy-dispersive x-ray (EDX) spectroscopy and backscattered electron (BSE) imaging are finding increased use for determining mineral content in microscopic regions of bone. Electron beam bombardment, however, can damage the tissue, leading to erroneous interpretations of mineral content. We performed elemental (EDX) and mineral content (BSE) analyses on bone tissue in order to quantify observable deleterious effects in the context of (1) prolonged scanning time, (2) scan versus point (spot) mode, (3) low versus high magnification, and (4) embedding in poly-methylmethacrylate (PMMA). Undemineralized cortical bone specimens from adult human femora were examined in three groups: 200x embedded, 200x unembedded, and 1000x embedded. Coupled BSE/EDX analyses were conducted five consecutive times, with no location analyzed more than five times. Variation in the relative proportions of calcium (Ca), phosphorus (P), and carbon (C) was measured using EDX spectroscopy, and mineral content variations were inferred from changes in mean gray levels ("atomic number contrast") in BSE images captured at 20 keV. In point mode at 200x, the embedded specimens exhibited a significant increase in Ca by the second measurement (7.2%, p < 0.05); in scan mode, a small and statistically nonsignificant increase (1.0%) was seen by the second measurement. Changes in P were similar, although the increases were less. The apparent increases in Ca and P likely result from decreases in C: -3.2% (p < 0.05) in point mode and -0.3% in scan mode by the second measurement. Analysis of unembedded specimens showed similar results. In contrast to embedded specimens at 200x, 1000x data showed significantly larger variations in the proportions of Ca, P, and C by the second or third measurement in scan and point mode. At both magnifications, BSE image gray level values increased (suggesting increased mineral content) by the second measurement, with increases up to 23% in point mode. 
These results show that mineral content measurements can be reliable when using coupled BSE/EDX analyses in PMMA-embedded bone if lower magnifications are used in scan mode and if prolonged exposure to the electron beam is avoided. When point mode is used to analyze minute regions, adjustments in accelerating voltages and probe current may be required to minimize damage.

  19. Alar-columellar and lateral nostril changes following tongue-in-groove rhinoplasty.

    PubMed

    Shah, Ajul; Pfaff, Miles; Kinsman, Gianna; Steinbacher, Derek M

    2015-04-01

    Repositioning the medial crura cephalically onto the caudal septum (tongue-in-groove; TIG) allows alteration of the columella, ala, and nasal tip to address alar-columellar disproportion as seen from the lateral view. To date, quantitative analyses of nostril dimension, alar-columellar relationship, and nasal tip changes following the TIG rhinoplasty technique have not been described. The present study aims to evaluate post-operative lateral morphometric changes following TIG. Pre- and post-operative lateral views of a series of consecutive patients who underwent TIG rhinoplasty were produced from 3D images at multiple time points (≤2 weeks, 4-10 weeks, and >10 weeks post-operatively) for analysis. The 3D images were converted to 2D and set to scale. Exposed lateral nostril area, alar-columellar disproportion (divided into superior and inferior heights), nasolabial angle, nostril height, and nostril length were calculated and statistically analyzed using a pairwise t test. A P ≤ 0.05 was considered statistically significant. Ninety-four lateral views were analyzed from 20 patients (16 females; median age: 31.8). One patient had a history of current tobacco cigarette use. Lateral nostril area decreased significantly at all post-operative time points. Alar-columellar disproportion was reduced following TIG at all time points. The nasolabial angle increased significantly at ≤2 weeks, 4-10 weeks, and >10 weeks post-operatively. Nostril height and nostril length decreased at all post-operative time points. Morphometric analysis reveals a reduction in alar-columellar disproportion and lateral nostril show following TIG rhinoplasty. Tip rotation, as a function of nasolabial angle, also increased. These results provide quantitative substantiation for qualitative descriptions attributed to the TIG technique. 
Future studies will focus on area and volumetric measurements, and assessment of long-term stability.

  20. Predictive Trip Detection for Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Rankin, Drew J.; Jiang, Jin

    2016-08-01

    This paper investigates the use of a Kalman filter (KF) to predict, within the shutdown system (SDS) of a nuclear power plant (NPP), whether safety parameter measurements have reached a trip set-point. In addition, least squares (LS) estimation compensates for prediction error due to system-model mismatch. The motivation behind predictive shutdown is to reduce the amount of time between the occurrence of a fault or failure and the time of trip detection, referred to as time-to-trip. These reductions in time-to-trip can ultimately lead to increases in safety and productivity margins. The proposed predictive SDS differs from conventional SDSs in that it compares point-predictions of the measurements, rather than sensor measurements, against trip set-points. The predictive SDS is validated through simulation and experiments for the steam generator water level safety parameter. Performance of the proposed predictive SDS is compared against a benchmark conventional SDS with respect to time-to-trip. In addition, this paper analyzes prediction uncertainty as well as the conditions under which it is possible to achieve a reduced time-to-trip. Simulation results demonstrate that on average the predictive SDS reduces time-to-trip by an amount of time equal to the length of the prediction horizon and that the distribution of times-to-trip is approximately Gaussian. Experimental results reveal that a reduced time-to-trip can be achieved in a real-world system with unknown system-model mismatch and that the predictive SDS can be implemented with a scan time of under 100 ms. Thus, this paper is a proof of concept for KF/LS-based predictive trip detection.
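    A minimal sketch of the core idea (not the paper's SDS implementation; model, noise levels and set-point are made up): a constant-velocity Kalman filter tracks a ramping safety parameter, and the k-step-ahead point prediction, rather than the raw measurement, is compared against the trip set-point, so the trip fires roughly one prediction horizon earlier.

```python
import numpy as np

dt, q, r = 0.1, 1e-3, 0.05          # time step, process and measurement noise
F = np.array([[1, dt], [0, 1]])     # state transition: level and ramp rate
H = np.array([[1.0, 0.0]])          # only the level is measured
x, P = np.zeros(2), np.eye(2)

SETPOINT, HORIZON = 1.0, 5          # trip level and prediction steps
rng = np.random.default_rng(4)
trip_at = None
for step in range(200):
    z = 0.01 * step + rng.normal(0, np.sqrt(r))   # noisy ramping measurement
    # Kalman predict/update
    x, P = F @ x, F @ P @ F.T + q * np.eye(2)
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    # k-step-ahead point prediction of the level, compared to the set-point
    pred = (np.linalg.matrix_power(F, HORIZON) @ x)[0]
    if pred >= SETPOINT:
        trip_at = step
        break
print("predictive trip raised at step", trip_at)
```

    The raw measurement first crosses the set-point near step 100; the predictive trip fires earlier by about the horizon, mirroring the paper's simulation finding.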

  1. Analysis of dangerous area of single berth oil tanker operations based on CFD

    NASA Astrophysics Data System (ADS)

    Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin

    2018-04-01

    Taking a single oil tanker at berth during liquid cargo handling as the research object, we analyzed the theory of VOCs diffusion for a single-berth oil tanker, built a mesh model of VOCs diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOCs concentration at five detection points changes with time under specific influencing factors. From the diffusion of VOCs we delineated the dangerous area of single-berth oil tanker operations, so as to ensure the safe operation of oil tankers.

  2. Improved Extreme Learning Machine based on the Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang

    2018-03-01

    The extreme learning machine (ELM) and its improved variants have several weak points, such as computational complexity and learning error. After in-depth analysis, and drawing on the importance of hidden nodes in SVM, a novel sensitivity analysis method is proposed that matches human cognitive habits. Based on this, an improved ELM is proposed that can remove hidden nodes before the learning error threshold is reached and can efficiently manage the number of hidden nodes, so as to improve performance. Comparative tests show that it performs better in learning time, accuracy and other respects.

  3. Economic Feasibility of Wireless Sensor Network-Based Service Provision in a Duopoly Setting with a Monopolist Operator

    PubMed Central

    Romero, Julián; Sacoto-Cabrera, Erwin J.

    2017-01-01

    We analyze the feasibility of providing Wireless Sensor Network-data-based services in an Internet of Things scenario from an economical point of view. The scenario has two competing service providers with their own private sensor networks, a network operator and final users. The scenario is analyzed as two games using game theory. In the first game, sensors decide whether to subscribe to the network operator to upload the collected sensing data, based on a utility function related to the mean service time and the price charged by the operator. In the second game, users decide whether to subscribe to the sensor-data-based service of the service providers, based on a Logit discrete choice model related to the quality of the data collected and the subscription price. The sink and user subscription stages are analyzed using population games and discrete choice models, while the network operator's and service providers' pricing stages are analyzed using optimization and Nash equilibrium concepts, respectively. The model is shown to be feasible from an economic point of view for all the actors provided there are enough interested final users, and it opens the possibility of developing more efficient models with different types of services. PMID:29186847
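    The Logit discrete choice rule assumed for the user stage can be illustrated directly. The utility specification (quality minus price) and all parameter values below are assumptions for illustration, not the paper's calibrated model; the last alternative is opting out, with utility normalized to zero.

```python
import numpy as np

def logit_shares(qualities, prices, beta=1.0, quality_weight=1.0):
    """Logit choice probabilities: each user picks provider i (or no
    subscription, utility 0) with probability proportional to exp(beta*u_i)."""
    utilities = quality_weight * np.asarray(qualities) - np.asarray(prices)
    expu = np.exp(beta * np.append(utilities, 0.0))   # last entry: opt out
    return expu / expu.sum()

# Two competing service providers: better data quality at a higher price
# versus cheaper but lower quality (all numbers hypothetical).
shares = logit_shares(qualities=[3.0, 2.0], prices=[2.0, 1.2], beta=1.5)
print("provider 1, provider 2, no subscription:", np.round(shares, 3))
```

    In the full model these shares feed back into the providers' pricing game, whose Nash equilibrium determines whether the service is economically viable.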

  4. Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.

    2013-12-01

    We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimation of carrier phase biases and ionosphere delay pre-cleans raw satellite measurements. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. 
We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green's functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and the seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in Java, called 'GPS Cockpit,' which is available for download.

  5. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that, in order to avoid point-optimization at the sampled design points in multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.
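    The core idea of optimizing against the whole Mach range rather than any single sampled point can be illustrated with a toy worst-case objective. This sketch uses an invented one-variable drag model; it is not the paper's Euler-flow solver, and it replaces the profile method's smart descent direction with a plain max over sample points:

```python
def profile_objective(drag_fn, design, mach_points):
    """Worst-case drag over the sampled Mach range; minimizing this value
    discourages point-optimization at any single sampled Mach number."""
    return max(drag_fn(design, m) for m in mach_points)

# Toy drag model: a bowl in Mach whose minimum shifts with the design variable.
def drag(design, mach):
    (x,) = design                     # a single free design variable
    return 0.01 + (mach - 0.7 - 0.1 * x) ** 2

# More sampled design points (3) than free design variables (1),
# as the paper's counting argument requires.
mach_points = [0.70, 0.75, 0.80]
worst = profile_objective(drag, (0.5,), mach_points)
```

Here the design `x = 0.5` centers the drag bowl at Mach 0.75, so the worst case over the range is attained at the two endpoints.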

  6. Quantifying human response capabilities towards tsunami threats at community level

    NASA Astrophysics Data System (ADS)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    Decision makers at the community level need detailed information on tsunami risks in their area. Knowledge of the potential hazard impact, exposed elements such as people, critical facilities and lifelines, people's coping capacity and recovery potential is crucial to plan precautionary measures for adaptation and to mitigate the potential impacts of tsunamis on society and the environment. A crucial point within a people-centred tsunami risk assessment is to quantify the human response capabilities towards tsunami threats. Based on this quantification and its spatial representation in maps, tsunami-affected and safe areas, difficult-to-evacuate areas, evacuation target points and evacuation routes can be assigned and used as an important contribution to, e.g., community-level evacuation planning. A major component in the quantification of human response capabilities towards tsunami impacts is the factor time. The human response capabilities depend on the estimated time of arrival (ETA) of a tsunami, the time until technical or natural warning signs (ToNW) can be received, the reaction time (RT) of the population (human understanding of a tsunami warning and the decision to take appropriate action), the evacuation time (ET, the time people need to reach a safe area) and the actually available response time (RsT = ETA - ToNW - RT). If RsT is larger than ET, people in the respective areas are able to reach a safe area and rescue themselves. Critical areas have RsT values equal to or smaller than ET, and hence people within these areas will be directly affected by a tsunami. Quantifying the factor time is challenging, and an attempt at it is presented here. The ETA can be derived by analyzing pre-computed tsunami scenarios for a respective area. For ToNW we assume that the early warning center is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes. 
Quantifying RT is difficult, as intrinsic human factors such as educational level, beliefs, tsunami knowledge and experience, among others, play a role; an attempt to quantify this variable under high uncertainty is also presented. Quantifying ET is based on GIS modelling using a Cost Weighted Distance approach. The basic principle is to define the best evacuation path from a given point to the next safe area (shelter location), i.e., the fastest path from that point to the shelter location. The impacts of land cover, slope, population density, and population age and gender distribution are taken into account, as literature studies prove these factors to be highly important. Knowing the fastest path and the distance to the next safe area, together with a spatially distributed pattern of evacuation speed, delivers the time needed from each location to a safe area. By comparing the obtained time value with RsT, the coverage area of an evacuation target point (safe area) can be assigned; incorporating knowledge of the capacity of each evacuation target point, the respective coverage area is refined. Hence areas with weak, moderate and good human response capabilities can be detected. This allows calculation of the potential number of people affected (dead or injured) and the number of people dislocated. First results for Kuta (Bali) for a worst-case tsunami event deliver approximately 25,000 people affected when RT = 0 minutes (direct evacuation upon receiving a tsunami warning) to 120,000 when RT > ETA (no evacuation action until the tsunami hits the land). Additionally, the fastest evacuation routes to the evacuation target points can be assigned. Areas with weak response capabilities can be designated as priority areas in which to install, e.g., additional evacuation target points or to increase tsunami knowledge and awareness to promote a faster reaction time. 
In particular, analyzing the underlying socio-economic properties that cause deficiencies in responding to a tsunami threat can yield valuable information and directly inform the planning of adaptation measures. Keywords: Community level, Risk and vulnerability assessment, Early warning, Disaster management, Tsunami, Indonesia
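    The time-budget logic at the heart of this assessment (RsT = ETA - ToNW - RT, with evacuation feasible only if RsT exceeds ET) can be stated directly in code. A minimal sketch with illustrative numbers, not the authors' GIS model:

```python
def response_time(eta_min, tonw_min, rt_min):
    """Actually available response time: RsT = ETA - ToNW - RT (minutes)."""
    return eta_min - tonw_min - rt_min

def can_evacuate(eta_min, tonw_min, rt_min, et_min):
    """People reach a safe area only if RsT is larger than the evacuation time ET."""
    return response_time(eta_min, tonw_min, rt_min) > et_min

# Example: 30 min tsunami arrival, 5 min warning issuance (the Indonesian
# presidential decree's limit), 10 min reaction time, 12 min to reach shelter.
ok = can_evacuate(30, 5, 10, 12)   # RsT = 15 > ET = 12, so evacuation succeeds
```

Raising RT from 10 to 20 minutes in the same example flips the outcome, which is exactly the sensitivity to reaction time the abstract's Kuta results illustrate.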

  7. Time-optimal Aircraft Pursuit-evasion with a Weapon Envelope Constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including a realistic weapon envelope, is analyzed using differential game theory. Sixth-order nonlinear point-mass vehicle models are employed, and an arbitrary weapon envelope geometry is allowed. The performance index is a linear combination of flight time and the square of the vehicle acceleration. A closed-form solution to this high-order differential game is then obtained using feedback linearization. The solution takes the form of a feedback guidance law together with a quartic polynomial for time-to-go. Due to its modest computational requirements, this nonlinear guidance law is useful for on-board real-time implementation.
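    The abstract reports that time-to-go follows from a quartic polynomial; on board, that means extracting its smallest positive real root. A generic stdlib-only root finder sketches this step; the quartic's coefficients below are hypothetical, not the paper's guidance law:

```python
def smallest_positive_root(coeffs, t_max=1000.0, steps=100000):
    """Smallest positive real root of the polynomial whose coefficients
    are given highest degree first: scan (0, t_max] for a sign change,
    then refine it by bisection. Returns None if no root is found."""
    def p(t):
        acc = 0.0
        for c in coeffs:              # Horner evaluation
            acc = acc * t + c
        return acc
    prev_t, prev_v = 0.0, p(0.0)
    for k in range(1, steps + 1):
        t = t_max * k / steps
        v = p(t)
        if v == 0.0:
            return t                  # grid point hit the root exactly
        if prev_v * v < 0.0:          # sign change: refine by bisection
            lo, hi = prev_t, t
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if p(lo) * p(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev_t, prev_v = t, v
    return None

# Hypothetical quartic t**4 - 16 = 0, whose positive root (time-to-go) is 2.0.
tgo = smallest_positive_root([1.0, 0.0, 0.0, 0.0, -16.0])
```

A production guidance loop would use a closed-form quartic solver for speed, but the scan-and-bisect form makes the root-selection rule (smallest positive real root) explicit.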

  8. [Local fractal analysis of noise-like time series by all permutations method for 1-115 min periods].

    PubMed

    Panchelyuga, V A; Panchelyuga, M S

    2015-01-01

    Results of local fractal analysis of 329-per-day time series of 239Pu alpha-decay rate fluctuations by means of the all permutations method (APM) are presented. The APM analysis reveals a steady set of frequencies in the time series. The coincidence of this frequency set with the Earth's natural oscillations is demonstrated. A short review of works by different authors who analyzed time series of fluctuations in processes of different natures is given. We show that the periods observed in those works correspond to the periods revealed in our study, which points to a common mechanism behind the observed phenomenon.

  9. Impact of Different Initial Epinephrine Treatment Time Points on the Early Postresuscitative Hemodynamic Status of Children With Traumatic Out-of-hospital Cardiac Arrest.

    PubMed

    Lin, Yan-Ren; Syue, Yuan-Jhen; Buddhakosai, Waradee; Lu, Huai-En; Chang, Chin-Fu; Chang, Chih-Yu; Chen, Cheng Hsu; Chen, Wen-Liang; Li, Chao-Jui

    2016-03-01

    The postresuscitative hemodynamic status of children with traumatic out-of-hospital cardiac arrest (OHCA) might be impacted by the early administration of epinephrine, but this topic has not been well addressed. The aim of this study was to analyze the early postresuscitative hemodynamics, survival, and neurologic outcome according to different time points of first epinephrine treatment among children with traumatic OHCA. Information on 388 children who presented to the emergency departments of 3 medical centers and who were treated with epinephrine for traumatic OHCA during the study period (2003-2012) was retrospectively collected. The early postresuscitative hemodynamic features (cardiac functions, end-organ perfusion, and consciousness), survival, and neurologic outcome according to different time points of first epinephrine treatment (early: <15, intermediate: 15-30, and late: >30 minutes after collapse) were analyzed. Among 165 children who achieved sustained return of spontaneous circulation, 38 children (9.8%) survived to discharge and 12 children (3.1%) had good neurologic outcomes. Early epinephrine increased the postresuscitative heart rate and blood pressure in the first 30 minutes, but ultimately impaired end-organ perfusion (decreased urine output and initial creatinine clearance) (all P < 0.05). Early epinephrine treatment increased the chance of achieving sustained return of spontaneous circulation, but did not increase the rates of survival and good neurologic outcome. Early epinephrine temporarily increased heart rate and blood pressure in the first 30 minutes of the postresuscitative period, but impaired end-organ perfusion. Most importantly, the rates of survival and good neurologic outcome were not significantly increased by early epinephrine administration.

  10. Impact of Different Initial Epinephrine Treatment Time Points on the Early Postresuscitative Hemodynamic Status of Children With Traumatic Out-of-hospital Cardiac Arrest

    PubMed Central

    Lin, Yan-Ren; Syue, Yuan-Jhen; Buddhakosai, Waradee; Lu, Huai-En; Chang, Chin-Fu; Chang, Chih-Yu; Chen, Cheng Hsu; Chen, Wen-Liang; Li, Chao-Jui

    2016-01-01

    Abstract The postresuscitative hemodynamic status of children with traumatic out-of-hospital cardiac arrest (OHCA) might be impacted by the early administration of epinephrine, but this topic has not been well addressed. The aim of this study was to analyze the early postresuscitative hemodynamics, survival, and neurologic outcome according to different time points of first epinephrine treatment among children with traumatic OHCA. Information on 388 children who presented to the emergency departments of 3 medical centers and who were treated with epinephrine for traumatic OHCA during the study period (2003–2012) was retrospectively collected. The early postresuscitative hemodynamic features (cardiac functions, end-organ perfusion, and consciousness), survival, and neurologic outcome according to different time points of first epinephrine treatment (early: <15, intermediate: 15–30, and late: >30 minutes after collapse) were analyzed. Among 165 children who achieved sustained return of spontaneous circulation, 38 children (9.8%) survived to discharge and 12 children (3.1%) had good neurologic outcomes. Early epinephrine increased the postresuscitative heart rate and blood pressure in the first 30 minutes, but ultimately impaired end-organ perfusion (decreased urine output and initial creatinine clearance) (all P < 0.05). Early epinephrine treatment increased the chance of achieving sustained return of spontaneous circulation, but did not increase the rates of survival and good neurologic outcome. Early epinephrine temporarily increased heart rate and blood pressure in the first 30 minutes of the postresuscitative period, but impaired end-organ perfusion. Most importantly, the rates of survival and good neurologic outcome were not significantly increased by early epinephrine administration. PMID:27015217

  11. Computing and visualizing time-varying merge trees for high-dimensional data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Heine, Christian; Weber, Gunther H.

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
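    To picture the underlying structure, a merge tree for a superlevel-set filtration can be computed with a union-find sweep from high to low function values: each vertex births a component, and each edge joining two active components records a merge (the component with the lower-valued root dies, per the elder rule). The sketch below is a generic illustration of that idea on a small graph, not the authors' time-varying tracking code:

```python
def merge_tree_events(values, edges):
    """Sweep a threshold from high to low over a scalar field on a graph
    and return the merge events as (dying_root, surviving_root) pairs."""
    order = sorted(range(len(values)), key=lambda v: -values[v])
    adj = {v: [] for v in range(len(values))}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent = {}
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    events, active = [], set()
    for v in order:                    # activate vertices in decreasing value
        parent[v] = v
        active.add(v)
        for u in adj[v]:
            if u in active:
                ru, rv = find(u), find(v)
                if ru != rv:
                    # elder rule: the root with the lower value dies
                    loser, winner = sorted((ru, rv), key=lambda r: values[r])
                    parent[loser] = winner
                    events.append((loser, winner))
    return events

# Path graph 0-1-2 with a saddle at vertex 1: the peak at vertex 2
# merges into the higher peak at vertex 0.
events = merge_tree_events([3, 1, 2], [(0, 1), (1, 2)])
```

Tracking features over time, as in the paper, would then match such events (and the subtrees they delimit) between consecutive time steps.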

  12. Scanner baseliner monitoring and control in high volume manufacturing

    NASA Astrophysics Data System (ADS)

    Samudrala, Pavan; Chung, Woong Jae; Aung, Nyan; Subramany, Lokesh; Gao, Haiyong; Gomez, Juan-Manuel

    2016-03-01

    We analyze the performance of different customized models on baseliner overlay data and demonstrate a reduction in overlay residuals of ~10%. Smart sampling sets were assessed and compared with full-wafer measurements. We found that grid performance can still be maintained by going to one-third of the total sampling points, while reducing metrology time by 60%. We also demonstrate the feasibility of achieving time-to-time matching using the scanner fleet manager, and thus identify tool drifts even when the tool monitoring controls are within spec limits. We also explore the variation of the scanner feedback constants with illumination sources.

  13. The 1983 tail-era series. Volume 1: ISEE 3 plasma

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Phillips, J. L.

    1991-01-01

    Observations from the ISEE 3 electron analyzer are presented in plots. Electrons were measured in 15 continuous energy levels between 8.5 and 1140 eV during individual 3-sec spacecraft spins. Times associated with each data point are the beginning time of the 3 sec data collection interval. Moments calculated from the measured distribution function are shown as density, temperature, velocity, and velocity azimuthal angle. Spacecraft ephemeris is shown at the bottom in GSE and GSM coordinates in units of Earth radii, with vertical ticks on the time axis corresponding to the printed positions.

  14. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  15. Developing new mathematical method for search of the time series periodicity with deletions and insertions

    NASA Astrophysics Data System (ADS)

    Korotkov, E. V.; Korotkova, M. A.

    2017-01-01

    The purpose of this study was to detect latent periodicity in the presence of deletions or insertions in the analyzed data, when the positions of the deletions or insertions are unknown. A mathematical method was developed to search for periodicity in numerical series using dynamic programming and random matrices. The developed method was applied to search for periodicity in the Euro/Dollar exchange rate since 2001. The presence of periodicity with a period length of 24 h in the analyzed financial series was shown; this periodicity can be detected only when insertions and deletions are taken into account. The results of this study show that the periodicity phase shifts depend on the observation time. The reasons for the existence of periodicity in financial series are discussed.
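    For contrast with the paper's dynamic-programming method, the baseline it improves on is a plain period search that cannot absorb insertions or deletions. A naive autocorrelation sketch (hypothetical helper name, toy data):

```python
def autocorr_period(x, max_period):
    """Naive period estimate: the lag (>= 2) with the highest normalized
    autocorrelation. Unlike a dynamic-programming search, this cannot
    re-align the series across insertions or deletions."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    var = sum(d * d for d in dev) or 1.0
    best_lag, best_r = None, float("-inf")
    for lag in range(2, max_period + 1):
        r = sum(dev[i] * dev[i + lag] for i in range(n - lag)) / var
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# A clean 24-sample cycle (one 'hour' per sample) repeated six times.
series = [i % 24 for i in range(24 * 6)]
period = autocorr_period(series, 40)
```

On clean data this recovers the 24 h period; a single deletion mid-series shifts the phase and degrades the correlation peak, which is precisely the failure mode the paper's insertion/deletion-aware method addresses.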

  16. Rapid and safe learning of robotic gastrectomy for gastric cancer: multidimensional analysis in a comparison with laparoscopic gastrectomy.

    PubMed

    Kim, H-I; Park, M S; Song, K J; Woo, Y; Hyung, W J

    2014-10-01

    The learning curve of robotic gastrectomy has not yet been evaluated in comparison with the laparoscopic approach. We compared the learning curves of robotic gastrectomy and laparoscopic gastrectomy based on operation time and surgical success. We analyzed 172 robotic and 481 laparoscopic distal gastrectomies performed by a single surgeon from May 2003 to April 2009. The operation time was analyzed using a moving average and non-linear regression analysis. Surgical success was evaluated by a cumulative sum plot with a target failure rate of 10%. Surgical failure was defined as laparoscopic or open conversion, insufficient lymph node harvest for staging, resection margin involvement, postoperative morbidity, and mortality. Moving average and non-linear regression analyses indicated a stable state for operation time at 95 and 121 cases in robotic gastrectomy, and 270 and 262 cases in laparoscopic gastrectomy, respectively. The cumulative sum plot identified no cut-off point for surgical success in robotic gastrectomy and 80 cases in laparoscopic gastrectomy. Excluding the initial 148 laparoscopic gastrectomies that were performed before the first robotic gastrectomy, the two groups showed a similar number of cases to reach a steady state in operation time, and showed no cut-off point in the analysis of surgical success. The experience of laparoscopic surgery could affect the learning process of robotic gastrectomy: an experienced laparoscopic surgeon requires fewer cases of robotic gastrectomy to reach a steady state. Moreover, the surgical outcomes of robotic gastrectomy were satisfactory. Copyright © 2013 Elsevier Ltd. All rights reserved.
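    A cumulative sum plot against a 10% target failure rate, as used here to judge surgical success, can be sketched as follows. This is a generic CUSUM variant; the authors' exact charting parameters are not given in the abstract:

```python
def cusum_failures(outcomes, target_rate=0.10):
    """Cumulative sum plot for monitoring surgical success: each failure
    adds (1 - p0) and each success subtracts p0, so the curve trends
    downward once the observed failure rate falls below the target p0."""
    s, curve = 0.0, []
    for failed in outcomes:
        s += (1.0 if failed else 0.0) - target_rate
        curve.append(s)
    return curve

# 1 failure in the first 10 cases lands exactly on the 10% target,
# so the curve returns to zero.
curve = cusum_failures([True] + [False] * 9)
```

A learning-curve cut-off point is then read off the plot as the case number after which the curve stays on a downward trend.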

  17. Thermal pretreatment of a high lignin SSF digester residue to increase its softening point

    DOE PAGES

    Howe, Daniel; Garcia-Perez, Manuel; Taasevigen, Danny; ...

    2016-03-24

    Residues high in lignin and ash generated from the simultaneous saccharification and fermentation of corn stover were thermally pretreated in an inert (N2) atmosphere to study the effect of time and temperature on their softening points. These residues are difficult to feed into gasifiers due to premature thermal degradation and the formation of reactive liquids in the feed lines, leading to plugging. The untreated and treated residues were characterized by proximate and ultimate analysis, and then analyzed via TGA, DSC, 13C NMR, Py-GC–MS, CHNO/S, and TMA. Interpretation of the compositional analysis indicates that the weight loss observed during pretreatment is mainly due to the thermal decomposition and volatilization of the hemicelluloses and amorphous cellulose fractions. Fixed carbon increases in the pretreated material, mostly due to a concentration effect rather than the formation of new poly-aromatic material. The optimal processing conditions to minimize the production of carbonyl groups in the pretreated samples were 300 °C for 30 min. Results showed that the softening point of the material could be increased from 187 °C to 250 °C, and that under the experimental conditions studied, pretreatment temperature plays a more important role than time. The increase in softening point was mainly due to the formation of covalent bonds in the lignin structures and the removal of low-molecular-weight volatile intermediates.

  18. Theoretical relation between halo current-plasma energy displacement/deformation in EAST

    NASA Astrophysics Data System (ADS)

    Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen

    2018-04-01

    In this paper, a theoretical model for calculating halo current is developed. This work is novel in that no theoretical calculations of halo current have been reported so far; this is the first use of a theoretical approach. The research started by calculating points for plasma energy in terms of poloidal and toroidal magnetic field orientations, and was extended to calculate the halo current and to develop the theoretical model. Two cases were considered for analyzing the plasma energy when it flows downward/upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated, and mathematical formulations were derived. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. At first, the halo current was established on the outer plate in the clockwise direction. The maximum halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to establish a theoretical relation with experimental results so as to evaluate, as a precaution, the plasma behavior in any tokamak.

  19. Trend Extraction of Understanding Degree of Classes through Analyses of Questionnaires on Teaching Skills and the Uppermost Important Points

    NASA Astrophysics Data System (ADS)

    Koike, Katsuaki; Mori, Kazuya; Yamao, Toshitaka; Fujimi, Toshio

    As an activity of one working group for the Good Practice program at Kumamoto Univ., we proposed a questionnaire survey on the degree of understanding of the uppermost important points of each class, in addition to the usual class-evaluation questionnaire. Each class lists three uppermost important points that are essential for understanding the class contents. The degree of understanding is classified into four levels: full, mostly, insufficient, and none at all. By analyzing the replies for 124 classes in the 2008 school year with a regression model, the understanding degrees of bachelor students in the faculty of engineering were shown to be significantly affected by the degree of difficulty, the effectiveness of audiovisual aids, self-study time, and class attendance.

  20. A guide for recording esthetic and biologic changes with photographs

    Treesearch

    Arthur W. Magill; R.H. Twiss

    1965-01-01

    Photography has long been a useful tool for recording and analyzing environmental conditions. Permanent camera points can be established to help detect and analyze changes in the esthetics and ecology of wildland resources. This note describes the usefulness of permanent camera points and outlines procedures for establishing points and recording data.

  1. Preliminary Results on Thermal Shock Behavior of CuZnAl Shape Memory Alloy Using a Solar Concentrator as Heating Source

    NASA Astrophysics Data System (ADS)

    Tudora, C.; Abrudeanu, M.; Stanciu, S.; Anghel, D.; Plaiaşu, G. A.; Rizea, V.; Ştirbu, I.; Cimpoeşu, N.

    2018-06-01

    It is widely accepted that martensitic transformation can be induced by temperature variation and by stress. Using a solar concentrator, we managed to increase the material surface temperature (to 573 and 873 K, respectively) in very short periods of time in order to analyze the material behavior under thermal shock. The heating/cooling process was recorded and analyzed during the experiments. The material surface was examined before and after the thermal shocks from a microstructural point of view using scanning electron microscopy (SEM) and atomic force microscopy (AFM). The experiments follow the material behavior during fast heating and suggest the possibility of activating smart materials using solar heat for aerospace applications.

  2. Singularities of Floquet scattering and tunneling

    NASA Astrophysics Data System (ADS)

    Landa, H.

    2018-04-01

    We study quasibound states and scattering with short-range potentials in three dimensions, subject to an axial periodic driving. We find that poles of the scattering S matrix can cross the real energy axis as a function of the drive amplitude, making the S matrix nonanalytic at a singular point. For the corresponding quasibound states that can tunnel out of (or get captured within) a potential well, this results in a discontinuous jump in both the angular momentum and energy of emitted (absorbed) waves. We also analyze elastic and inelastic scattering of slow particles in the time-dependent potential. For a drive amplitude at the singular point, there is a total absorption of incoming low-energy (s wave) particles and their conversion to high-energy outgoing (mostly p ) waves. We examine the relation of such Floquet singularities, lacking in an effective time-independent approximation, with well-known "spectral singularities" (or "exceptional points"). These results are based on an analytic approach for obtaining eigensolutions of time-dependent periodic Hamiltonians with mixed cylindrical and spherical symmetry, and apply broadly to particles interacting via power-law forces and subject to periodic fields, e.g., co-trapped ions and atoms.

  3. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.

  4. Theory of Turing Patterns on Time Varying Networks.

    PubMed

    Petit, Julien; Lauwens, Ben; Fanelli, Duccio; Carletti, Timoteo

    2017-10-06

    The process of pattern formation for a multispecies model anchored on a time varying network is studied. A nonhomogeneous perturbation superposed on a homogeneous stable fixed point can be amplified following the Turing mechanism of instability, solely instigated by the network dynamics. By properly tuning the frequency of the imposed network evolution, one can make the examined system behave as its averaged counterpart over a finite time window. This is the key observation to derive a closed analytical prediction for the onset of the instability in the time dependent framework. Continuously and piecewise constant periodic time varying networks are analyzed, setting the framework for the proposed approach. The extension to nonperiodic settings is also discussed.

  5. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to analysis of the hypothalamus-pituitary-thyroid (HPT) system for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (a rate-independent hysteresis effect). When these main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  6. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its advantages of high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and can flexibly adjust the imaging depth and distance to achieve different working modes. An imaging experiment with small imaging depth was carried out, aimed at a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. In this paper, the calculation method for the 3D point cloud based on the triangle method is analyzed, and a 15 m depth slice of the target 3D point cloud is obtained using two frames, with a distance precision better than 0.5 m. The influences of signal-to-noise ratio, illumination uniformity and image brightness on distance accuracy are analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  7. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under low-delay real-time conditions in a resource-constrained environment. The comparison is performed on the same content using a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Several standards are compared: MPEG-2, MPEG-4, MPEG-4 AVC (H.264), and H.263. The metrics used in the comparison include SSIM, VQM, and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation into the coding-scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.

  8. Chiral dynamics in the low-temperature phase of QCD

    NASA Astrophysics Data System (ADS)

    Brandt, Bastian B.; Francis, Anthony; Meyer, Harvey B.; Robaina, Daniel

    2014-09-01

    We investigate the low-temperature phase of QCD and the crossover region with two light flavors of quarks. The chiral expansion around the point (T,m=0) in the temperature vs quark-mass plane indicates that a sharp real-time excitation exists with the quantum numbers of the pion. An exact sum rule is derived for the thermal modification of the spectral function associated with the axial charge density; the (dominant) pion pole contribution obeys the sum rule. We determine the two parameters of the pion dispersion relation using lattice QCD simulations and test the applicability of the chiral expansion. The time-dependent correlators are also analyzed using the maximum entropy method, yielding consistent results. Finally, we test the predictions of the chiral expansion around the point (T=0,m=0) for the temperature dependence of static observables.

  9. Using High-Content Imaging to Analyze Toxicological Tipping Points (ICTATT meeting China)

    EPA Science Inventory

    Presentation at the International Conference on Toxicological Alternatives & Translational Toxicology (ICTATT), held in China, discussing the possibility of using high-content imaging to analyze toxicological tipping points.

  10. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement; please see Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214
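
    The abstract does not specify the NSE algorithm itself, but the kind of saving it reports, a cheap per-sample update instead of a full transform per window, can be illustrated with a classic sliding DFT, which tracks one frequency bin in O(1) per incoming sample (an illustration only, not the paper's estimator):

```python
import numpy as np

def sliding_dft(x, n, k):
    """Track DFT bin k of an n-sample sliding window in O(1) per sample.

    A generic illustration of recursive spectral updating; NOT the NSE
    algorithm from the paper, whose internals the abstract does not give.
    """
    w = np.exp(2j * np.pi * k / n)   # per-sample phase rotation
    buf = np.zeros(n)                # circular buffer of the last n samples
    xk = 0j
    out = []
    for i, s in enumerate(x):
        old = buf[i % n]             # sample leaving the window
        buf[i % n] = s               # sample entering the window
        xk = (xk + s - old) * w      # recursive bin update
        out.append(xk)
    return np.asarray(out)

fs, n, k = 977.0, 256, 12                    # sample rate (Hz), window, bin
t = np.arange(4 * n) / fs
sig = np.sin(2 * np.pi * (k * fs / n) * t)   # tone centered on bin k
est = sliding_dft(sig, n, k)                 # one update per sample
```

    After the first n samples the running value matches an n-point FFT of the current window at bin k, so a bank of K such recursions yields a K-bin spectrum at O(K) work per sample instead of O(N log N) per window, which is the flavor of trade-off the 3.29 μs vs 504.5 μs result reflects.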

  11. An Intrinsic Role of Beta Oscillations in Memory for Time Estimation.

    PubMed

    Wiener, Martin; Parikh, Alomi; Krakow, Arielle; Coslett, H Branch

    2018-05-22

    The neural mechanisms underlying time perception are of vital importance to a comprehensive understanding of behavior and cognition. Recent work has suggested a supramodal role for beta oscillations in measuring temporal intervals. However, the precise function of beta oscillations and whether their manipulation alters timing has yet to be determined. To accomplish this, we first re-analyzed two separate EEG datasets and demonstrated that beta oscillations are associated with the retention and comparison of a memory standard for duration. We next conducted a study of 20 human participants using transcranial alternating current stimulation (tACS) over frontocentral cortex at alpha and beta frequencies during a visual temporal bisection task, finding that beta stimulation exclusively shifts the perception of time such that stimuli are reported as longer in duration. Finally, we decomposed trialwise choice data with a drift diffusion model of timing, revealing that the shift in timing is caused by a change in the starting point of accumulation, rather than the drift rate or threshold. Our results provide evidence for the intrinsic involvement of beta oscillations in the perception of time, and point to a specific role for beta oscillations in the encoding and retention of memory for temporal intervals.
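
    The drift diffusion decomposition can be made concrete with a toy simulation (all parameter values hypothetical, not fitted values from the study): shifting the starting point of evidence accumulation toward the "long" boundary increases "long" reports without touching the drift rate or the threshold.

```python
import random

def ddm_p_long(start, drift=0.0, threshold=1.0, dt=0.01, sigma=1.0,
               n_trials=5000, seed=7):
    """Fraction of 'long' responses from a two-boundary diffusion.

    Evidence starts at `start` and diffuses until it crosses +threshold
    ('long') or -threshold ('short'). All parameter values here are
    hypothetical, chosen for illustration only.
    """
    rng = random.Random(seed)
    longs = 0
    for _ in range(n_trials):
        x = start
        while -threshold < x < threshold:
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        longs += x >= threshold
    return longs / n_trials

p_neutral = ddm_p_long(start=0.0)   # unbiased starting point
p_shifted = ddm_p_long(start=0.3)   # start moved toward the 'long' bound
```

    With zero drift the analytic hit probability is (start + threshold) / (2 * threshold), so the shifted start raises the "long" rate from about 0.50 to about 0.65 while drift rate and threshold stay fixed, which is the signature the model-based decomposition looks for.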

  12. Evolution of Motor Control: From Reflexes and Motor Programs to the Equilibrium-Point Hypothesis

    PubMed Central

    Latash, Mark L.

    2009-01-01

    This brief review analyzes the evolution of motor control theories along two lines that emphasize active (motor programs) and reactive (reflexes) features of voluntary movements. It suggests that the only contemporary hypothesis that integrates both approaches in a fruitful way is the equilibrium-point (EP) hypothesis. Physical, physiological, and behavioral foundations of the EP-hypothesis are considered as well as relations between the EP-hypothesis and the recent developments of the notion of motor synergies. The paper ends with a brief review of the criticisms of the EP-hypothesis and challenges that the hypothesis faces at this time. PMID:19823595

  13. Implementing system simulation of C3 systems using autonomous objects

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1987-01-01

    The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as the basic frame of reference. Asynchronous discrete-event simulation relies on fixed points in the model space as the basic frame of reference. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. The use of this approach to analyze the integration of sensor data associated with Command, Control, and Communication (C3) systems is discussed.

  14. Registered Replication Report: Rand, Greene, and Nowak (2012).

    PubMed

    Bouwmeester, S; Verkoeijen, P P J L; Aczel, B; Barbosa, F; Bègue, L; Brañas-Garza, P; Chmura, T G H; Cornelissen, G; Døssing, F S; Espín, A M; Evans, A M; Ferreira-Santos, F; Fiedler, S; Flegr, J; Ghaffari, M; Glöckner, A; Goeschl, T; Guo, L; Hauser, O P; Hernan-Gonzalez, R; Herrero, A; Horne, Z; Houdek, P; Johannesson, M; Koppel, L; Kujal, P; Laine, T; Lohse, J; Martins, E C; Mauro, C; Mischkowski, D; Mukherjee, S; Myrseth, K O R; Navarro-Martínez, D; Neal, T M S; Novakova, J; Pagà, R; Paiva, T O; Palfi, B; Piovesan, M; Rahal, R-M; Salomon, E; Srinivasan, N; Srivastava, A; Szaszi, B; Szollosi, A; Thor, K Ø; Tinghög, G; Trueblood, J S; Van Bavel, J J; van 't Veer, A E; Västfjäll, D; Warner, M; Wengström, E; Wills, J; Wollbrant, C E

    2017-05-01

    In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of -0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.

  15. Registered Replication Report: Rand, Greene, and Nowak (2012)

    PubMed Central

    Bouwmeester, S.; Verkoeijen, P. P. J. L.; Aczel, B.; Barbosa, F.; Bègue, L.; Brañas-Garza, P.; Chmura, T. G. H.; Cornelissen, G.; Døssing, F. S.; Espín, A. M.; Evans, A. M.; Ferreira-Santos, F.; Fiedler, S.; Flegr, J.; Ghaffari, M.; Glöckner, A.; Goeschl, T.; Guo, L.; Hauser, O. P.; Hernan-Gonzalez, R.; Herrero, A.; Horne, Z.; Houdek, P.; Johannesson, M.; Koppel, L.; Kujal, P.; Laine, T.; Lohse, J.; Martins, E. C.; Mauro, C.; Mischkowski, D.; Mukherjee, S.; Myrseth, K. O. R.; Navarro-Martínez, D.; Neal, T. M. S.; Novakova, J.; Pagà, R.; Paiva, T. O.; Palfi, B.; Piovesan, M.; Rahal, R.-M.; Salomon, E.; Srinivasan, N.; Srivastava, A.; Szaszi, B.; Szollosi, A.; Thor, K. Ø.; Tinghög, G.; Trueblood, J. S.; Van Bavel, J. J.; van ‘t Veer, A. E.; Västfjäll, D.; Warner, M.; Wengström, E.; Wills, J.; Wollbrant, C. E.

    2017-01-01

    In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of −0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation. PMID:28475467

  16. Human erythrocytes analyzed by generalized 2D Raman correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Wesełucha-Birczyńska, Aleksandra; Kozicki, Mateusz; Czepiel, Jacek; Łabanowska, Maria; Nowak, Piotr; Kowalczyk, Grzegorz; Kurdziel, Magdalena; Birczyńska, Malwina; Biesiada, Grażyna; Mach, Tomasz; Garlicki, Aleksander

    2014-07-01

    Erythrocytes, the most numerous of the blood cells, consist mainly of two components: a homogeneous interior filled with hemoglobin and an enclosure, the cell membrane. To gain insight into their specific properties we studied the process of disintegration with respect to these two constituents, following the natural aging process of healthy human blood cells. MicroRaman spectra of hemoglobin within single RBCs were recorded using the 514.5 and 785 nm laser lines. The generalized 2D correlation method was applied to analyze the collected spectra, with the time elapsed since blood donation regarded as the external perturbation. This time was no more than 40 days, the current storage limit of blood banks, although the average RBC life span is 120 days. An analysis of the prominent synchronous and asynchronous cross peaks allows us to gain insight into the mechanism of hemoglobin decomposition. Asynchronous cross peaks point toward separation of globin and heme from each other, while synchronous cross peaks indicate globin already broken down into individual amino acids. Raman scattering analysis of the hemoglobin "wrapping", i.e., healthy erythrocyte ghosts, reveals a peculiarity of their behavior: increasing the power of the excitation laser induced alterations in the assemblage of membrane lipids. 2D correlation maps, obtained with increasing laser power as the external perturbation, allow consideration of alterations in erythrocyte membrane structure and composition, which occur first in the proteins. Cross peaks were observed indicating an asynchronous correlation between the senescent-cell antigen (SCA) and heme or protein vibrations. The EPR spectra of whole blood were analyzed with time as the external stimulus; the 2D correlation spectra point toward participation of the selected metal-ion centers in the disintegration process.
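
    Generalized 2D correlation analysis of this kind follows Noda's formalism: the synchronous map is the covariance of the perturbation-ordered dynamic spectra, and the asynchronous map is obtained through the Hilbert-Noda transformation. A minimal numpy sketch (the two-channel test data are illustrative, not the Raman measurements):

```python
import numpy as np

def noda_2d_correlation(spectra):
    """Generalized 2D correlation (Noda) of a perturbation-ordered series.

    `spectra`: rows = perturbation steps (e.g. storage time), columns =
    spectral channels. Returns (synchronous, asynchronous) maps.
    """
    y = spectra - spectra.mean(axis=0)            # dynamic spectra
    m = y.shape[0]
    sync = y.T @ y / (m - 1)                      # synchronous map
    idx = np.arange(m)
    diff = idx[None, :] - idx[:, None]
    hn = np.zeros((m, m))                         # Hilbert-Noda matrix
    hn[diff != 0] = 1.0 / (np.pi * diff[diff != 0])
    asyn = y.T @ hn @ y / (m - 1)                 # asynchronous map
    return sync, asyn

# Two illustrative channels, the second lagging the first along the
# perturbation axis (here a dummy "time" variable)
t = np.linspace(0.0, 1.0, 9)
spectra = np.column_stack([np.sin(2 * np.pi * t),
                           np.sin(2 * np.pi * t - 0.5)])
sync, asyn = noda_2d_correlation(spectra)
```

    The synchronous map is symmetric and the asynchronous map antisymmetric; by Noda's rules the sign of an asynchronous cross peak (read together with the synchronous sign) indicates which band changes first, which is how the globin/heme sequencing above is inferred.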

  17. Molecular weight analyses and enzymatic degradation profiles of the soft-tissue fillers Belotero Balance, Restylane, and Juvéderm Ultra.

    PubMed

    Flynn, Timothy Corcoran; Thompson, David H; Hyun, Seok-Hee

    2013-10-01

    In this study, the authors sought to determine the molecular weight distribution of three hyaluronic acids-Belotero Balance, Restylane, and Juvéderm Ultra-and their rates of degradation following exposure to hyaluronidase. Lot consistency of Belotero Balance also was analyzed. Three lots of Belotero Balance were analyzed using liquid chromatography techniques. The product was found to have high-molecular-weight and low-molecular-weight species. One lot of Belotero Balance was compared to one lot each of Juvéderm Ultra and Restylane. Molecular weights of the species were analyzed. The hyaluronic acids were exposed to ovine testicular hyaluronidase at six time points-baseline and 0.5, 1, 2, 6, and 24 hours-to determine degradation rates. Belotero Balance lots were remarkably consistent. Belotero Balance had the largest high-molecular-weight species, followed by Juvéderm Ultra and Restylane (p < 0.001). Low-molecular-weight differences among all three hyaluronic acids were not statistically significant. Percentages of high-molecular-weight polymer differ among the three materials, with Belotero Balance having the highest fraction of high-molecular-weight polymer. Degradation of the high-molecular-weight species over time showed different molecular weights of the high-molecular-weight fraction. Rates of degradation of the hyaluronic acids following exposure to ovine testicular hyaluronidase were similar. All hyaluronic acids were fully degraded at 24 hours. Fractions of high-molecular-weight polymer differ across the hyaluronic acids tested. The low-molecular-weight differences are not statistically significant. The high-molecular-weight products have different molecular weights at the 0.5- and 2-hour time points when exposed to ovine testicular hyaluronidase and are not statistically different at 24 hours.

  18. Using High-Content Imaging to Analyze Toxicological Tipping ...

    EPA Pesticide Factsheets

    Slide presentation at the International Conference on Toxicological Alternatives & Translational Toxicology (ICTATT), held in China, discussing the possibility of using high-content imaging to analyze toxicological tipping points.

  19. The Effects of College Students' Personal Values on Changes in Learning Approaches

    ERIC Educational Resources Information Center

    Lietz, Petra; Matthews, Bobbie

    2010-01-01

    Many studies of changes in learning approaches have used data from different age groups at one point in time only (Gow and Kember, High Educ 19:307-322, 1990; Watkins and Hattie, Br J Educ Psychol 51:384-393, 1981) or have analyzed the effects of just two or three factors using single level analytical techniques (Cano, Br J Educ Psychol…

  20. Teaching the Voices of History through Primary Sources and Historical Fiction: A Case Study of Teacher and Librarian Roles

    ERIC Educational Resources Information Center

    Stripling, Barbara Kay

    2011-01-01

    The ability to analyze alternative points of view and to empathize (understand the beliefs, attitudes and actions of another from the other's perspective rather than from one's own) are essential building blocks for learning in the 21st century. Empathy for the human participants of historical times has been deemed by a number of educators as…

  1. Variability in total ozone associated with baroclinic waves

    NASA Technical Reports Server (NTRS)

    Mote, Philip W.; Holton, James R.; Wallace, John M.

    1991-01-01

    One-point regression maps of total ozone, formed by regressing Total Ozone Mapping Spectrometer data against time series of bandpass-filtered geopotential height, have been analyzed. The results reveal a strong signature of baroclinic waves in the ozone variability. The regressed patterns are found to be similar in extent and behavior to the relative vorticity patterns reported by Lim and Wallace (1991).

  2. Decrease in Ionized and Total Magnesium Blood Concentrations in Endurance Athletes Following an Exercise Bout Restores within Hours-Potential Consequences for Monitoring and Supplementation.

    PubMed

    Terink, Rieneke; Balvers, Michiel G J; Hopman, Maria T; Witkamp, Renger F; Mensink, Marco; Gunnewiek, Jacqueline M T Klein

    2017-06-01

    Magnesium is essential for optimal sport performance, generating interest in monitoring its status in athletes. However, before measurement of magnesium status in blood can become routine, more insight into its diurnal fluctuations and the effects of exercise itself is necessary. We therefore measured the effect of an acute bout of exercise on ionized (iMg) and total plasma magnesium (tMg) in blood obtained from 18 healthy well-trained endurance athletes (age, 31.1 ± 8.1 yr; VO2max, 50.9 ± 7.5 ml/kg/min) at multiple time points, and compared this with a resting situation. On both days, 7 blood samples were taken at set time points (8:30 fasted, 11:00, 12:30, 13:30, 15:00, 16:00, 18:30). The control day was included to correct for putative diurnal fluctuation of magnesium. During the exercise day, athletes performed a 90 min bicycle ergometer test (70% VO2max) between 11:00 and 12:30. Whole blood samples were analyzed for iMg and plasma for tMg concentrations. Both concentrations decreased significantly after exercise (from 0.52 ± 0.04 to 0.45 ± 0.03 mmol/L and from 0.81 ± 0.07 to 0.73 ± 0.06 mmol/L, respectively; p < .001), while no significant decline was observed during that time interval on control days. Both iMg and tMg returned to baseline, on average, 2.5 hr after exercise. These findings suggest that the timing of blood sampling for Mg status assessment is important. Additional research is needed to establish the recovery time after different types of exercise in order to arrive at general advice regarding the timing of magnesium status assessment in practice.

  3. Yield of atrial fibrillation detection with Textile Wearable Holter from the acute phase of stroke: Pilot study of Crypto-AF registry.

    PubMed

    Pagola, Jorge; Juega, Jesus; Francisco-Pascual, Jaume; Moya, Angel; Sanchis, Mireia; Bustamante, Alejandro; Penalba, Anna; Usero, Maria; Cortijo, Elisa; Arenillas, Juan F; Calleja, Ana I; Sandin-Fuentes, Maria; Rubio, Jeronimo; Mancha, Fernando; Escudero-Martinez, Irene; Moniche, Francisco; de Torres, Reyes; Pérez-Sánchez, Soledad; González-Matos, Carlos E; Vega, Ángela; Pedrote, Alonso A; Arana-Rueda, Eduardo; Montaner, Joan; Molina, Carlos A

    2018-01-15

    We describe the feasibility of monitoring with a Textile Wearable Holter (TWH) in patients included in the Crypto-AF registry. We monitored cryptogenic stroke patients continuously for 28 days, starting in the acute phase (<3 days from stroke onset). We employed a TWH composed of a garment and a recorder. We compared two garments (Lead and Vest) to assess the rate of detection of undiagnosed atrial fibrillation (AF), monitoring compliance, comfort (1 to 5 points), skin lesions, and time analyzed. We describe the timing of AF detection in three periods (0-3, 4-15, and 16-28 days). The rate of detection of undiagnosed AF with the TWH was 21.9% (32 of 146 patients who completed the monitoring). Global time compliance was 90% of the time expected (583/644 h). The median comfort level was 4 points (IQR 3-5). We detected reversible skin lesions in 5.47% (8/146). Comfort was similar between groups, but time compliance (in hours) was longer in the Vest group (591, IQR [521-639]) than in the Lead group (566, IQR [397-620]) (p = 0.025). Time analyzed was also more prolonged in the Vest group (497 h, IQR [419-557]) than in the Lead group (336 h, IQR [140-520]) (p = 0.001). The incidence of AF increased from 5.6% (at 3 days) to 17.5% (at day 15) and up to 20.9% (at day 28). The percentage of AF episodes detected only in each period was 12.5% (0-3 days), 21.7% (4-15 days), and 19% (16-28 days). Twenty-eight-day Holter monitoring from the acute phase of stroke was feasible with the TWH. Following our protocol, only five patients needed to be screened to detect one case of AF. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Using Microsoft PowerPoint as an Astronomical Image Analysis Tool

    NASA Astrophysics Data System (ADS)

    Beck-Winchatz, Bernhard

    2006-12-01

    Engaging students in the analysis of authentic scientific data is an effective way to teach them about the scientific process and to develop their problem solving, teamwork and communication skills. In astronomy several image processing and analysis software tools have been developed for use in school environments. However, the practical implementation in the classroom is often difficult because the teachers may not have the comfort level with computers necessary to install and use these tools, they may not have adequate computer privileges and/or support, and they may not have the time to learn how to use specialized astronomy software. To address this problem, we have developed a set of activities in which students analyze astronomical images using basic tools provided in PowerPoint. These include measuring sizes, distances, and angles, and blinking images. In contrast to specialized software, PowerPoint is broadly available on school computers. Many teachers are already familiar with PowerPoint, and the skills developed while learning how to analyze astronomical images are highly transferable. We will discuss several practical examples of measurements, including the following:

    - Variations in the distances to the sun and moon from their angular sizes
    - Magnetic declination from images of shadows
    - Diameter of the moon from lunar eclipse images
    - Sizes of lunar craters
    - Orbital radii of the Jovian moons and mass of Jupiter
    - Supernova and comet searches
    - Expansion rate of the universe from images of distant galaxies
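
    The angular-size measurements reduce to the small-angle relation distance ≈ physical size / angle (in radians). A minimal sketch (the moon figures are approximate textbook values, not from the presentation):

```python
import math

def distance_from_angular_size(physical_size_km, angular_size_deg):
    """Small-angle estimate: distance ~ physical size / angle in radians."""
    return physical_size_km / math.radians(angular_size_deg)

# Moon: true diameter ~3474 km, angular diameter ~0.52 degrees --
# the sort of angle students can measure on a calibrated image.
d_moon = distance_from_angular_size(3474.0, 0.52)   # ~383,000 km
```

    The same relation, run in reverse with a known distance, turns an on-image angular measurement into a physical size, e.g. for lunar craters.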

  5. Longitudinal Monitoring of Patients With Chronic Low Back Pain During Physical Therapy Treatment Using the STarT Back Screening Tool.

    PubMed

    Medeiros, Flávia Cordeiro; Costa, Leonardo Oliveira Pena; Added, Marco Aurélio Nemitalla; Salomão, Evelyn Cassia; Costa, Lucíola da Cunha Menezes

    2017-05-01

    Study Design Preplanned secondary analysis of a randomized clinical trial. Background The STarT Back Screening Tool (SBST) was developed to screen and to classify patients with low back pain into subgroups for the risk of having a poor prognosis. However, this classification at baseline does not take into account variables that can influence the prognosis during treatment or over time. Objectives (1) To investigate the changes in risk subgroup measured by the SBST over a period of 6 months, and (2) to assess the long-term predictive ability of the SBST when administered at different time points. Methods Patients with chronic nonspecific low back pain (n = 148) receiving physical therapy care as part of a randomized trial were analyzed. Pain intensity, disability, global perceived effect, and the SBST were collected at baseline, 5 weeks, 3 months, and 6 months. Changes in SBST risk classification were calculated. Hierarchical linear regression models adjusted for potential confounders were built to analyze the predictive capabilities of the SBST when administered at different time points. Results A large proportion of patients (60.8%) changed their risk subgroup after receiving physical therapy care. The SBST improved the prediction for all 6-month outcomes when using the 5-week risk subgroup and the difference between baseline and 5-week subgroup, after controlling for potential confounders. The SBST at baseline did not improve the predictive ability of the models after adjusting for confounders. Conclusion This study shows that many patients change SBST risk subgroup after receiving physical therapy care, and that the predictive ability of the SBST in patients with chronic low back pain increases when administered at different time points. Level of Evidence Prognosis, 2b. J Orthop Sports Phys Ther 2017;47(5):314-323. Epub 29 Mar 2017. doi:10.2519/jospt.2017.7199.

  6. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating a ground-truth dataset for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene from different sensors directly by looking up the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
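
    The voxel-voting step can be sketched as follows. This is a single-resolution simplification (the paper organizes voxels in an octree over multiple resolutions), and the labels and coordinates are invented for illustration:

```python
from collections import Counter, defaultdict

def voxel_key(point, size):
    """Integer voxel coordinates of a 3D point at the given resolution."""
    x, y, z = point
    return (int(x // size), int(y // size), int(z // size))

def annotate(reference, query_points, size):
    """Label query points by majority vote of reference labels per voxel.

    `reference` is a list of ((x, y, z), label) pairs; each query point
    receives the most common reference label in its voxel (None if the
    voxel holds no reference points).
    """
    grid = defaultdict(list)
    for point, label in reference:
        grid[voxel_key(point, size)].append(label)
    votes = {k: Counter(v).most_common(1)[0][0] for k, v in grid.items()}
    return [votes.get(voxel_key(p, size)) for p in query_points]

# Invented reference labels for two 1 m voxels of an urban scene
ref = [((0.1, 0.2, 0.0), "facade"), ((0.3, 0.1, 0.2), "facade"),
       ((0.2, 0.4, 0.1), "ground"), ((2.1, 0.1, 0.0), "ground")]
labels = annotate(ref, [(0.25, 0.3, 0.1), (2.4, 0.3, 0.2)], size=1.0)
```

    In the multi-resolution version, a query voxel left empty or ambiguous at a fine resolution can fall back to the vote of its coarser parent in the octree.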

  7. Time-resolved contrast-enhanced MR angiography of the thorax in adults with congenital heart disease.

    PubMed

    Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich

    2006-10-01

    The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs with a frame rate of 1-second duration were acquired. The mask defined as the first data set was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.

  8. Extending nonlinear analysis to short ecological time series.

    PubMed

    Hsieh, Chih-hao; Anderson, Christian; Sugihara, George

    2008-01-01

    Nonlinearity is important and ubiquitous in ecology. Though detectable in principle, nonlinear behavior is often difficult to characterize, analyze, and incorporate mechanistically into models of ecosystem function. One obvious reason is that quantitative nonlinear analysis tools are data intensive (require long time series), and time series in ecology are generally short. Here we demonstrate a useful method that circumvents data limitation and reduces sampling error by combining ecologically similar multispecies time series into one long time series. With this technique, individual ecological time series containing as few as 20 data points can be mined for such important information as (1) significantly improved forecast ability, (2) the presence and location of nonlinearity, and (3) the effective dimensionality (the number of relevant variables) of an ecological system.
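
    The forecasting step at the heart of such nonlinear analysis can be illustrated with a bare-bones nearest-neighbor predictor in a time-delay embedding, a simplified cousin of the simplex projection these authors use; the embedding dimension and the logistic-map data below are illustrative assumptions, not the paper's ecological series:

```python
import numpy as np

def nn_forecast(series, e_dim=2):
    """One-step forecasts via the nearest neighbor in a delay embedding.

    Each point (x[t], ..., x[t+e_dim-1]) is predicted by the observed
    successor of its nearest embedded neighbor (leave-one-out).
    """
    emb = np.column_stack([series[i:len(series) - e_dim + i]
                           for i in range(e_dim)])
    targets = series[e_dim:]
    preds = []
    for i in range(len(targets)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                     # leave the point itself out
        preds.append(targets[int(np.argmin(d))])
    return np.array(preds), targets

# Logistic-map data: a short, noise-free, strongly nonlinear series
x = [0.4]
for _ in range(199):
    x.append(3.9 * x[-1] * (1.0 - x[-1]))
preds, targets = nn_forecast(np.array(x))
rho = np.corrcoef(preds, targets)[0, 1]   # forecast skill
```

    High forecast skill at a low embedding dimension is the signature of low-dimensional nonlinear dynamics; concatenating ecologically similar short series, as the abstract describes, supplies enough neighbors for this step to work on 20-point records.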

  9. Dynamic detection of N-terminal pro-B-type natriuretic peptide helps to predict the outcome of patients with major trauma.

    PubMed

    Qian, A; Zhang, M; Zhao, G

    2015-02-01

    NT-proBNP and BNP have been demonstrated to be prognostic markers in cardiac disease and sepsis. However, the prognostic value and the dynamic changes of BNP or NT-proBNP in trauma patients remain unclear. The present study was conducted to investigate the dynamic changes of NT-proBNP in patients with major trauma (injury severity score [ISS] ≥16) and to determine whether NT-proBNP could be used as a simple index to predict mortality in major trauma patients. This prospective observational study included 60 patients with major trauma. Serum NT-proBNP levels were measured on the 1st, 3rd, and 7th days after injury, and the levels in survivors were compared with those in non-survivors. The efficacy of NT-proBNP to predict survival was analyzed using receiver operating characteristic (ROC) curves. Correlations were analyzed between NT-proBNP and various factors, including ISS, Glasgow coma score, Acute Physiology and Chronic Health Evaluation II (APACHE II) score, central venous pressure, creatine kinase-MB (CK-MB), cardiac troponin I (Tn-I), and procalcitonin (PCT). NT-proBNP levels in patients with traumatic brain injury were compared with those in patients without traumatic brain injury, and a comparison of NT-proBNP levels between patients with and without sepsis was also performed at each time point. NT-proBNP levels in non-survivors were significantly higher than those in survivors at all the indicated time points. In the group of non-survivors, NT-proBNP levels on the 7th day were markedly higher than those on the 1st day; in contrast, NT-proBNP levels in survivors decreased over time. There was no difference in the area under the ROC curve between NT-proBNP and APACHE II or ISS at the three time points. Significant correlations were found between NT-proBNP and ISS on the 1st day; between NT-proBNP and CK-MB, Tn-I, and APACHE II on the 3rd day; and between NT-proBNP and PCT on the 7th day. There were no significant differences in NT-proBNP levels between patients with and without brain trauma at any of the indicated time points, whereas NT-proBNP levels in patients with sepsis were significantly higher than those in patients without sepsis at all the indicated time points. These findings suggest that dynamic detection of serum NT-proBNP might help to predict death in patients with major trauma: a high NT-proBNP level at admission, or one maintained for several days after trauma, indicates poor survival.

  10. Registration of 4D cardiac CT sequences under trajectory constraints with multichannel diffeomorphic demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Xu, Chenyang; Ayache, Nicholas

    2010-07-01

    We propose a framework for the nonlinear spatiotemporal registration of 4D time-series of images based on the Diffeomorphic Demons (DD) algorithm. In this framework, the 4D spatiotemporal registration is decoupled into a 4D temporal registration, defined as mapping physiological states, and a 4D spatial registration, defined as mapping trajectories of physical points. Our contribution focuses more specifically on the 4D spatial registration, which should be consistent over time, as opposed to 3D registration, which solely aims at mapping homologous points at a given time-point. First, we estimate in each sequence the motion displacement field, which is a dense representation of the point trajectories we want to register. Then, we simultaneously perform 3D registrations of corresponding time-points under the constraint of mapping the same physical points over time (the trajectory constraints). Under these constraints, we show that the 4D spatial registration can be formulated as a multichannel registration of 3D images. To solve it, we propose a novel version of the DD algorithm extended to vector-valued 3D images, the Multichannel Diffeomorphic Demons (MDD). For evaluation, this framework is applied to the registration of 4D cardiac computed tomography (CT) sequences and compared to other standard methods with real patient data and synthetic data simulated from a physiologically realistic electromechanical cardiac model. Results show that the trajectory constraints act as a temporal regularization consistent with motion, whereas the multichannel registration acts as a spatial regularization. Finally, using these trajectory constraints with multichannel registration yields the best compromise between registration accuracy, temporal and spatial smoothness, and computation time. A prospective example of application is also presented: the spatiotemporal registration of 4D cardiac CT sequences of the same patient before and after radiofrequency ablation (RFA) for atrial fibrillation (AF). The intersequence spatial transformations over a cardiac cycle allow the analysis and quantification of the regression of left ventricular hypertrophy and its impact on cardiac function.
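    The pooled multichannel demons force can be illustrated in one dimension: the sketch below generalizes the scalar demons update by summing channel contributions in a single numerator and denominator (an assumed normalization; the paper's exact MDD update may differ):

```python
import numpy as np

def multichannel_demons_force(fixed, moving, sigma=1.0):
    """Per-voxel demons update increment for vector-valued 1D images.

    fixed, moving: arrays of shape (C, N) -- C channels on a common grid.
    Returns a displacement increment of shape (N,). Channels are pooled
    into one numerator/denominator, a straightforward vector-image
    generalization of the classic demons force (sketch only).
    """
    grad = np.gradient(fixed, axis=1)        # (C, N) spatial gradients
    diff = moving - fixed                    # (C, N) intensity mismatch
    num = (diff * grad).sum(axis=0)          # pooled over channels
    den = (grad ** 2).sum(axis=0) + (diff ** 2).sum(axis=0) / sigma ** 2
    return np.where(den > 1e-12, num / den, 0.0)
```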

  11. Ultra-performance liquid chromatography/tandem mass spectrometric quantification of structurally diverse drug mixtures using an ESI-APCI multimode ionization source.

    PubMed

    Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S

    2007-01-01

    We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set, the eight test compounds were analyzed as a single mixture, with the mass spectrometer switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during each LC run. Approximately 8-10 data points were collected across each LC peak, which was insufficient for quantitative analysis. In the second set, four compounds were analyzed as a single mixture with the same rapid switching among the four ionization modes; approximately 15 data points were obtained for each LC peak, and quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. In the third set, the eight test compounds were analyzed as a batch: a single compound was analyzed during each LC injection, with the mass spectrometer operating in a single ionization mode, yielding more than 20 data points per LC peak and usable quantification results. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced from 3.5 min (HPLC method) to 1.5 min (UPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.

  12. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    PubMed

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease (AD) patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative, using our hypothesis test to analyze Mini-Mental State Examination (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline in MMSE scores did occur among AD patients. Our finding supports the clinical belief in the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
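    The competing models can be compared directly; below is a minimal sketch of a bilinear (broken-stick) fit with a grid-searched change point and an AIC comparison against the constant-slope null, on simulated scores (the paper's random-effects structure and parametric-bootstrap test are omitted):

```python
import numpy as np

def fit_linear(t, y):
    """OLS line; returns residual sum of squares and parameter count."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid), 2

def fit_bilinear(t, y):
    """Continuous piecewise-linear fit; change point by grid search."""
    best = (np.inf, None)
    for tau in t[2:-2]:                       # interior candidate change points
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        rss = float(resid @ resid)
        if rss < best[0]:
            best = (rss, float(tau))
    return best[0], 4, best[1]                # slope change + tau: 2 extra params

def aic(rss, k, n):
    """Gaussian-likelihood AIC up to an additive constant."""
    return n * np.log(rss / n) + 2 * k
```

    On data with a genuine kink, the bilinear model wins the AIC comparison and the grid search localizes the change point.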

  13. Lévy flights in the presence of a point sink of finite strength

    NASA Astrophysics Data System (ADS)

    Janakiraman, Deepika

    2017-01-01

    In this paper, the absorption of a particle undergoing Lévy flight in the presence of a point sink of arbitrary strength and position is studied. The motion of such a particle is given by a modified Fokker-Planck equation whose exact solution in the Laplace domain can be expressed in terms of the Laplace transform of the unperturbed (sink-free) Green's function. This solution for the Green's function is a well-studied, generic result which applies to both fractional and usual Fokker-Planck equations alike. Using this result, the propagator and the absorption-time distribution are obtained for free Lévy flight and for Lévy flight in linear and harmonic potentials in the presence of a delta-function sink, and their dependence on the sink strength is analyzed. Analytical results are presented for the long-time behavior of the absorption-time distribution in all three above-mentioned potentials. Simulation results closely corroborate the analytical results.
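    In the notation assumed here (Γ the sink strength, x_s its position, and G̃ the Laplace-transformed sink-free Green's function), the standard resolvent-type manipulation referred to above gives:

```latex
% Modified (fractional) Fokker-Planck equation with a point sink of
% strength \Gamma at x_s:
\frac{\partial P(x,t)}{\partial t}
  = \mathcal{L}_{\mathrm{FP}}\,P(x,t) - \Gamma\,\delta(x-x_s)\,P(x,t)

% Laplace-domain propagator in terms of the sink-free Green's function
% \tilde{G}(x,s\,|\,x_0), with initial position x_0:
\tilde{P}(x,s)
  = \tilde{G}(x,s\,|\,x_0)
  - \frac{\Gamma\,\tilde{G}(x,s\,|\,x_s)\,\tilde{G}(x_s,s\,|\,x_0)}
         {1+\Gamma\,\tilde{G}(x_s,s\,|\,x_s)}

% Absorption-time distribution (probability flux into the sink):
\tilde{f}(s)
  = \frac{\Gamma\,\tilde{G}(x_s,s\,|\,x_0)}
         {1+\Gamma\,\tilde{G}(x_s,s\,|\,x_s)}
```

    The delta-sink perturbation closes on itself because the sink couples only to P(x_s, t), which is why the solution involves only G̃ evaluated at x_s.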

  14. Electronic method for autofluorography of macromolecules on two-D matrices

    DOEpatents

    Davidson, Jackson B.; Case, Arthur L.

    1983-01-01

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position-sensitive detection system. A molecule-containing matrix may be treated by conventional means to produce spots of light at the molecule locations, which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image intensifier and a secondary electron conduction camera capable of light-integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. The intensity of any point in the image may be determined from the number stored at that point's memory address. The entire image may be displayed on a television monitor for inspection and photographing, or individual spots may be analyzed through selected readout of the memory locations. Compared with conventional film exposure methods, the exposure time may be reduced 100-1000 times.

  15. Single axis control of ball position in magnetic levitation system using fuzzy logic control

    NASA Astrophysics Data System (ADS)

    Sahoo, Narayan; Tripathy, Ashis; Sharma, Priyaranjan

    2018-03-01

    This paper presents the design and real-time implementation of fuzzy logic control (FLC) for the control of the position of a ferromagnetic ball by manipulating the current flowing in an electromagnet that changes the magnetic field acting on the ball. This system is highly nonlinear and open-loop unstable. Many unmeasurable disturbances also act on the system, making its control complex but interesting for any researcher in the control systems domain. First, the system is modelled using fundamental laws, which yields a nonlinear equation. The nonlinear model is then linearized at an operating point. The fuzzy logic controller is designed after studying the system in closed loop under PID control action. The controller is then implemented in real time using the Simulink Real-Time environment and tuned manually to obtain stable and robust performance. The set-point tracking performance of the FLC and PID controllers was compared and analyzed.
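    A Mamdani-style controller of the kind described can be sketched with triangular memberships and singleton consequents; the input ranges and rule table below are illustrative assumptions, not the authors' tuned values:

```python
def tri(x, a, b, c):
    """Triangular membership function rising over [a, b], falling over [b, c]."""
    return max(0.0, min((x - a) / (b - a) if b != a else 1.0,
                        (c - x) / (c - b) if c != b else 1.0))

def flc(error, d_error):
    """Two-input fuzzy controller: error and its derivative -> control value."""
    # Negative / zero / positive memberships for each input (ranges assumed).
    e = {'N': tri(error, -2, -1, 0), 'Z': tri(error, -1, 0, 1),
         'P': tri(error, 0, 1, 2)}
    de = {'N': tri(d_error, -2, -1, 0), 'Z': tri(d_error, -1, 0, 1),
          'P': tri(d_error, 0, 1, 2)}
    # Rule table with singleton consequents (illustrative values).
    rules = {('N', 'N'): -1.0, ('N', 'Z'): -0.7, ('N', 'P'): 0.0,
             ('Z', 'N'): -0.5, ('Z', 'Z'):  0.0, ('Z', 'P'): 0.5,
             ('P', 'N'):  0.0, ('P', 'Z'):  0.7, ('P', 'P'): 1.0}
    num = den = 0.0
    for (le, lde), u in rules.items():
        w = min(e[le], de[lde])   # rule firing strength (min t-norm)
        num += w * u
        den += w
    return num / den if den else 0.0   # weighted-average defuzzification
```

    In the real system this output would be mapped to a coil-current command each sample period.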

  16. Gender-Role Attitudes and Behavior Across the Transition to Parenthood

    PubMed Central

    Katz-Wise, Sabra L.; Priess, Heather A.; Hyde, Janet S.

    2013-01-01

    Based on social structural theory and identity theory, the current study examined changes in gender-role attitudes and behavior across the first-time transition to parenthood, and following the birth of a second child for experienced mothers and fathers. Data were analyzed from the ongoing longitudinal Wisconsin Study of Families and Work (WSFW). Gender-role attitudes, work and family identity salience, and division of household labor were measured for 205 first-time and 198 experienced mothers and fathers across four time points from five months pregnant to 12 months postpartum. Multi-level latent growth curve analysis was used to analyze the data. In general, parents became more traditional in their gender-role attitudes and behavior following the birth of a child, women changed more than men, and first-time parents changed more than experienced parents. Findings suggest that changes in gender-role attitudes and behavior following the birth of a child may be attributed both to transitioning to parenthood for the first time, and to negotiating the demands of having a new baby in the family. PMID:20053003

  17. Whole genome sequencing in the search for genes associated with the control of SIV infection in the Mauritian macaque model.

    PubMed

    de Manuel, Marc; Shiina, Takashi; Suzuki, Shingo; Dereuddre-Bosquet, Nathalie; Garchon, Henri-Jean; Tanaka, Masayuki; Congy-Jolivet, Nicolas; Aarnink, Alice; Le Grand, Roger; Marques-Bonet, Tomas; Blancher, Antoine

    2018-05-08

    In Mauritian macaques experimentally inoculated with SIV, gene polymorphisms potentially associated with the plasma virus load at set point, approximately 100 days post-inoculation, were investigated. Among the 42 animals inoculated with 50 AID50 of the same strain of SIV, none of which received any preventive or curative treatment, nine individuals were selected: three with a plasma virus load (PVL) among the lowest, three with intermediate PVL values, and three with PVL among the highest. The complete genomes of these nine animals were then analyzed. Initially, attention was focused on variants with a potential functional impact on protein-coding genes (non-synonymous SNPs (NS-SNPs) and splicing variants). Thus, 424 NS-SNPs possibly associated with PVL were detected. These 424 candidate SNPs were genotyped in the 42 SIV experimentally infected animals (including the nine subjected to whole-genome sequencing). The genes containing variants most probably associated with PVL at the set time point are analyzed herein.

  18. Robust cubature Kalman filter for GNSS/INS with missing observations and colored measurement noise.

    PubMed

    Cui, Bingbo; Chen, Xiyuan; Tang, Xihua; Huang, Haoqian; Liu, Xiao

    2018-01-01

    In order to improve the accuracy of GNSS/INS working in GNSS-denied environments, a robust cubature Kalman filter (RCKF) is developed by considering colored measurement noise and missing observations. First, an improved cubature Kalman filter (CKF) is derived by considering colored measurement noise, where a time-differencing approach is applied to yield new observations. Then, after analyzing the disadvantages of existing methods, the measurement augmentation used in processing colored noise is translated into processing the uncertainties of the CKF, and a new sigma-point update framework is utilized to account for the bounded model uncertainties. By reusing the diffused sigma points and the approximation residual in the prediction stage of the CKF, the RCKF is developed and its error performance is analyzed theoretically. Results of numerical experiments and a field test reveal that the RCKF is more robust than the CKF and the extended Kalman filter (EKF); compared with the EKF, the heading error of a land vehicle is reduced by about 72.4%. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
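    The cubature transform underlying the CKF propagates 2n equally weighted points placed at ±√n along the columns of a covariance square root; a minimal predict-update sketch (without the paper's colored-noise and robustness extensions):

```python
import numpy as np

def ckf_step(x, P, z, f, h, Q, R):
    """One predict+update cycle of a basic cubature Kalman filter (sketch)."""
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # cubature points
    # --- predict ---
    Xp = np.array([f(x + S @ xi[:, i]) for i in range(2 * n)])
    x_pred = Xp.mean(axis=0)
    P_pred = (Xp - x_pred).T @ (Xp - x_pred) / (2 * n) + Q
    # --- update ---
    Sp = np.linalg.cholesky(P_pred)
    Xu = np.array([x_pred + Sp @ xi[:, i] for i in range(2 * n)])
    Zu = np.array([h(xc) for xc in Xu])
    z_pred = Zu.mean(axis=0)
    Pzz = (Zu - z_pred).T @ (Zu - z_pred) / (2 * n) + R    # innovation cov.
    Pxz = (Xu - x_pred).T @ (Zu - z_pred) / (2 * n)        # cross-covariance
    K = Pxz @ np.linalg.inv(Pzz)                           # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T
```

    With f and h nonlinear (e.g., INS mechanization and GNSS measurement models), the same cycle applies unchanged; only the point propagation differs from the EKF's linearization.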

  19. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, in which the inventory level must be controlled as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal purchase volume and that the inventory level followed the given set point.
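    The receding-horizon idea can be sketched with a certainty-equivalence controller for the scalar inventory dynamics I_{t+1} = I_t + u_t - d_t (the paper's robust formulation is replaced here by a plain least-squares MPC; all parameters are illustrative):

```python
import numpy as np

def mpc_order(inv, d_hat, i_ref, horizon=5, lam=0.01):
    """Certainty-equivalence receding-horizon order quantity (sketch).

    Demand is replaced by its forecast d_hat; the controller minimizes
    sum (I_k - i_ref)^2 + lam * u_k^2 over the horizon by least squares,
    then applies only the first move (receding horizon).
    """
    H = horizon
    # Predicted levels are linear in the orders: I = base + L u, where L
    # is the cumulative-sum operator and base assumes zero ordering.
    L = np.tril(np.ones((H, H)))
    base = inv - d_hat * np.arange(1, H + 1)
    A = np.vstack([L, np.sqrt(lam) * np.eye(H)])    # tracking + penalty
    b = np.concatenate([i_ref - base, np.zeros(H)])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return max(0.0, u[0])                           # orders are nonnegative
```

    Re-solving at every period with the measured inventory level makes the closed loop track the set point despite the unforecast demand noise.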

  20. Analysis of biological time-lapse microscopic experiment from the point of view of the information theory.

    PubMed

    Štys, Dalibor; Urban, Jan; Vaněk, Jan; Císař, Petr

    2011-06-01

    We report objective analysis of information in the microscopic image of the cell monolayer. The process of transfer of information about the cell by the microscope is analyzed in terms of the classical Shannon information transfer scheme. The information source is the biological object; the information transfer channel is the whole microscope, including the camera chip; the destination is the model of the biological system. The information contribution is analyzed as the information a single point carries relative to the overall information in the image. Subsequently we obtain an information reflection of the biological object. This is transformed into the biological model, which, in information terminology, is the destination. This, we propose, should be constructed as state transitions in individual cells modulated by information bonds between the cells. We show examples of detected cell states in multidimensional state space. This space is reflected as a colour-channel-intensity phenomenological state space. We have also observed information bonds and show examples of them.
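    One simple way to score the information carried by a point, in the spirit of the analysis above, is the self-information -log2 p of its intensity under the image's empirical histogram; a sketch (the binning choices are assumptions, not the authors' procedure):

```python
import numpy as np

def point_information(image, bins=256):
    """Shannon self-information (bits) of each pixel's intensity.

    Rare intensities carry more information; the per-pixel map is one
    crude score of a point's contribution to the whole image.
    """
    hist, edges = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    idx = np.clip(np.digitize(image, edges[:-1]) - 1, 0, bins - 1)
    with np.errstate(divide='ignore'):
        info = -np.log2(p)        # inf for empty bins, which are never indexed
    return info[idx]              # same shape as image, bits per pixel
```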

  1. Analysis of biological time-lapse microscopic experiment from the point of view of the information theory.

    PubMed

    Štys, Dalibor; Urban, Jan; Vaněk, Jan; Císař, Petr

    2010-07-01

    We report objective analysis of information in the microscopic image of the cell monolayer. The process of transfer of information about the cell by the microscope is analyzed in terms of the classical Shannon information transfer scheme. The information source is the biological object; the information transfer channel is the whole microscope, including the camera chip; the destination is the model of the biological system. The information contribution is analyzed as the information a single point carries relative to the overall information in the image. Subsequently we obtain an information reflection of the biological object. This is transformed into the biological model, which, in information terminology, is the destination. This, we propose, should be constructed as state transitions in individual cells modulated by information bonds between the cells. We show examples of detected cell states in multidimensional state space, reflected as a colour-channel-intensity phenomenological state space. We have also observed information bonds and show examples of them. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum-fuel, minimum-time, or minimum-direct-operating-cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to decouple the equations of motion by time scale into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point boundary-value problems obtained by applying the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time scale. Solutions for the first formulation are carried out only to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum-fuel trajectory consists of three segments: a minimum-fuel energy-climb, a cruise-climb, and a minimum-drag glide. The minimum-time trajectory also has three segments: a maximum-dynamic-pressure ascent, a constant-altitude cruise, and a maximum-dynamic-pressure glide. The minimum-direct-operating-cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum-direct-operating-cost trajectory is very similar to the minimum-fuel trajectory. Moreover, the HSCT has three locally optimal cruise speeds, with the globally optimal cruise point at the highest allowable speed if the range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, mixed subsonic and supersonic flight, and varying fuel-efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  3. Effects of dietary 2,2', 4,4'-tetrabromodiphenyl ether (BDE-47) exposure on medaka (Oryzias latipes) swimming behavior.

    PubMed

    Sastre, Salvador; Fernández Torija, Carlos; Carbonell, Gregoria; Rodríguez Martín, José Antonio; Beltrán, Eulalia María; González-Doncel, Miguel

    2018-02-01

    A diet fortified with 2,2',4,4'-tetrabromodiphenyl ether (BDE-47: 0, 10, 100, and 1000 ng/g) was dosed to 4-7-day-old post-hatch medaka fish for 40 days to evaluate the effects on the swimming activity of fish using a miniaturized swimming flume. Chlorpyrifos (CF)-exposed fish were selected as the positive control to assess the validity and sensitivity of the behavioral findings. After 20 and 40 days of exposure, locomotor activity was analyzed for 6 min in a flume section (arena). The CF positive control for each time point consisted of fish exposed to 50 ng CF/ml for 48 h. Swimming patterns, presented as two-dimensional heat maps of fish movement and positioning, were obtained by geostatistical analyses. The heat maps of the control groups at time point 20 revealed swimming patterns visually comparable to those of the BDE-47-treated groups. For the comparative fish-positioning analysis, both arenas were divided into 15 proportional areas. No statistical differences were found between residence times in the areas from the control groups and those from the BDE-47-treated groups. At time point 40, the overall heat map patterns of the control groups differed visually from those of the 100-ng BDE-47/g-treated group, but a comparative analysis of the residence times in the corresponding 15 areas did not reveal consistent differences. The relative distances traveled by the control and treated groups at time points 20 and 40 were also comparable. The heat maps of CF-treated fish at both time points showed swimming patterns that contrasted with those of the controls. These differential patterns were statistically supported by differences in the residence times for different areas. The relative distances traveled by the CF-treated fish were also significantly shorter. These results confirm the validity of the experimental design and indicate that dietary BDE-47 exposure does not affect forced swimming in medaka at growing stages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Harmonic generation of Lamb waves

    NASA Astrophysics Data System (ADS)

    Ing, Ros Kiri

    2002-11-01

    Lamb waves are dispersive waves that propagate in a number of distinct modes that depend on the central frequency and the frequency band. Exploiting these properties and using the time-reversal process, it has been shown that a hyperfocusing effect can be obtained [R. K. Ing and M. Fink, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 45, 1032-1043 (1998)]. This focusing effect combines the time recompression of the dispersive Lamb waves with the constructive interference of the modes involved at the focus point. The hyperfocusing effect is interesting because it allows the amplitude of the Lamb waves to reach very large values at the focus point. In our experiments, Lamb waves with normal-displacement amplitudes on the order of micrometers have been achieved on the free surface of a Duralumin plate of 3 mm thickness. By analyzing the Lamb waves in the neighborhood of the focus point using the 2-D Fourier transform technique, a nonlinear process of harmonic generation is then observed; the fundamental frequency component is centered at 1.5 MHz. This nonlinear process is studied and quantified.

  5. Feature-based registration of historical aerial images by Area Minimization

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sudhagar; Schenk, Toni

    2016-06-01

    The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Another limitation is the determination of identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.

  6. Development of a methodology for assessing the safety of embedded software systems

    NASA Technical Reports Server (NTRS)

    Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.

    1993-01-01

    A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees, which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.

  7. Critical Events in the Lives of Interns

    PubMed Central

    Graham, Mark; Schmidt, Hilary; Stern, David T.; Miller, Steven Z.

    2008-01-01

    BACKGROUND Early residency is a crucial time in the professional development of physicians. As interns assume primary care for their patients, they take on new responsibilities. The events they find memorable during this time could provide us with insight into their developing professional identities. OBJECTIVE To evaluate the most critical events in the lives of interns. PARTICIPANTS Forty-one internal medicine residents at one program participated in a two-day retreat in the fall of their first year. Each resident provided a written description of a recent high point, low point, and patient conflict. MEASUREMENTS We used a variant of grounded theory to analyze these critical incidents and determine the underlying themes of early internship. Independent inter-rater agreement of >90% was achieved for the coding of excerpts. MAIN RESULTS The 123 critical incidents were clustered into 23 categories. The categories were further organized into six themes: confidence, life balance, connections, emotional responses, managing expectations, and facilitating teamwork. High points were primarily in the themes of confidence and connections. Low points were dispersed more generally throughout the conceptual framework. Conflicts with patients were about negotiating the expectations inherent in the physician–patient relationship. CONCLUSION The high points, low points, and conflicts reported by early residents provide us with a glimpse into the lives of interns. The themes we have identified reflect critical challenges interns face in the development of their professional identity. Program directors could use this process and conceptual framework to guide the development and promotion of residents’ emerging professional identities. PMID:18972091

  8. Evaluation of two disinfection/sterilization methods on silicon rubber-based composite finishing instruments.

    PubMed

    Lacerda, Vánia A; Pereira, Leandro O; Hirata JUNIOR, Raphael; Perez, Cesar R

    2015-12-01

    To evaluate the effectiveness of disinfection/sterilization methods and their effects on the polishing capacity, micromorphology, and composition of two different composite finishing and polishing instruments. Two brands of finishing and polishing instruments (Jiffy and Optimize) were analyzed. For the antimicrobial test, 60 points (30 of each brand) were used for polishing composite restorations and submitted to three different disinfection/sterilization methods: none (control), autoclaving, and immersion in peracetic acid for 60 minutes. In vitro tests were performed to evaluate the polishing performance on resin composite disks (Amelogen) using a 3D scanner (Talyscan) and to evaluate the effects on the points' surface composition (XRF) and micromorphology (MEV) after completing the polishing and sterilizing routine five times. Both sterilization/disinfection methods were efficient against oral cultivable organisms, and no deleterious modification of the point surfaces was observed.

  9. Replacement Condition Detection of Railway Point Machines Using an Electric Current Sensor.

    PubMed

    Sa, Jaewon; Choi, Younchang; Chung, Yongwha; Kim, Hee-Young; Park, Daihee; Yoon, Sukhan

    2017-01-29

    Detecting replacement conditions of railway point machines is important to simultaneously satisfy the budget-limit and train-safety requirements. In this study, we consider classification of the subtle differences in the aging effect, using electric current shape analysis, for the purpose of replacement condition detection of railway point machines. After analyzing the shapes of after-replacement data and then labeling the shapes of each before-replacement data set, we can derive criteria that can handle the subtle differences between "does-not-need-to-be-replaced" and "needs-to-be-replaced" shapes. On the basis of the experimental results with in-field replacement data, we confirmed that the proposed method could detect the replacement conditions with acceptable accuracy, as well as provide visual interpretability of the criteria used for the time-series classification.
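    Shape-based time-series classification of the kind described is often done with a warping-invariant distance; below is a minimal 1-NN sketch using dynamic time warping (the authors' actual features and classifier are not specified here):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series (sketch)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_template(shape, templates):
    """1-NN classification of a current shape against labelled templates."""
    return min(templates, key=lambda lbl: dtw_distance(shape, templates[lbl]))
```

    Labelled after-replacement and before-replacement current shapes would serve as the templates; a new measurement is assigned the label of its nearest template.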

  10. Replacement Condition Detection of Railway Point Machines Using an Electric Current Sensor

    PubMed Central

    Sa, Jaewon; Choi, Younchang; Chung, Yongwha; Kim, Hee-Young; Park, Daihee; Yoon, Sukhan

    2017-01-01

    Detecting replacement conditions of railway point machines is important to simultaneously satisfy the budget-limit and train-safety requirements. In this study, we consider classification of the subtle differences in the aging effect—using electric current shape analysis—for the purpose of replacement condition detection of railway point machines. After analyzing the shapes of after-replacement data and then labeling the shapes of each before-replacement data, we can derive the criteria that can handle the subtle differences between “does-not-need-to-be-replaced” and “needs-to-be-replaced” shapes. On the basis of the experimental results with in-field replacement data, we confirmed that the proposed method could detect the replacement conditions with acceptable accuracy, as well as provide visual interpretability of the criteria used for the time-series classification. PMID:28146057

  11. Analysis for collapse behavior of resist pattern in short develop time process using atomic force microscope

    NASA Astrophysics Data System (ADS)

    Sanada, Masakazu; Tamada, Osamu; Ishikawa, Atsushi; Kawai, Akira

    2005-05-01

    The adhesion properties of resist are characterized with the DPAT (direct peeling with atomic force microscope (AFM) tip) method, using 193-nm resist patterns of 180-nm dot shape developed for various developing times between 12 and 120 seconds, in order to analyze why the short-develop-time process suppresses pattern collapse. The surface free energy and refractive index of the resist film were also investigated as functions of developing time from a thermodynamic point of view. A surface-energy balance model was adopted to analyze the intrusion of developer solution into the resist-substrate interface. It can be explained quantitatively that the intrusion energy of the developer solution acts to weaken the adhesion strength of the resist pattern to the substrate; furthermore, the intrusion energy becomes larger with increasing developing time. Analysis with the DPAT method indicates that pattern collapse is accompanied by interface and cohesion destruction. From an interface-science standpoint, the short-develop-time process proved effective in suppressing pattern collapse because the resist pattern has a higher adhesion energy to the substrate at shorter developing times.

  12. With the future behind them: convergent evidence from aymara language and gesture in the crosslinguistic comparison of spatial construals of time.

    PubMed

    Núñez, Rafael E; Sweetser, Eve

    2006-05-06

    Cognitive research on metaphoric concepts of time has focused on differences between moving Ego and moving time models, but even more basic is the contrast between Ego- and temporal-reference-point models. Dynamic models appear to be quasi-universal cross-culturally, as does the generalization that in Ego-reference-point models, FUTURE IS IN FRONT OF EGO and PAST IS IN BACK OF EGO. The Aymara language instead has a major static model of time wherein FUTURE IS BEHIND EGO and PAST IS IN FRONT OF EGO; linguistic and gestural data give strong confirmation of this unusual culture-specific cognitive pattern. Gestural data provide crucial information unavailable to purely linguistic analysis, suggesting that when investigating conceptual systems both forms of expression should be analyzed complementarily. Important issues in embodied cognition are raised: how fully shared are bodily grounded motivations for universal cognitive patterns, what makes a rare pattern emerge, and what are the cultural entailments of such patterns? 2006 Lawrence Erlbaum Associates, Inc.

  13. The structural approach to shared knowledge: an application to engineering design teams.

    PubMed

    Avnet, Mark S; Weigel, Annalisa L

    2013-06-01

    We propose a methodology for analyzing shared knowledge in engineering design teams. Whereas prior work has focused on shared knowledge in small teams at a specific point in time, the model presented here is both scalable and dynamic. By quantifying team members' common views of design drivers, we build a network of shared mental models to reveal the structure of shared knowledge at a snapshot in time. Based on a structural comparison of networks at different points in time, a metric of change in shared knowledge is computed. Analysis of survey data from 12 conceptual space mission design sessions reveals a correlation between change in shared knowledge and each of several system attributes, including system development time, system mass, and technological maturity. From these results, we conclude that an early period of learning and consensus building could be beneficial to the design of engineered systems. Although we do not examine team performance directly, we demonstrate that shared knowledge is related to the technical design and thus provide a foundation for improving design products by incorporating the knowledge and thoughts of the engineering design team into the process.
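    One plausible way to compute such a structural change metric (the paper's exact formulation is not given here, so the Frobenius-norm normalization below is an assumption) is a normalized distance between the shared-mental-model networks at two snapshots:

```python
import numpy as np

def change_in_shared_knowledge(A_t1, A_t2):
    """Normalized Frobenius distance between shared-mental-model networks
    (nodes = team members, edge weights = overlap in views of design drivers)."""
    return np.linalg.norm(A_t2 - A_t1) / np.linalg.norm(A_t1)

# Hypothetical 4-person team: consensus building adds one shared view.
A_early = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
A_late = A_early.copy()
A_late[0, 3] = A_late[3, 0] = 1.0    # a new shared view emerges

print(round(change_in_shared_knowledge(A_early, A_late), 3))  # → 0.577
```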

  14. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

    Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate translation of radiotherapy research ideas to new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory-log-based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format, and the researcher can then interactively modify or add control points to it. The delivered beam can be verified by reading the generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes, and gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available, and its advantages over standalone software are: i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades, and iv) OS independence. Veritas is written using open-source tools such as Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points; kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. Disclosure: the author is a full-time employee of Varian Medical Systems, Palo Alto.

  15. PULSE AMPLITUDE ANALYZER

    DOEpatents

    Greenblatt, M.H.

    1958-03-25

    This patent pertains to pulse amplitude analyzers for sorting and counting a series of pulses, and specifically discloses an analyzer which is simple in construction and presents the pulse height distribution visually on an oscilloscope screen. According to the invention, the pulses are applied to the vertical deflection plates of an oscilloscope and trigger the horizontal sweep. Each pulse starts at the same point on the screen and has its maximum amplitude substantially along the same vertical line. A mask is placed over the screen except for a slot running along the line where the maximum amplitudes of the pulses appear. After the slot has been scanned by a photocell in combination with a slotted rotating disk, the photocell signal is displayed on an auxiliary oscilloscope as vertical deflection along a horizontal time base to portray the pulse amplitude distribution.
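    The analyzer's principle, sorting pulses by peak amplitude and accumulating their distribution, has a direct software analog. The sketch below uses entirely hypothetical pulse shapes and amplitudes to show the histogram that the slot-and-photocell arrangement produces optically:

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse(peak, n=50):
    """Triangular stand-in for a detector pulse rising to `peak` and decaying."""
    rise = np.linspace(0.0, peak, n // 2)
    return np.concatenate([rise, rise[::-1]])

# Emulate the analyzer: record each pulse's maximum amplitude, then bin the
# maxima to form the pulse-height distribution the masked screen displays.
peaks = rng.normal(loc=5.0, scale=1.0, size=1000)   # hypothetical amplitudes
maxima = [pulse(p).max() for p in peaks]
counts, edges = np.histogram(maxima, bins=20)

peak_bin = np.argmax(counts)
print("most populated bin:", edges[peak_bin], "to", edges[peak_bin + 1])
```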

  16. Molecular pathological analysis for determining the possible mechanism of piperonyl butoxide-induced hepatocarcinogenesis in mice.

    PubMed

    Muguruma, Masako; Nishimura, Jihei; Jin, Meilan; Kashida, Yoko; Moto, Mitsuyoshi; Takahashi, Miwa; Yokouchi, Yusuke; Mitsumori, Kunitoshi

    2006-12-07

    Piperonyl butoxide (PBO), alpha-[2-(2-butoxyethoxy)ethoxy]-4,5-methylene-dioxy-2-propyltoluene, is widely used as a synergist for pyrethrins. In order to clarify the possible mechanism of non-genotoxic hepatocarcinogenesis induced by PBO, molecular pathological analyses consisting of low-density microarray analysis and real-time reverse transcriptase (RT)-PCR were performed in male ICR mice fed a basal powdered diet containing 6000 or 0 ppm PBO for 1, 4, or 8 weeks. The animals were sacrificed at weeks 1, 4, and 8, and the livers were histopathologically examined and analyzed for gene expression using the microarray at weeks 1 and 4, followed by real-time RT-PCR at each time point. Reactive oxygen species (ROS) products were also measured using liver microsomes. At each time point, the hepatocytes of PBO-treated mice showed centrilobular hypertrophy and increased lipofuscin deposition in Schmorl staining. ROS products were significantly increased in the liver microsomes of PBO-treated mice. In the microarray analysis, oxidative and metabolic stress-related genes--cytochrome P450 (Cyp) 1A1, Cyp2A5 (week 1 only), Cyp2B9, Cyp2B10, and NADPH-cytochrome P450 oxidoreductase (Por)--were over-expressed in mice given PBO at weeks 1 and 4. Fluctuations of these genes were confirmed by real-time RT-PCR in PBO-treated mice at each time point. In additional real-time RT-PCR analyses, the expression of the Cyclin D1 gene, a key regulator of cell-cycle progression, and the Xrcc5 gene, a DNA damage repair-related gene, was significantly increased at each time point and at week 8, respectively. These results suggest the possibility that PBO has the potential to generate ROS via the metabolic pathway and to induce oxidative stress, including oxidative DNA damage, resulting in the induction of hepatocellular tumors in mice.

  17. Speed of recovery after arthroscopic rotator cuff repair.

    PubMed

    Kurowicki, Jennifer; Berglund, Derek D; Momoh, Enesi; Disla, Shanell; Horn, Brandon; Giveans, M Russell; Levy, Jonathan C

    2017-07-01

    The purpose of this study was to delineate the time taken to achieve maximum improvement (plateau of recovery) and the degree of recovery observed at various time points (speed of recovery) for pain and function after arthroscopic rotator cuff repair. An institutional shoulder surgery registry query identified 627 patients who underwent arthroscopic rotator cuff repair between 2006 and 2015. Measured range of motion, patient satisfaction, and patient-reported outcome measures were analyzed for preoperative, 3-month, 6-month, 1-year, and 2-year intervals. Subgroup analysis was performed on the basis of tear size by retraction grade and number of anchors used. As an entire group, the plateau of maximum recovery for pain, function, and motion occurred at 1 year. Satisfaction with surgery was >96% at all time points. At 3 months, 74% of improvement in pain and 45% to 58% of functional improvement were realized. However, only 22% of elevation improvement was achieved (P < .001). At 6 months, 89% of improvement in pain, 81% to 88% of functional improvement, and 78% of elevation improvement were achieved (P < .001). Larger tears had a slower speed of recovery for Single Assessment Numeric Evaluation scores, forward elevation, and external rotation. Smaller tears had higher motion and functional scores across all time points. Tear size did not influence pain levels. The plateau of maximum recovery after rotator cuff repair occurred at 1 year with high satisfaction rates at all time points. At 3 months, approximately 75% of pain relief and 50% of functional recovery can be expected. Larger tears have a slower speed of recovery. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
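    The speed-of-recovery percentages above are the fraction of the eventual (plateau) improvement realized at a given follow-up. A minimal sketch of that arithmetic, with hypothetical pain scores chosen to reproduce the ~75%-at-3-months figure:

```python
def percent_of_total_improvement(pre, score_t, plateau):
    """Fraction (in %) of the eventual plateau improvement realized at time t."""
    return 100.0 * (score_t - pre) / (plateau - pre)

# Hypothetical pain scores (lower is better): preoperative 8.0,
# 2.6 at three months, 0.8 at the one-year plateau.
pre, three_months, plateau = 8.0, 2.6, 0.8
print(round(percent_of_total_improvement(pre, three_months, plateau)))  # → 75
```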

  18. Parent and Child Personality Traits and Children's Externalizing Problem Behavior from Age 4 to 9 Years: A Cohort-Sequential Latent Growth Curve Analysis

    ERIC Educational Resources Information Center

    Prinzie, P.; Onghena, P.; Hellinckx, W.

    2005-01-01

    Cohort-sequential latent growth modeling was used to analyze longitudinal data for children's externalizing behavior from four overlapping age cohorts (4, 5, 6, and 7 years at first assessment) measured at three annual time points. The data included mother and father ratings on the Child Behavior Checklist and the Five-Factor Personality Inventory…

  19. Regulating Tobacco Product Advertising and Promotions in the Retail Environment: A Roadmap for States and Localities.

    PubMed

    Lange, Tamara; Hoefges, Michael; Ribisl, Kurt M

    2015-01-01

    Recent amendments to federal law and a burgeoning body of research have intensified public health officials' interest in reducing youth initiation of tobacco use, including by regulating the time, place, or manner of tobacco product advertising at the point of sale. This article analyzes legal obstacles to various strategies for reducing youth initiation. © 2015 American Society of Law, Medicine & Ethics, Inc.

  20. An analytical method to assess spruce beetle impacts on white spruce resources, Kenai Peninsula, Alaska.

    Treesearch

    Willem W.S. van Hees

    1992-01-01

    Forest inventory data collected in 1987 from sample plots established on the Kenai Peninsula were analyzed to provide point-in-time estimates of the trend and current status of a spruce beetle infestation. Ground plots were categorized by stage of infestation. Estimates of numbers of live and dead white spruce trees, cubic-foot volume in those trees, and areal extent...

  1. Photogrammetry research for FAST eleven-meter reflector panel surface shape measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei; Zhu, Lichun; Li, Weimin; Hu, Jingwen; Zhai, Xuebing

    2010-10-01

    In order to design and manufacture the measuring equipment for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) active reflector, measurement of each reflector panel's surface shape was presented, static measurement of the whole neutral spherical network of nodes was performed, and real-time dynamic measurement of the cable network's dynamic deformation was undertaken. In the implementation of FAST, reflector panel surface-shape detection was completed before installation of the eleven-meter reflector panels. A binocular vision system was constructed based on the binocular stereo vision method from machine vision, and the eleven-meter reflector panel surface shape was measured photogrammetrically. The cameras were calibrated with feature points: under a linear camera model, a lighting-spot array was used as the calibration pattern, and the intrinsic and extrinsic parameters were acquired. Images were collected with the two cameras for digital image processing and analysis; feature points were extracted with a characteristic-point detection algorithm and matched using the epipolar constraint. Three-dimensional reconstructed coordinates of the feature points were analyzed, and the reflector panel surface-shape structure was established by curve and surface fitting. The error of the reflector panel surface shape was calculated to realize automatic measurement of the panel surface shape. The results show that the unit reflector panel surface inspection accuracy was 2.30 mm, within the standard deviation error of 5.00 mm. Compared with the required reflector panel machining precision, photogrammetry has adequate precision and operational feasibility for eleven-meter reflector panel surface-shape measurement for FAST.
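    The core of the binocular reconstruction step, recovering a 3-D point from its two matched image projections, can be sketched with linear (DLT) triangulation. The camera matrices below are hypothetical stand-ins, not FAST calibration data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its projections
    x1, x2 in two calibrated cameras with 3x4 projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A, homogeneous coordinates
    return X[:3] / X[3]         # dehomogenize

def project(P, X):
    """Pinhole projection of a 3-D point to image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_rec, X_true))  # → True
```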

  2. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay

    PubMed Central

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter; Troldborg, Carl Peder; Fodgaard, Henrik; van Os, Sylvana; Pedersen, Christian

    2017-01-01

    High-sensitivity cardiac troponin assay development enables determination of biological variation in healthy populations and more accurate interpretation of clinical results, and points towards earlier diagnosis and rule-out of acute myocardial infarction. In this paper, we report on preliminary tests of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by a factor of 5 for the standard troponin I assay and by a factor of 3 for the research high-sensitivity troponin I assay, compared to flash-lamp excitation. The obtained limit of detection was 0.22 ng/L measured on plasma with the research high-sensitivity troponin I assay and 1.9 ng/L measured on tris-saline-azide buffer containing bovine serum albumin with the standard troponin I assay. We discuss the optimization of time-resolved detection of lanthanide fluorescence based on the time constants of the system and analyze the background and noise sources in a heterogeneous fluoroimmunoassay. We determine the limiting factors and their impact on measurement performance. The suggested model can be generally applied to fluoroimmunoassays employing the dry-cup concept. PMID:28856047
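    A common convention for the limit of detection quoted in such reports (not necessarily the one used in this particular study) is the mean blank signal plus three standard deviations of the blank. A minimal sketch with hypothetical blank readings:

```python
import statistics

def limit_of_detection(blank_readings, k=3.0):
    """LoD = mean(blank) + k * SD(blank); k = 3 is a widely used choice."""
    return statistics.mean(blank_readings) + k * statistics.stdev(blank_readings)

blanks = [1.02, 0.98, 1.05, 0.97, 1.01, 1.00]   # hypothetical blank signals (a.u.)
print(round(limit_of_detection(blanks), 3))  # → 1.091
```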

  3. Interior of black holes and information recovery

    NASA Astrophysics Data System (ADS)

    Kawai, Hikaru; Yokokura, Yuki

    2016-02-01

    We analyze the time evolution of a spherically symmetric collapsing matter from the point of view that black holes evaporate by nature. We first consider a spherical thin shell that falls in the metric of an evaporating Schwarzschild black hole whose radius a(t) decreases in time. The important point is that the shell can never reach a(t) but approaches a(t) - a(t) da(t)/dt. This situation holds at any radius because the motion of a shell in a spherically symmetric system is not affected by the outside. In this way, we find that the collapsing matter evaporates without forming a horizon. Nevertheless, a Hawking-like radiation is created in the metric, and the object looks the same as a conventional black hole from the outside. We then discuss how the information of the matter is recovered. We also consider a black hole that is adiabatically grown in a heat bath and obtain the interior metric. We show that it is the self-consistent solution of Gμν = 8πG⟨Tμν⟩ and that the four-dimensional Weyl anomaly induces the radiation and a strong angular pressure. Finally, we analyze the internal structures of the charged and the slowly rotating black holes.

  4. VizieR Online Data Catalog: ChaMP X-ray point source catalog (Kim+, 2007)

    NASA Astrophysics Data System (ADS)

    Kim, M.; Kim, D.-W.; Wilkes, B. J.; Green, P. J.; Kim, E.; Anderson, C. S.; Barkhouse, W. A.; Evans, N. R.; Ivezic, Z.; Karovska, M.; Kashyap, V. L.; Lee, M. G.; Maksym, P.; Mossman, A. E.; Silverman, J. D.; Tananbaum, H. D.

    2009-01-01

    We present the Chandra Multiwavelength Project (ChaMP) X-ray point source catalog with ~6800 X-ray sources detected in 149 Chandra observations covering ~10 deg^2. The full ChaMP catalog sample is 7 times larger than the initial published ChaMP catalog. The exposure time of the fields in our sample ranges from 0.9 to 124 ks, corresponding to a deepest X-ray flux limit of f(0.5-8.0 keV) = 9 x 10^-16 erg/cm^2/s. The ChaMP X-ray data have been uniformly reduced and analyzed with ChaMP-specific pipelines and then carefully validated by visual inspection. The ChaMP catalog includes X-ray photometric data in eight different energy bands as well as X-ray spectral hardness ratios and colors. To best utilize the ChaMP catalog, we also present the source reliability, detection probability, and positional uncertainty. (10 data files).

  5. Web-HLA and Service-Enabled RTI in the Simulation Grid

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin

    HLA-based simulations in a grid environment have become a main research hotspot in the M&S community, but the current HLA has many shortcomings when run in a grid environment. This paper analyzes the analogies between HLA and OGSA from the software-architecture point of view, and points out that the service-oriented method should be introduced into the three components of HLA to overcome these shortcomings. The paper proposes an expanded running architecture that integrates HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to handle the bottleneck of efficiently realizing the HLA time-management mechanism, the paper proposes a centralized approach in which the CRC of the SE-RTI takes charge of time management and the dispatching of TSO events for each federate. Benchmark experiments indicate that the running speed of simulations over the Internet or a WAN is noticeably improved.

  6. Low-cost computing and network communication for a point-of-care device to perform a 3-part leukocyte differential

    NASA Astrophysics Data System (ADS)

    Powless, Amy J.; Feekin, Lauren E.; Hutcheson, Joshua A.; Alapat, Daisy V.; Muldoon, Timothy J.

    2016-03-01

    Point-of-care approaches for 3-part leukocyte differentials (granulocyte, monocyte, and lymphocyte), traditionally performed using a hematology analyzer within a panel of tests called a complete blood count (CBC), are essential not only to reduce cost but to provide faster results in low-resource areas. Recent developments in lab-on-a-chip devices have shown promise in reducing the size and reagents used, translating to a decrease in overall cost. Furthermore, smartphone diagnostic approaches have shown much promise in the area of point-of-care diagnostics, but their relatively high per-unit cost may limit their utility in some settings. We present here a method to reduce the computing cost of a simple epi-fluorescence imaging system by using a Raspberry Pi (single-board computer, <$40) to perform a 3-part leukocyte differential comparable to results from a hematology analyzer. This system uses a USB color camera in conjunction with a leukocyte-selective vital dye (acridine orange) to determine a leukocyte count and differential from a low volume (<20 microliters) of whole blood obtained via fingerstick. Additionally, the system utilizes a "cloud-based" approach to send image data from the Raspberry Pi to a main server and return results back to the user, exporting the bulk of the computational requirements. Six images were acquired per minute with up to 200 cells per field of view. Preliminary results showed that the differential count varied significantly in monocytes with a 1-minute time difference, indicating the importance of time-gating to produce an accurate/consistent differential.

  7. Repeat synoptic sampling reveals drivers of change in carbon and nutrient chemistry of Arctic catchments

    NASA Astrophysics Data System (ADS)

    Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.

    2017-12-01

    Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: 1. distributed, top-down permafrost degradation, and 2. discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2, and included three distinct types of Arctic landscapes - mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 to 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially-distributed mass balance revealed conservative transport of DOC and nitrogen, and indicates there may be strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.
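    The variance-collapse analysis can be sketched by binning subcatchments by area and computing the between-subcatchment variance in each bin. The data below are synthetic, constructed so that variance shrinks with catchment area as the abstract describes; the numbers are not the study's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic subcatchments: DOC varies strongly among small catchments
# (discrete point sources) and converges in large ones as signals mix.
areas = 10 ** rng.uniform(-1, 2, 500)                   # 0.1-100 km^2
doc = 5.0 + rng.normal(0.0, 3.0, 500) / np.sqrt(areas)  # mg/L, hypothetical

for lo, hi in [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0)]:
    sel = (areas >= lo) & (areas < hi)
    print(f"{lo:5.1f}-{hi:5.1f} km^2: n={sel.sum():3d}, variance={doc[sel].var():6.2f}")
```

    The telltale signature is that the between-subcatchment variance in the smallest-area bin dwarfs that in the largest, which is what distinguishes point-source from diffuse-source solute delivery.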

  8. Drinking, smoking, and educational achievement: Cross-lagged associations from adolescence to adulthood

    PubMed Central

    Latvala, Antti; Rose, Richard J.; Pulkkinen, Lea; Dick, Danielle M.; Korhonen, Tellervo; Kaprio, Jaakko

    2014-01-01

    Background Adolescent substance use is associated with lower educational achievement but the directionality of the association remains uncertain. We analyzed data on drinking, smoking and educational achievement to study the associations between substance use and education from early adolescence to young adulthood. Methods Longitudinal data from four time points (ages 12, 14, 17, and 19-27 years) from a population-based cohort study of Finnish twin individuals were used to estimate bivariate cross-lagged path models for substance use and educational achievement, adjusting for sex, parental covariates, and adolescent externalizing behavior. A total of 4,761 individuals (49.4% females) were included in the analyses. Educational achievement was assessed with teacher-reported grade point average at ages 12 and 14, and with self-reported student status and completed education at age 17 and in young adulthood. From self-reported questionnaire items, frequency of any drinking, frequency of drinking to intoxication, any smoking and daily smoking were analyzed. Results Alcohol use and smoking behaviors at ages 12 and 14 predicted lower educational achievement at later time points even after previous achievement and confounding factors were taken into account. Lower school achievement in adolescence predicted a higher likelihood of engaging in smoking behaviors but did not predict later alcohol use. Higher educational attainment at age 17 predicted more frequent drinking in young adulthood. Conclusions Adolescent drinking behaviors are associated with lower future educational achievement independently of prior achievement, whereas smoking both predicts and is predicted by lower achievement. Early substance use indexes elevated risk for poor educational outcomes. PMID:24548801

  9. Analysis of Land Subsidence Monitoring in Mining Area with Time-Series Insar Technology

    NASA Astrophysics Data System (ADS)

    Sun, N.; Wang, Y. J.

    2018-04-01

    Time-series InSAR technology has become a popular land subsidence monitoring method in recent years because of its advantages such as high accuracy, wide coverage, low cost, dense monitoring points, and freedom from accessibility restrictions. In this paper, we applied two kinds of satellite data, ALOS PALSAR and RADARSAT-2, to obtain subsidence monitoring results for the study area over two time periods using time-series InSAR technology. By analyzing the deformation range, rate, and amount, a time-series analysis of land subsidence in the mining area was realized. The results show that InSAR technology can be used to monitor land subsidence over large areas and meets the demand of subsidence monitoring in mining areas.

  10. Triatomine Infestation in Guatemala: Spatial Assessment after Two Rounds of Vector Control

    PubMed Central

    Manne, Jennifer; Nakagawa, Jun; Yamagata, Yoichi; Goehler, Alexander; Brownstein, John S.; Castro, Marcia C.

    2012-01-01

    In 2000, the Guatemalan Ministry of Health initiated a Chagas disease program to control Rhodnius prolixus and Triatoma dimidiata by periodic house spraying with pyrethroid insecticides. This study sought to characterize infestation patterns and analyze the contribution of programmatic practices to those patterns. Spatial infestation patterns at three time points were identified using the Getis-Ord Gi*(d) test. Logistic regression was used to assess predictors of reinfestation after pyrethroid insecticide administration. Spatial analysis showed high and low clusters of infestation at all three time points. After two rounds of spray, 178 communities persistently fell in high-infestation clusters. A time lapse between rounds of vector control greater than 6 months was associated with 1.54 (95% confidence interval = 1.07–2.23) times increased odds of reinfestation after first spray, whereas a time lapse of greater than 1 year was associated with 2.66 (95% confidence interval = 1.85–3.83) times increased odds of reinfestation after first spray, compared with localities where the time lapse was less than 180 days. The time lapse between rounds of vector control should therefore remain under 1 year. Spatial analysis can guide targeted vector control efforts by enabling tracking of reinfestation hotspots and improved targeting of resources. PMID:22403315
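    The reported associations are odds ratios, which come from a 2x2 table of reinfestation by exposure. A minimal sketch of the arithmetic with hypothetical counts (not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
       exposed (lapse > 6 months):   a reinfested, b not
       unexposed (lapse <= 6 months): c reinfested, d not"""
    return (a / b) / (c / d)

# Hypothetical counts chosen only to illustrate the computation:
print(round(odds_ratio(40, 60, 30, 70), 2))  # → 1.56
```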

  11. [Associations of the Employment Status during the First 2 Years Following Medical Rehabilitation and Long Term Occupational Trajectories: Implications for Outcome Measurement].

    PubMed

    Holstiege, J; Kaluscha, R; Jankowiak, S; Krischak, G

    2017-02-01

    Study Objectives: The aim was to investigate the predictive value of the employment status measured in the 6th, 12th, 18th, and 24th month after medical rehabilitation for long-term employment trajectories over 4 years. Methods: A retrospective study was conducted based on a 20% sample of all patients receiving inpatient rehabilitation funded by the German pension fund. Patients aged <62 years who were treated for musculoskeletal, cardiovascular, or psychosomatic disorders during the years 2002-2005 were included and followed for 4 consecutive years. The predictive value of the employment status in 4 predefined months after discharge (6th, 12th, 18th, and 24th month) for the total number of months in employment in the 4 years following rehabilitative treatment was analyzed using multiple linear regression. Per time point, separate regression analyses were conducted, including the employment status (employed vs. unemployed) at the respective point in time as an explanatory variable, besides a standard set of additional prognostic variables. Results: A total of 252,591 patients were eligible for study inclusion. The level of explained variance of the regression models increased with the point in time used to measure the employment status included as an explanatory variable. Overall, the R² measure increased by 30%, from the regression model that included the employment status in the 6th month (R² = 0.60) to the model that included the work status in the 24th month (R² = 0.78). Conclusion: The degree of accuracy in the prognosis of long-term employment biographies increases with the point in time used to measure employment in the first 2 years following rehabilitation. These findings should be taken into consideration when predefining the time points used to measure the employment status in future studies. © Georg Thieme Verlag KG Stuttgart · New York.
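    The R² comparison across models can be reproduced in outline with an ordinary least-squares fit. The data below are synthetic and only illustrate how R² is computed for a binary employment-status predictor; the coefficients and noise level are assumptions, not the study's values:

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination for an ordinary least-squares fit."""
    X1 = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
employed_24m = rng.integers(0, 2, 300).astype(float)   # employed (1) vs not (0) at month 24
months_4y = 10.0 + 25.0 * employed_24m + rng.normal(0, 6, 300)  # months employed over 4 years

print(f"R^2 = {r_squared(employed_24m, months_4y):.2f}")
```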

  12. Patterns and sources of personality development in old age.

    PubMed

    Kandler, Christian; Kornadt, Anna E; Hagemeyer, Birk; Neyer, Franz J

    2015-07-01

Despite abundant evidence that personality development continues in adulthood, little is known about the patterns and sources of personality development in old age. We thus investigated mean-level trends and individual differences in change as well as the genetic and environmental sources of rank-order continuity and change in several personality traits (neuroticism, extraversion, openness, agreeableness, conscientiousness, perceived control, and affect intensity) and well-being. In addition, we analyzed the interrelation between perceived control and change in other personality traits as well as between change in personality traits and change in well-being. We analyzed data from older adult twins, aged 64-85 years at Time 1 (N = 410; 135 males and 275 females; 134 monozygotic and 63 dizygotic twin pairs), collected at 2 different time points about 5 years apart. On average, neuroticism increased, whereas extraversion, conscientiousness, and perceived control significantly decreased over time. Change in perceived control was associated with change in neuroticism and conscientiousness, pointing to particular adaptation mechanisms specific to old age. Whereas individual differences in personality traits were fairly stable due to both genetic and environmental sources, individual differences in change were primarily due to environmental sources (beyond random error), indicating plasticity in old age. Even though the average level of well-being did not significantly change over time, individual well-being tended to decrease with strongly increasing levels of neuroticism as well as decreasing extraversion, conscientiousness, and perceived control, indicating that personality traits predict well-being but not vice versa. We discuss implications for theory on personality development across the lifespan. (c) 2015 APA, all rights reserved.

  13. Adolescents' Sedentary Behaviors in Two European Cities.

    PubMed

    Aibar Solana, Alberto; Bois, Julien E; Zaragoza, Javier; Bru, Noëlle; Paillard, Thierry; Generelo, Eduardo

    2015-01-01

The aim of this study was to determine and compare the correlates of objective sedentary behavior (SB) and nonschool self-reported SB in adolescents from 2 midsized cities, 1 in France (Tarbes) and 1 in Spain (Huesca). The stability of objective SB and nonschool self-reported SB was also assessed at different time points during 1 academic year. Starting with a total of 829 participants and after applying inclusion criteria, objective SB was assessed for 646 adolescents (mean age = 14.30 ± 0.71 years) with GT3X accelerometers for 7 days at 2 time points. Nonschool self-reported SB was measured for 781 adolescents (mean age = 14.46 ± 0.76 years) at 3 time points by means of a questionnaire. Data were analyzed using multiple regression analysis. Gender and ambient temperature emerged as the main statistically significant correlates in all objective SB models, showing higher objective SB levels in girls and lower objective SB levels when ambient temperature was higher. For nonschool self-reported SB, a gender effect was found in almost all behaviors. Whereas boys spent more time playing with video games as well as games on their mobile phones, girls spent more time studying and using their computers and mobile phones to communicate with each other. The findings showed a statistically significant city effect on study time (Huesca > Tarbes) and on video game and telephone communication time (Tarbes > Huesca). Nonschool self-reported SB patterns were different in Huesca and Tarbes. Intervention programs should be adapted to target the reduction of adolescents' SB according to different contexts.

  14. Improving pointing of Toruń 32-m radio telescope: effects of rail surface irregularities

    NASA Astrophysics Data System (ADS)

    Lew, Bartosz

    2018-03-01

Over the last few years a number of software and hardware improvements have been implemented on the 32-m Cassegrain radio telescope located near Toruń. The 19-bit angle encoders have been upgraded to 29-bit encoders in the azimuth and elevation axes. The control system has been substantially improved in order to account for a number of previously neglected astrometric effects that are relevant for milli-degree pointing. In the summer of 2015, as a result of maintenance work, the orientation of the secondary mirror was slightly altered, which worsened the pointing precision to well below the nominal telescope capabilities. In preparation for observations at the highest available frequency of 30 GHz, we use the One Centimeter Receiver Array (OCRA) to take the most accurate pointing data ever collected with the telescope, and we analyze these data in order to improve the pointing precision. We introduce a new generalized pointing model that, for the first time, accounts for rail irregularities, and we show that the telescope can achieve root-mean-square pointing accuracy better than 8″ in azimuth and 12″ in elevation. Finally, we discuss the implemented pointing improvements in light of effects that may influence their long-term stability.
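A simplified stand-in for a pointing model with rail-irregularity terms: fit a constant encoder offset plus azimuth harmonics to synthetic pointing errors by linear least squares. The harmonic count, amplitudes, and noise level below are invented; the paper's actual generalized model is richer than this sketch.

```python
import numpy as np

def design_matrix(az, n_harmonics=3):
    """Constant offset plus azimuth harmonics standing in for
    rail-surface irregularities (simplified, illustrative model)."""
    cols = [np.ones_like(az)]                   # constant encoder offset
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(k * az))
        cols.append(np.cos(k * az))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
az = rng.uniform(0, 2 * np.pi, 500)             # azimuths of pointing scans
# synthetic azimuth pointing error: offset + rail ripple + noise (arcsec)
true_err = 20 + 15 * np.sin(3 * az) + 5 * np.cos(az)
measured = true_err + rng.normal(0, 2, az.size)

A = design_matrix(az)
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
residual_rms = np.sqrt(np.mean((measured - A @ coeffs) ** 2))
print(f"post-fit RMS: {residual_rms:.1f} arcsec")
```

After the fit, the residual RMS drops to roughly the measurement noise floor, which is the qualitative behavior a pointing-model fit is after.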

  15. Line of sight pointing technology for laser communication system between aircraft

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Liu, Yunqing; Song, Yansong

    2017-12-01

In space optical communications, it is important to obtain the most efficient performance from the line-of-sight (LOS) pointing system. Errors in position (latitude, longitude, and altitude), attitude angles (pitch, yaw, and roll), and installation angles among the different coordinate systems are usually unavoidable when assembling and operating an aircraft optical communication terminal. These errors lead to pointing errors and make it difficult for the LOS system to point to its counterpart terminal to establish a communication link. The LOS pointing technology of an aircraft optical communication system has been researched using a transformation matrix between the coordinate systems of two aircraft terminals. A method of LOS calibration has been proposed to reduce the pointing error. In a flight test, a successful 144-km link was established between two aircraft. The position and attitude angles of the aircraft, provided by a double-antenna GPS/INS system, were used to calculate the pointing angles in azimuth and elevation. The size of the field of uncertainty (FOU) and the pointing accuracy were analyzed based on error theory, and the FOU was also measured using an observation camera installed next to the optical LOS. Our results show that the FOU of aircraft optical communications is 10 mrad without a filter, which provides the basis for the acquisition strategy and scanning time.
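The coordinate-transformation step can be illustrated with a minimal sketch (the WGS-84 constants are standard; the geometry and all numbers are hypothetical, and this is not the paper's actual calibration): convert both terminals' geodetic positions to ECEF, rotate the difference vector into the observer's local East-North-Up frame, and read off azimuth and elevation.

```python
import numpy as np

A_WGS84 = 6378137.0            # WGS-84 semi-major axis, m
E2 = 6.69437999014e-3          # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """lat/lon in radians, alt in metres -> ECEF coordinates (m)."""
    n = A_WGS84 / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    x = (n + alt) * np.cos(lat) * np.cos(lon)
    y = (n + alt) * np.cos(lat) * np.sin(lon)
    z = (n * (1 - E2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def los_az_el(lat1, lon1, alt1, lat2, lon2, alt2):
    """Azimuth/elevation (radians) of terminal 2 as seen from terminal 1,
    in terminal 1's local East-North-Up frame."""
    d = geodetic_to_ecef(lat2, lon2, alt2) - geodetic_to_ecef(lat1, lon1, alt1)
    sl, cl = np.sin(lat1), np.cos(lat1)
    so, co = np.sin(lon1), np.cos(lon1)
    enu = np.array([
        [-so,       co,      0.0],   # East
        [-sl * co, -sl * so, cl ],   # North
        [ cl * co,  cl * so, sl ],   # Up
    ]) @ d
    az = np.arctan2(enu[0], enu[1]) % (2 * np.pi)
    el = np.arcsin(enu[2] / np.linalg.norm(enu))
    return az, el

# hypothetical geometry: a second aircraft ~144 km due east, same altitude
lat1, lon1, alt1 = np.radians(45.0), np.radians(10.0), 8000.0
lat2, lon2, alt2 = np.radians(45.0), np.radians(11.83), 8000.0
az, el = los_az_el(lat1, lon1, alt1, lat2, lon2, alt2)
print(np.degrees(az), np.degrees(el))
```

For a target at equal altitude the elevation comes out slightly negative, since Earth curvature drops the far terminal below the local horizontal over ~144 km; attitude errors would then be composed on top of these angles.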

  16. Ground-Water Age and its Water-Management Implications, Cook Inlet Basin, Alaska

    USGS Publications Warehouse

    Glass, Roy L.

    2002-01-01

The Cook Inlet Basin encompasses 39,325 square miles in south-central Alaska. Approximately 350,000 people, more than half of Alaska's population, reside in the basin, mostly in the Anchorage area. However, rapid growth is occurring in the Matanuska-Susitna and Kenai Peninsula Boroughs to the north and south of Anchorage. Ground-water resources provide about one-third of the water used for domestic, commercial and industrial purposes in the Anchorage metropolitan area and are the sole sources of water for industries and residents outside Anchorage. In 1997, a study of the Cook Inlet Basin was begun as part of the U.S. Geological Survey's National Water-Quality Assessment Program. Samples of ground water were collected from 35 existing wells in unconsolidated glacial and alluvial aquifers during 1999 to determine the regional quality of ground water beneath about 790 mi² of developed land and to gain a better understanding of the natural and human factors that affect the water quality (Glass, 2001). Of the 35 wells sampled, 31 had water analyzed for atmospherically derived substances to determine the ground water's travel time from its point of recharge to its point of use or discharge, also known as ground-water age. Ground water moves slowly from its point of recharge to its point of use or discharge. This water starts as rain and melting snow that soak into the ground as recharge. In the Matanuska-Susitna, Anchorage, and Kenai Peninsula areas, ground water generally moves from near the mountain fronts toward Cook Inlet or the major rivers. Much of the water pumped by domestic and public-supply wells may have traveled less than 10 miles, and the trip may have taken as short a time as a few days or as long as several decades.
This ground water is vulnerable to contamination from the land surface, and many contaminants in the water would follow the same paths and have similar travel times from recharge areas to points of use as the chemical substances analyzed in this study. The effects of contamination may not be seen for several years after a contaminant is introduced into the ground-water system. Many contaminants could make the water unsuitable for drinking for many years, even in concentrations too low to detect without expensive chemical tests. The travel time of a chemically conservative substance depends primarily on the velocity of ground water through the aquifer, which in turn depends on the hydrologic characteristics of the aquifer system.
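The closing point, that travel time follows from the ground-water velocity through the aquifer, can be made concrete with Darcy's law (all parameter values below are illustrative, not measurements from this study): the average linear velocity is v = K·i/nₑ, and the travel time over a flow path of length L is t = L/v.

```python
# Travel time of a chemically conservative substance from recharge to a
# well, via Darcy's law (all parameter values are illustrative).
K = 30.0        # hydraulic conductivity, m/day (typical sand and gravel)
i = 0.005       # hydraulic gradient, dimensionless
n_e = 0.25      # effective porosity, dimensionless
L = 10_000.0    # flow-path length, m (roughly 6 miles)

v = K * i / n_e                 # average linear velocity, m/day
t_years = L / v / 365.25        # travel time, years
print(f"v = {v:.2f} m/day, travel time ~ {t_years:.0f} years")
```

With these illustrative values the trip takes a few decades, consistent in order of magnitude with the "several decades" cited for some Cook Inlet flow paths; a higher conductivity or steeper gradient would shorten it to years or days.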

  17. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

From runs of standard benchmarks or benchmark suites, it is possible neither to characterize a machine nor to predict the run time of other benchmarks that have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone, and Sieve of Eratosthenes.
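The prediction step described above amounts to a dot product between the program's dynamic operation counts and the machine's per-operation times. A minimal sketch with invented numbers (the real analyzers measure a much larger set of FORTRAN constructs):

```python
# Predicted run time = sum over operations of
#   (dynamic count in program) x (time per operation on machine).
# Both tables below are hypothetical.
machine_params = {          # seconds per operation (machine analyzer output)
    "fp_add": 5e-9,
    "fp_mul": 8e-9,
    "mem_load": 4e-9,
    "branch": 2e-9,
}
program_counts = {          # dynamic operation counts (program analyzer output)
    "fp_add": 4_000_000,
    "fp_mul": 3_000_000,
    "mem_load": 9_000_000,
    "branch": 1_000_000,
}

predicted_seconds = sum(
    program_counts[op] * machine_params[op] for op in program_counts
)
print(f"predicted run time: {predicted_seconds * 1e3:.1f} ms")
```

The same program table can be combined with a different machine table to predict the benchmark's run time on another machine without rerunning it, which is the point of the separation.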

  18. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each , the corresponding random walk takes values in the finite set of states ; notice that is a subset of , where is the unit circle. The infinitesimal generator of such a chain is denoted by . The stationary probability for such a process converges to the uniform distribution on the circle when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We disturb this process by a potential and study, for each , the perturbed stationary measures of this new process when . We disturb the system by considering a fixed potential , and we denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . By the continuous time Perron theorem one can normalize such a semigroup, obtaining another stochastic semigroup which generates a continuous time Markov chain taking values on . This new chain is called the continuous time Gibbs state associated to the potential ; see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for such a Markov chain is denoted by . We assume that the maximum of is attained at a unique point of , and from this it follows that . Thus, our main goal here is to analyze the large deviation principle for the family , when . The deviation function , which is defined on , is obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for this family of Markov chains when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov chains we analyze.

  19. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing the asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.
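The gyro-filtering idea, star measurements correcting a drifting gyro-propagated estimate, can be sketched as a scalar filter with an assumed steady-state gain. This toy omits the unit time delays, asynchrony, and multi-axis structure of the actual VIP filter; every number is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# scalar steady-state Kalman-style filter: gyro integration propagates
# the pointing angle, star measurements bound its drift (values made up)
dt = 0.1                       # s, update interval
gyro_drift = 0.02              # deg/s, constant bias error in the gyro
meas_noise = 0.05              # deg, star-position measurement noise
K = 0.1                        # assumed steady-state Kalman gain

true_angle = 0.0
est_angle = 0.0
errors = []
for _ in range(2000):
    true_angle += 0.5 * dt                       # commanded slew
    est_angle += (0.5 + gyro_drift) * dt         # gyro-propagated estimate
    z = true_angle + rng.normal(0, meas_noise)   # star-derived measurement
    est_angle += K * (z - est_angle)             # measurement update
    errors.append(est_angle - true_angle)

steady_state_error = float(np.mean(errors[500:]))
print(steady_state_error)
```

Without the star updates the drift would accumulate to about 4° over this run; with them the error settles near (1-K)·drift·dt/K, a small fraction of a degree, which is the qualitative benefit the VIP gyro filter provides.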

  20. Structural Analysis of Single-Point Mutations Given an RNA Sequence: A Case Study with RNAMute

    NASA Astrophysics Data System (ADS)

    Churkin, Alexander; Barash, Danny

    2006-12-01

    We introduce here for the first time the RNAMute package, a pattern-recognition-based utility to perform mutational analysis and detect vulnerable spots within an RNA sequence that affect structure. Mutations in these spots may lead to a structural change that directly relates to a change in functionality. Previously, the concept was tried on RNA genetic control elements called "riboswitches" and other known RNA switches, without an organized utility that analyzes all single-point mutations and can be further expanded. The RNAMute package allows a comprehensive categorization, given an RNA sequence that has functional relevance, by exploring the patterns of all single-point mutants. For illustration, we apply the RNAMute package on an RNA transcript for which individual point mutations were shown experimentally to inactivate spectinomycin resistance in Escherichia coli. Functional analysis of mutations on this case study was performed experimentally by creating a library of point mutations using PCR and screening to locate those mutations. With the availability of RNAMute, preanalysis can be performed computationally before conducting an experiment.
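The enumeration underlying such a tool is straightforward; a minimal Python sketch is below. RNAMute itself additionally folds each mutant and compares the predicted secondary structures to the wild type, which this sketch does not do; the function name and mutant-label style are ours.

```python
def single_point_mutants(seq, alphabet="ACGU"):
    """Yield every single-point mutant of an RNA sequence, labelled in
    the usual wild-type/position/mutant style (e.g. 'A5G', 1-based)."""
    for i, wt in enumerate(seq):
        for nt in alphabet:
            if nt != wt:
                yield f"{wt}{i + 1}{nt}", seq[:i] + nt + seq[i + 1:]

# a toy 4-nt sequence has 4 positions x 3 alternatives = 12 mutants
mutants = dict(single_point_mutants("GCAU"))
print(len(mutants))
```

For a sequence of length L this produces 3L mutants; the pattern-recognition step then groups them by how much their predicted structure deviates from the wild type.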

  1. Overall Survival After Whole-Brain Radiation Therapy for Intracerebral Metastases from Testicular Cancer.

    PubMed

    Rades, Dirk; Dziggel, Liesa; Veninga, Theo; Bajrovic, Amira; Schild, Steven E

    2016-09-01

To identify predictors of and develop a score for the overall survival of patients with intracerebral metastasis from testicular cancer. Whole-brain radiation therapy program, age, Karnofsky performance score (KPS), number of intracerebral metastases, number of other metastatic sites and time between testicular cancer diagnosis and radiation therapy were analyzed for their association with overall survival in eight patients. A KPS of 80-90% was significantly associated with better overall survival (p=0.006), and one or no other metastatic sites showed a trend for a better outcome (p=0.10). The following scores were assigned: KPS 60-70%=0 points, KPS 80-90%=1 point, ≥2 other metastatic sites=0 points, 0-1 other metastatic sites=1 point. Two groups, with 0 and with 1-2 points, were formed. Overall survival rates were 33% vs. 100% at 6 months and 0% vs. 100% at 12 months (p=0.006), respectively. A simple instrument enabling physicians to judge the overall survival of patients with intracerebral metastasis from testicular cancer is provided. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
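The scoring rule reported above is simple enough to express directly. The point assignments and the two prognostic groups are taken from the abstract; the function names and the >=80 threshold encoding of "KPS 80-90%" are ours.

```python
def survival_score(kps, other_metastatic_sites):
    """Prognostic score from the study: 1 point for KPS 80-90%,
    1 point for 0-1 other metastatic sites (0-2 points total)."""
    points = 0
    if kps >= 80:                      # KPS 80-90% -> 1 point
        points += 1
    if other_metastatic_sites <= 1:    # 0-1 other sites -> 1 point
        points += 1
    return points

def risk_group(points):
    """0 points vs. 1-2 points, the two groups compared in the study."""
    return "0 points" if points == 0 else "1-2 points"

print(survival_score(80, 0), risk_group(survival_score(80, 0)))
```

In the study the "0 points" group had 33% survival at 6 months and 0% at 12 months, against 100% at both time points for the "1-2 points" group.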

  2. Influence of the distance between target surface and focal point on the expansion dynamics of a laser-induced silicon plasma with spatial confinement

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Chen, Anmin; Wang, Xiaowei; Wang, Ying; Sui, Laizhi; Ke, Da; Li, Suyu; Jiang, Yuanfei; Jin, Mingxing

    2018-05-01

    Expansion dynamics of a laser-induced plasma plume, with spatial confinement, for various distances between the target surface and focal point were studied by the fast photography technique. A silicon wafer was ablated to induce the plasma with a Nd:YAG laser in an atmospheric environment. The expansion dynamics of the plasma plume depended on the distance between the target surface and focal point. In addition, spatially confined time-resolved images showed the different structures of the plasma plumes at different distances between the target surface and focal point. By analyzing the plume images, the optimal distance for emission enhancement was found to be approximately 6 mm away from the geometrical focus using a 10 cm focal length lens. This optimized distance resulted in the strongest compression ratio of the plasma plume by the reflected shock wave. Furthermore, the duration of the interaction between the reflected shock wave and the plasma plume was also prolonged.

  3. Image formation of volume holographic microscopy using point spread functions

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Oh, Se Baek; Kou, Shan Shan; Lee, Justin; Sheppard, Colin J. R.; Barbastathis, George

    2010-04-01

    We present a theoretical formulation to quantify the imaging properties of volume holographic microscopy (VHM). Volume holograms are formed by exposure of a photosensitive recording material to the interference of two mutually coherent optical fields. Recently, it has been shown that a volume holographic pupil has spatial and spectral sectioning capability for fluorescent samples. Here, we analyze the point spread function (PSF) to assess the imaging behavior of the VHM with a point source and detector. The coherent PSF of the VHM is derived, and the results are compared with those from conventional microscopy, and confocal microscopy with point and slit apertures. According to our analysis, the PSF of the VHM can be controlled in the lateral direction by adjusting the parameters of the VH. Compared with confocal microscopes, the performance of the VHM is comparable or even potentially better, and the VHM is also able to achieve real-time and three-dimensional (3D) imaging due to its multiplexing ability.

  4. Unified Ultrasonic/Eddy-Current Data Acquisition

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1993-01-01

    Imaging station for detecting cracks and flaws in solid materials developed combining both ultrasonic C-scan and eddy-current imaging. Incorporation of both techniques into one system eliminates duplication of computers and of mechanical scanners; unifies acquisition, processing, and storage of data; reduces setup time for repetitious ultrasonic and eddy-current scans; and increases efficiency of system. Same mechanical scanner used to maneuver either ultrasonic or eddy-current probe over specimen and acquire point-by-point data. For ultrasonic scanning, probe linked to ultrasonic pulser/receiver circuit card, while, for eddy-current imaging, probe linked to impedance-analyzer circuit card. Both ultrasonic and eddy-current imaging subsystems share same desktop-computer controller, containing dedicated plug-in circuit boards for each.

  5. At the Tipping Point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, H. S.

There comes a time in every field of science when things suddenly change. While it might not be immediately apparent that things are different, a tipping point has occurred. Biology is now at such a point. The reason is the introduction of high-throughput genomics-based technologies. I am not talking about the consequences of the sequencing of the human genome (and every other genome within reach). The change is due to new technologies that generate an enormous amount of data about the molecular composition of cells. These include proteomics, transcriptional profiling by sequencing, and the ability to globally measure microRNAs and post-translational modifications of proteins. These mountains of digital data can be mapped to a common frame of reference: the organism's genome. With the new high-throughput technologies, we can generate tens of thousands of data points from each sample. Data are now measured in terabytes and the time necessary to analyze data can now require years. Obviously, we can't wait to interpret the data fully before the next experiment. In fact, we might never be able to even look at all of it, much less understand it. This volume of data requires sophisticated computational and statistical methods for its analysis and is forcing biologists to approach data interpretation as a collaborative venture.

  6. Distributed optical fiber vibration sensor based on Sagnac interference in conjunction with OTDR.

    PubMed

    Pan, Chao; Liu, Xiaorui; Zhu, Hui; Shan, Xuekang; Sun, Xiaohan

    2017-08-21

A real-time distributed optical fiber vibration sensing prototype based on Sagnac interference in conjunction with optical time domain reflectometry (OTDR) was developed. The sensing mechanism for single- and multi-point vibrations along the sensing fiber was analyzed theoretically and demonstrated experimentally. The experimental results show excellent agreement with the theoretical models. It is verified that a single-point vibration induces a significantly abrupt and monotonous power change at the corresponding position of the OTDR trace. As for multi-point vibrations, the detection of each subsequent vibration is influenced by all previous ones. However, if the distance between two adjacent vibrations is larger than half of the input optical pulse width, the abrupt power changes they induce are separate and still monotonous. A time-shifting differential module was developed and used to convert vibration-induced power changes into pulses. Consequently, vibrations can be located accurately by measuring the peak or valley positions of the vibration-induced pulses. It is demonstrated that when the width and peak power of the input optical pulse are set to 1 μs and 35 mW, respectively, the position error is less than ±0.5 m over a sensing range of more than 16 km, with a spatial resolution of ~110 m.
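The time-shifting differential idea, turning a monotonic vibration-induced power step into a localizable pulse, can be sketched on a synthetic trace. The shift length, trace shape, and step position below are invented, not the prototype's parameters.

```python
import numpy as np

def time_shift_differential(trace, shift):
    """Subtract a time-shifted copy of the OTDR trace from itself, so a
    monotonic vibration-induced power step becomes a localized pulse."""
    d = np.zeros_like(trace)
    d[shift:] = trace[shift:] - trace[:-shift]
    return d

# synthetic OTDR power trace: flat baseline with an abrupt step at
# sample 400 standing in for a single-point vibration
trace = np.ones(1000)
trace[400:] += 0.5
pulse = time_shift_differential(trace, shift=20)
located = int(np.argmax(np.abs(pulse)))   # pulse onset marks the vibration
print(located)
```

The differential trace is zero everywhere except over the shift window following the step, so the pulse position directly gives the vibration location in samples (and hence in metres, once scaled by the fiber's group velocity).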

  7. Longitudinal Study of Daily Hassles in Adolescents in Arab Muslim Immigrant Families

    PubMed Central

    Templin, Thomas N.; Hough, Edythe S.

    2013-01-01

This study investigated which daily hassles (i.e., Parent, School, Peer, Neighborhood, and Resource) were perceived by Arab Muslim immigrant adolescents as most stressful over a three-year time period and according to the child's gender and the mother's immigration status (i.e., refugee or non-refugee). Data were collected at three time points during adolescence and analyzed using doubly multivariate analysis of covariance (MANCOVA) with linear and quadratic trends. School and Parent hassles were greater than other hassles at every time point. Main effects of time, immigration status, and father's employment, but not child's gender, were statistically significant. School and Parent hassles increased while Peer and Resource hassles decreased over time. Adolescents with refugee mothers reported greater School and Neighborhood and fewer Parent hassles than those with non-refugee mothers. Adolescents with unemployed fathers reported significantly more School and Neighborhood hassles. The study findings identify two at-risk subgroups, adolescents with refugee mothers and/or adolescents with unemployed fathers, and pinpoint problematic daily hassles. Additional research is needed to explore vicarious trauma effects as a potential underlying reason for the pattern of daily hassles noted in adolescents with refugee mothers. PMID:23430463

  8. Low-dose caffeine administered in chewing gum does not enhance cycling to exhaustion.

    PubMed

    Ryan, Edward J; Kim, Chul-Ho; Muller, Matthew D; Bellar, David M; Barkley, Jacob E; Bliss, Matthew V; Jankowski-Wilkinson, Andrea; Russell, Morgan; Otterstetter, Ronald; Macander, Daniela; Glickman, Ellen L; Kamimori, Gary H

    2012-03-01

The purpose of the current investigation was to examine the effect of low-dose caffeine (CAF) administered in chewing gum at 3 different time points during submaximal cycling exercise to exhaustion. Eight college-aged (26 ± 4 years), physically active (45.5 ± 5.7 ml·kg(-1)·min(-1)) volunteers participated in 4 experimental trials. Two pieces of caffeinated chewing gum (100 mg per piece, total quantity of 200 mg) were administered in a double-blind manner at 1 of 3 time points (-35, -5, and +15 minutes) with placebo at the other 2 points and at all 3 points in the control trial. The participants cycled at 85% of maximal oxygen consumption until volitional fatigue and time to exhaustion (TTE) were recorded in minutes. Venous blood samples were obtained at -40, -10, and immediately postexercise and analyzed for serum-free fatty acid and plasma catecholamine concentrations. Oxygen consumption, respiratory exchange ratio, heart rate, glucose, lactate, ratings of perceived exertion, and perceived leg pain measures were obtained at baseline and every 10 minutes during cycling. The results showed that there were no significant differences between the trials for any of the parameters measured including TTE. These findings suggest that low-dose CAF administered in chewing gum has no effect on TTE during cycling in recreational athletes and is, therefore, not recommended.

  9. Property Analysis of the Real-Time Uncalibrated Phase Delay Product Generated by Regional Reference Stations and Its Influence on Precise Point Positioning Ambiguity Resolution

    PubMed Central

    Zhang, Yong; Wang, Qing; Jiang, Xinyuan

    2017-01-01

The real-time estimation of the wide-lane and narrow-lane Uncalibrated Phase Delay (UPD) of satellites is realized by real-time data received from regional reference station networks. The properties of the real-time UPD product and its influence on real-time precise point positioning ambiguity resolution (RTPPP-AR) are experimentally analyzed according to real-time data obtained from the regional Continuously Operating Reference Stations (CORS) networks located in Tianjin, Shanghai, Hong Kong, etc. The results show that the real-time wide-lane and narrow-lane UPD products differ significantly from each other in time-domain characteristics: the wide-lane UPDs have daily stability, with a change rate of less than 0.1 cycle/day, while the narrow-lane UPDs have short-term stability, with significant change within one day. The UPD products generated by different regional networks have obvious spatial characteristics, thus significantly influencing RTPPP-AR: the adoption of real-time UPD products employing the sparse stations in the regional network for estimation is favorable for improving the regional RTPPP-AR up to 99%; the real-time UPD products of different regional networks only slightly influence PPP-AR positioning accuracy. After ambiguities are successfully fixed, the real-time dynamic RTPPP-AR positioning accuracy is better than 3 cm in the plane and 8 cm in the upward direction. PMID:28534844

  10. Property Analysis of the Real-Time Uncalibrated Phase Delay Product Generated by Regional Reference Stations and Its Influence on Precise Point Positioning Ambiguity Resolution.

    PubMed

    Zhang, Yong; Wang, Qing; Jiang, Xinyuan

    2017-05-19

The real-time estimation of the wide-lane and narrow-lane Uncalibrated Phase Delay (UPD) of satellites is realized by real-time data received from regional reference station networks. The properties of the real-time UPD product and its influence on real-time precise point positioning ambiguity resolution (RTPPP-AR) are experimentally analyzed according to real-time data obtained from the regional Continuously Operating Reference Stations (CORS) networks located in Tianjin, Shanghai, Hong Kong, etc. The results show that the real-time wide-lane and narrow-lane UPD products differ significantly from each other in time-domain characteristics: the wide-lane UPDs have daily stability, with a change rate of less than 0.1 cycle/day, while the narrow-lane UPDs have short-term stability, with significant change within one day. The UPD products generated by different regional networks have obvious spatial characteristics, thus significantly influencing RTPPP-AR: the adoption of real-time UPD products employing the sparse stations in the regional network for estimation is favorable for improving the regional RTPPP-AR up to 99%; the real-time UPD products of different regional networks only slightly influence PPP-AR positioning accuracy. After ambiguities are successfully fixed, the real-time dynamic RTPPP-AR positioning accuracy is better than 3 cm in the plane and 8 cm in the upward direction.

  11. [Point-of-care-testing--the intensive care laboratory].

    PubMed

    Müller, M M; Hackl, W; Griesmacher, A

    1999-01-01

After the successful centralization of laboratory analyses for more than 30 years, advances in biosensors, microprocessors, measurement of undiluted whole blood, and the miniaturization of laboratory analyzers are nowadays leading more and more to a re-decentralization of laboratory medicine. Point-of-care testing (POCT), which is defined as any laboratory test performed outside central or decentralized laboratories, is becoming more and more popular. The theoretical advantages of POCT are faster turn-around times (TAT), more rapid medical decisions, avoidance of sample identification and sample transport problems, and the need for only small specimen volumes. These advantages are frequently mentioned, but are not associated with a clear clinical benefit. The disadvantages of POCT, such as incorrect handling and/or maintenance of the analyzers by untrained clinical staff, inadequate or even absent calibrations and/or quality controls, lack of cost-effectiveness because of an increased number of analyzers and more expensive reagents, insufficient documentation, and difficult comparability of the obtained POCT results with routine laboratory results, are strongly evident. In the authors' opinion, the decision to establish POCT should only be made in close co-operation between physicians and laboratorians, in order to ensure the necessity and high quality of the analyses. Taking the local situation into consideration (24-h central laboratory, etc.), the spectrum of parameters measured by means of POCT should be rigorously restricted to the vital functions. Such analytes should be: hemoglobin or hematocrit, activated whole blood clotting time, blood gases, sodium, potassium, ionized calcium, glucose, creatinine, ammonia and lactate.

  12. Analysis of One-Way Laser Ranging Data to LRO, Time Transfer and Clock Characterization

    NASA Technical Reports Server (NTRS)

Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.

    2016-01-01

We processed and analyzed one-way laser ranging data from International Laser Ranging Service ground stations to NASA's Lunar Reconnaissance Orbiter (LRO), obtained from June 13, 2009 until September 30, 2014. We pair and analyze the one-way range observables from station laser fire and spacecraft laser arrival times by using nominal LRO orbit models based on the GRAIL gravity field. We apply corrections for instrument range walk, as well as for atmospheric and relativistic effects. In total we derived a tracking data volume of approximately 3000 hours featuring 64 million Full Rate and 1.5 million Normal Point observations. From a statistical analysis of the dataset we evaluate the experiment and the ground station performance. We observe a laser ranging measurement precision of 12.3 centimeters for the Full Rate data, which surpasses the LOLA (Lunar Orbiter Laser Altimeter) timestamp precision of 15 centimeters. Averaging to Normal Point data further improves the measurement precision to 5.6 centimeters. We characterized the LRO clock with fits over the mission time and estimated the rate to 6.9 x 10^-8, the aging to 1.6 x 10^-12 per day, and the change of aging to 2.3 x 10^-14 per day squared over all mission phases. The fits also provide referencing of onboard time to the TDB (Barycentric Dynamical Time) time scale at a precision of 166 nanoseconds over two mission phases and 256 nanoseconds over all mission phases, representing ground-to-space time transfer. Furthermore, we measure ground station clock differences from the fits as well as from simultaneous passes, which we use for ground-to-ground time transfer from common-view observations. We observed relative offsets ranging from 33 to 560 nanoseconds and relative rates ranging from 2 x 10^-13 to 6 x 10^-12 between the ground station clocks during selected mission phases. We study the results from the different methods and discuss their applicability for time transfer.
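The clock characterization described above amounts to fitting a low-order polynomial (offset, rate, aging) to measured clock offsets. A minimal sketch with synthetic data; the coefficients and noise level below are illustrative, not the LRO values.

```python
import numpy as np

# Sketch: characterizing a clock by fitting a quadratic to the offset
# between onboard time and a reference time scale. The coefficients and
# noise level are synthetic illustrations, not the actual LRO values.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 100.0, 400)           # days since epoch
a0, a1, a2 = 1.0e-3, 5.0e-6, 2.0e-9        # offset (s), rate (s/day), aging/2
offset = a0 + a1 * t + a2 * t**2 + rng.normal(0.0, 2.0e-7, t.size)

# Least-squares quadratic fit; polyfit returns the highest degree first.
c2, c1, c0 = np.polyfit(t, offset, deg=2)
rate_est = c1          # s/day
aging_est = 2.0 * c2   # s/day^2

# Post-fit residuals indicate the achievable time-transfer precision.
residual_rms = (offset - np.polyval([c2, c1, c0], t)).std()
```

In the study above, the equivalent residuals of such fits are what bound the ground-to-space time transfer at the hundreds-of-nanoseconds level.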

  13. Effect of alendronate on post-traumatic osteoarthritis induced by anterior cruciate ligament rupture in mice.

    PubMed

    Khorasani, Mohammad S; Diko, Sindi; Hsia, Allison W; Anderson, Matthew J; Genetos, Damian C; Haudenschild, Dominik R; Christiansen, Blaine A

    2015-02-16

    Previous studies in animal models of osteoarthritis suggest that alendronate (ALN) has antiresorptive and chondroprotective effects, and can reduce osteophyte formation. However, these studies used non-physiologic injury methods, and did not investigate early time points during which bone is rapidly remodeled prior to cartilage degeneration. The current study utilized a non-invasive model of knee injury in mice to investigate the effect of ALN treatment on subchondral bone changes, articular cartilage degeneration, and osteophyte formation following injury. Non-invasive knee injury via tibial compression overload or sham injury was performed on a total of 90 mice. Mice were treated with twice weekly subcutaneous injections of low-dose ALN (40 μg/kg/dose), high-dose ALN (1,000 μg/kg/dose), or vehicle, starting immediately after injury until sacrifice at 7, 14 or 56 days. Trabecular bone of the femoral epiphysis, subchondral cortical bone, and osteophyte volume were quantified using micro-computed tomography (μCT). Whole-joint histology was performed at all time points to analyze articular cartilage and joint degeneration. Blood was collected at sacrifice, and serum was analyzed for biomarkers of bone formation and resorption. μCT analysis revealed significant loss of trabecular bone from the femoral epiphysis 7 and 14 days post-injury, which was effectively prevented by high-dose ALN treatment. High-dose ALN treatment was also able to reduce subchondral bone thickening 56 days post-injury, and was able to partially preserve articular cartilage 14 days post-injury. However, ALN treatment was not able to reduce osteophyte formation at 56 days post-injury, nor was it able to prevent articular cartilage and joint degeneration at this time point. Analysis of serum biomarkers revealed an increase in bone resorption at 7 and 14 days post-injury, with no change in bone formation at any time points. 
High-dose ALN treatment was able to prevent early trabecular bone loss and cartilage degeneration following non-invasive knee injury, but was not able to mitigate long-term joint degeneration. These data contribute to understanding the effect of bisphosphonates on the development of osteoarthritis, and may support the use of anti-resorptive drugs to prevent joint degeneration following injury, although further investigation is warranted.

  14. Power law for the duration of recession and prosperity in Latin American countries

    NASA Astrophysics Data System (ADS)

    Redelico, Francisco O.; Proto, Araceli N.; Ausloos, Marcel

    2008-11-01

Ormerod and Mounfield [P. Ormerod, C. Mounfield, Power law distribution of duration and magnitude of recessions in capitalist economies: Breakdown of scaling, Physica A 293 (2001) 573] and Ausloos et al. [M. Ausloos, J. Miśkiewicz, M. Sanglier, The durations of recession and prosperity: Does their distribution follow a power or an exponential law? Physica A 339 (2004) 548] have independently analyzed the duration of recessions for developed countries through the evolution of the GDP in different time windows. It was found that a power law governs the duration distribution. We have analyzed data collected from 19 Latin American countries in order to observe whether such results are valid for developing countries as well. The case of prosperity years is also discussed. We observe that the power law of recession time intervals, see Ref. [1], is valid for Latin American countries as well. An interesting point is thus discovered: the same scaling time (ca. 1 year) is found in the case of recessions for the three data sets, and this could represent a universal feature. Other time-scale parameters differ significantly from each other.
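The power-law fit underlying such duration analyses can be sketched with a standard maximum-likelihood (Hill) estimator; the durations below are synthetic, not the Latin American GDP data.

```python
import numpy as np

# Sketch: maximum-likelihood (Hill) estimate of a power-law exponent for
# a sample of durations. The sample is synthetic; the actual GDP-derived
# recession durations are not reproduced here.
def sample_power_law(alpha, xmin, n, rng):
    """Inverse-CDF sampling from p(x) ~ x^(-alpha), x >= xmin."""
    u = rng.random(n)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def fit_power_law(x, xmin):
    """MLE for continuous power laws: alpha = 1 + n / sum(ln(x_i/xmin))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

rng = np.random.default_rng(7)
durations = sample_power_law(alpha=2.0, xmin=1.0, n=5000, rng=rng)
alpha_hat = fit_power_law(durations, xmin=1.0)
```

Comparing the likelihood of such a fit with that of an exponential fit is the natural way to decide between the two candidate laws discussed above.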

  15. Photobiomodulation in the Prevention of Tooth Sensitivity Caused by In-Office Dental Bleaching. A Randomized Placebo Preliminary Study.

    PubMed

    Calheiros, Andrea Paiva Corsetti; Moreira, Maria Stella; Gonçalves, Flávia; Aranha, Ana Cecília Correa; Cunha, Sandra Ribeiro; Steiner-Oliveira, Carolina; Eduardo, Carlos de Paula; Ramalho, Karen Müller

    2017-08-01

To analyze the effect of photobiomodulation in the prevention of tooth sensitivity after in-office dental bleaching. Tooth sensitivity is a common clinical consequence of dental bleaching, and therapies for the prevention of sensitivity have been investigated in the literature. This study was developed as a randomized, blinded, placebo-controlled clinical trial. Fifty patients were selected and randomly divided into five groups (n = 10): (1) control, (2) placebo, (3) laser before bleaching, (4) laser after bleaching, and (5) laser before and after bleaching. Irradiation was performed perpendicularly, in contact, on each tooth for 10 sec per point at two points: the first point was positioned in the middle of the tooth crown and the second in the periapical region. Photobiomodulation was applied using the following parameters: 780 nm, 40 mW, 10 J/cm², 0.4 J per point. Pain was analyzed before, immediately after, and on each of the seven subsequent days after bleaching. Patients were instructed to report pain using the scale: 0 = no tooth sensitivity, 1 = gentle sensitivity, 2 = moderate sensitivity, 3 = severe sensitivity. There were no statistical differences between groups at any time (p > 0.05). More studies, with other parameters and different methods of tooth sensitivity analysis, should be performed to complement these results. Within the limitations of the present study, the photobiomodulation parameters tested were not efficient in preventing tooth sensitivity after in-office bleaching.

  16. Global and Local Approaches Describing Critical Phenomena on the Developing and Developed Financial Markets

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz

We define and confront global and local methods to analyze financial crash-like events on financial markets from the critical-phenomena point of view. These methods are based, respectively, on the analysis of log-periodicity and on the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The log-periodicity analysis is made in a daily time horizon for the whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), connected with the largest developing financial market in Europe. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the power-law-divergent price model usually discussed in log-periodic scenarios for developed markets. Predictions coming from the log-periodicity scenario are verified for all main crashes that took place in WIG history. It is argued that crash predictions within the log-periodicity model depend strongly on the amount of data taken to make a fit and are therefore likely to contain large inaccuracies. Next, this global analysis is confronted with the local fractal description. To do so, we calculate the so-called local (time-dependent) Hurst exponent H_loc for the WIG time series and for the main US stock market indices, DJIA and S&P 500. We point out the dependence between the behavior of the local fractal properties of financial time series and the appearance of crashes on financial markets. We conclude that the local fractal method seems to work better than the global approach, both for developing and developed markets. The very recent situation on the market, particularly the Fed intervention in September 2007 and the situation immediately afterwards, is also analyzed within the fractal approach. It is shown in this context how the financial market evolves through different phases of fractional Brownian motion. Finally, the current situation on the American market is analyzed in fractal language, to show how far we still are from the end of the recession and from the beginning of a new boom on the US financial market and other leading world stocks.
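A local (time-dependent) Hurst exponent of the kind used above can be sketched with the rescaled-range (R/S) method in a sliding window. The window sizes and R/S scales below are illustrative choices, and the input is random noise rather than actual index data.

```python
import numpy as np

# Sketch: time-dependent ("local") Hurst exponent via the rescaled-range
# (R/S) method in a sliding window. Window sizes and R/S scales are
# illustrative choices; the input stands in for real index returns.
def rs_hurst(x):
    """R/S estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, dtype=float)
    log_rs, log_n = [], []
    for n in (8, 16, 32, 64):
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            z = np.cumsum(seg - seg.mean())    # cumulative deviation
            r = z.max() - z.min()              # range
            s = seg.std()                      # scale
            if s > 0:
                rs_vals.append(r / s)
        log_rs.append(np.log(np.mean(rs_vals)))
        log_n.append(np.log(n))
    slope, _ = np.polyfit(log_n, log_rs, 1)    # H = log-log slope
    return slope

def local_hurst(x, window=256, step=32):
    """Hurst exponent in sliding windows -> (window centers, H_loc)."""
    centers, h = [], []
    for start in range(0, len(x) - window + 1, step):
        centers.append(start + window // 2)
        h.append(rs_hurst(x[start:start + window]))
    return np.array(centers), np.array(h)

rng = np.random.default_rng(1)
returns = rng.normal(size=4096)     # uncorrelated noise: H near 0.5
_, h_loc = local_hurst(returns)
```

For uncorrelated noise H_loc stays near 0.5 (the small-sample R/S estimator is known to be slightly biased upward); in the analysis above, systematic excursions of H_loc away from 0.5 are what signal the approach to a crash.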

  17. Skeletal muscle proteins: a new approach to delimitate the time since death.

    PubMed

    Foditsch, Elena Esra; Saenger, Alexandra Maria; Monticelli, Fabio Carlo

    2016-03-01

Skeletal muscle tissue is proposed as a forensic model tissue with strong potential, as it is easily accessible and its structure and function in the living state are well known. Despite this strong potential, skeletal muscle degradation studies are rare. The aim of this study was to test whether a skeletal muscle-based protein analysis is applicable for delimiting the time since death. Under standard conditions, two pigs were stored either at 22 °C for 5 days or at 4 °C for 21 days. Their M. biceps femoris was sampled periodically for postmortem analyses of ten skeletal muscle proteins. All analyzed proteins can serve as markers for a delimitation of the time since death. Desmin, nebulin, titin, and SERCA 1 displayed distinct protein patterns at certain points in time. The other five proteins, α-actinin, calsequestrin-1, laminin, troponin T-C, and SERCA 2, showed no degradation patterns within the analyzed postmortem time frame. With regard to specific skeletal muscle proteins, the results showed short-term stability for only a minority of the analyzed proteins, while the majority displayed the characteristics of long-term markers. Due to these specific patterns and the possibility of determining definite constraints from the presence, absence, or pattern alterations of single proteins, the feasibility of porcine skeletal muscle as a forensic model tissue is outlined and its potential underlined, especially with respect to later postmortem phases, which so far lack feasible methods for delimiting the time since death.

  18. Predict or classify: The deceptive role of time-locking in brain signal classification

    NASA Astrophysics Data System (ADS)

    Rusconi, Marco; Valleriani, Angelo

    2016-06-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predict the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
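The mechanism described in this abstract can be reproduced in a few lines: trials from a symmetric random walk with absorbing boundaries carry no choice-predictive information at their start, yet once time-locked to the decision they are classifiable above chance well before it. A sketch, where the boundary, lags, and trial count are arbitrary illustrative choices:

```python
import numpy as np

# Sketch: why time-locking alone can produce above-chance "prediction".
# Each trial is a symmetric random walk that ends when it first hits
# +theta or -theta (the "decision"); by construction it carries no
# choice-predictive information at its start. Boundary, lags, and trial
# count are arbitrary illustrative choices.
rng = np.random.default_rng(3)
theta, n_trials = 15.0, 1000

trials, choices = [], []
for _ in range(n_trials):
    x, path = 0.0, [0.0]
    while abs(x) < theta:
        x += rng.normal()
        path.append(x)
    trials.append(np.array(path))
    choices.append(1 if x > 0 else 0)

def accuracy_at_lag(lag):
    """Classify the choice from the sign of the time-locked signal `lag`
    steps before the decision (a minimal stand-in for a classifier)."""
    preds, truth = [], []
    for path, c in zip(trials, choices):
        if len(path) > lag:
            preds.append(1 if path[-1 - lag] > 0 else 0)
            truth.append(c)
    return float(np.mean(np.array(preds) == np.array(truth)))

acc_near = accuracy_at_lag(3)     # just before the decision
acc_far = accuracy_at_lag(100)    # long before it, still above chance
```

The above-chance accuracy at the long lag reflects only the time-locking and the walk's relaxation time (of order theta squared), not genuine early predictive information, which is precisely the paper's point.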

  19. Real time analysis of voiced sounds

    NASA Technical Reports Server (NTRS)

    Hong, J. P. (Inventor)

    1976-01-01

A power spectrum analysis of the harmonic content of a voiced sound signal is conducted in real time by phase-lock-loop tracking of the fundamental frequency (f_0) of the signal and successive harmonics (h_1 through h_n) of the fundamental frequency. The analysis also includes measuring the quadrature power and phase of each frequency tracked, differentiating the power measurements of the harmonics in adjacent pairs, and analyzing successive differentials to determine peak power points in the power spectrum for display or use in analysis of voiced sound, such as for voice recognition.
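Offline, the same measurement (power at the fundamental and its successive harmonics, with adjacent-pair differencing) can be sketched with an FFT instead of the patented phase-lock-loop tracker. The signal below is synthetic, with a hypothetical 150 Hz fundamental and decaying harmonic amplitudes.

```python
import numpy as np

# Sketch: offline FFT stand-in for the harmonic power measurement.
# Synthetic signal with a hypothetical 150 Hz fundamental; the real
# invention tracks these frequencies sample by sample with a PLL.
fs = 8000.0                                   # sampling rate, Hz
t = np.arange(0, 0.5, 1.0 / fs)               # 0.5 s frame
f0 = 150.0                                    # hypothetical fundamental
amps = [1.0, 0.6, 0.3, 0.15]                  # decaying harmonic amplitudes
signal = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
             for k, a in enumerate(amps))

spec = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)

def harmonic_power(h):
    """Power at harmonic h (h = 1 is the fundamental f_0)."""
    return spec[np.argmin(np.abs(freqs - h * f0))]

powers = np.array([harmonic_power(h) for h in range(1, 5)])
diffs = np.diff(powers)   # adjacent-pair differentials, as in the patent
```

The FFT version needs a buffered frame; the phase-lock-loop of the invention is what makes the measurement possible in real time.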

  20. Classifying individuals based on a densely captured sequence of vital signs: An example using repeated blood pressure measurements during hemodialysis treatment.

    PubMed

    Goldstein, Benjamin A; Chang, Tara I; Winkelmayer, Wolfgang C

    2015-10-01

    Electronic Health Records (EHRs) present the opportunity to observe serial measurements on patients. While potentially informative, analyzing these data can be challenging. In this work we present a means to classify individuals based on a series of measurements collected by an EHR. Using patients undergoing hemodialysis, we categorized people based on their intradialytic blood pressure. Our primary criteria were that the classifications were time dependent and independent of other subjects. We fit a curve of intradialytic blood pressure using regression splines and then calculated first and second derivatives to come up with four mutually exclusive classifications at different time points. We show that these classifications relate to near term risk of cardiac events and are moderately stable over a succeeding two-week period. This work has general application for analyzing dense EHR data. Copyright © 2015 Elsevier Inc. All rights reserved.
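The classification idea (fit a smooth curve to the serial measurements, then read off first and second derivatives at a time point) can be sketched as follows; the readings, smoothing factor, and class labels are illustrative, not the authors' exact scheme.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch: fit a smooth curve to serial intradialytic blood-pressure
# readings, then classify from the signs of the first and second
# derivatives. Readings, smoothing factor, and class labels are
# illustrative, not the authors' exact scheme.
t = np.array([0, 30, 60, 90, 120, 150, 180, 210, 240], dtype=float)  # min
sbp = np.array([150, 146, 141, 135, 128, 124, 122, 121, 123], dtype=float)

spline = UnivariateSpline(t, sbp, k=3, s=10.0)   # cubic smoothing spline
d1 = spline.derivative(1)   # first derivative: rising vs. falling
d2 = spline.derivative(2)   # second derivative: accelerating or not

def classify(time_point):
    """One of four mutually exclusive classes from derivative signs."""
    if d1(time_point) < 0:
        return ("falling, decelerating" if d2(time_point) > 0
                else "falling, accelerating")
    return ("rising, accelerating" if d2(time_point) > 0
            else "rising, decelerating")

label_mid = classify(60.0)
```

Because each patient's curve is fit separately, the classification is time dependent and independent of other subjects, the two criteria named in the abstract.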

  1. Long term economic relationships from cointegration maps

    NASA Astrophysics Data System (ADS)

    Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor

    2007-07-01

    We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.

  2. Metal Vapor Arcing Risk Assessment Tool

    NASA Technical Reports Server (NTRS)

    Hill, Monika C.; Leidecker, Henning W.

    2010-01-01

    The Tin Whisker Metal Vapor Arcing Risk Assessment Tool has been designed to evaluate the risk of metal vapor arcing and to help facilitate a decision toward a researched risk disposition. Users can evaluate a system without having to open up the hardware. This process allows for investigating components at risk rather than spending time and money analyzing every component. The tool points to a risk level and provides direction for appropriate action and documentation.

  3. Early Diagnosis, Treatment, and Care of Cancer Patients

    DTIC Science & Technology

    2010-09-01

    14 and Day 56 time points. In these studies, we examined the distortion product otoacoustic emissions (DPOAE) as an indicator of cochlear function...5-FU did not cause any changes in cochlear function as indicated by DPOAE test results. Our studies on EPO-mediated protection demonstrated that...temperature in the dark for 15 mins. Cells were then diluted with 200 µL of Annexin binding buffer and analyzed immediately by flow cytometry

  4. On Electron-Scale Whistler Turbulence in the Solar Wind

    NASA Technical Reports Server (NTRS)

    Narita, Y.; Nakamura, R.; Baumjohann, W.; Glassmeier, K.-H.; Motschmann, U.; Giles, B.; Magnes, W.; Fischer, D.; Torbert, R. B.; Russell, C. T.

    2016-01-01

    For the first time, the dispersion relation for turbulence magnetic field fluctuations in the solar wind is determined directly on small scales of the order of the electron inertial length, using four-point magnetometer observations from the Magnetospheric Multiscale mission. The data are analyzed using the high-resolution adaptive wave telescope technique. Small-scale solar wind turbulence is primarily composed of highly obliquely propagating waves, with dispersion consistent with that of the whistler mode.

  5. Advanced Research Workshop on Fundamentals of Electronic Nanosystems Held in St. Petersburg, Russia on 25 June-1 July 2005

    DTIC Science & Technology

    2005-01-01

    qubits . Suppression of Superconductivity in Granular Metals Igor Beloborodov Argonne National Laboratory, USA We investigate the suppression of...Russia Various strategies for extending coherence times of superconducting qubits have been proposed. We analyze the effect of fluctuations on a... qubit operated at an optimal point in the free- induction decay and the spin-echo-like experiments. Motivated by the recent experimental findings we

  6. Unidirectional reflectionless light propagation at exceptional points

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Shen, Yuecheng; Min, Changjun; Fan, Shanhui; Veronis, Georgios

    2017-05-01

    In this paper, we provide a comprehensive review of unidirectional reflectionless light propagation in photonic devices at exceptional points (EPs). EPs, which are branch point singularities of the spectrum, associated with the coalescence of both eigenvalues and corresponding eigenstates, lead to interesting phenomena, such as level repulsion and crossing, bifurcation, chaos, and phase transitions in open quantum systems described by non-Hermitian Hamiltonians. Recently, it was shown that judiciously designed photonic synthetic matters could mimic the complex non-Hermitian Hamiltonians in quantum mechanics and realize unidirectional reflection at optical EPs. Unidirectional reflectionlessness is of great interest for optical invisibility. Achieving unidirectional reflectionless light propagation could also be potentially important for developing optical devices, such as optical network analyzers. Here, we discuss unidirectional reflectionlessness at EPs in both parity-time (PT)-symmetric and non-PT-symmetric optical systems. We also provide an outlook on possible future directions in this field.

  7. Investigations of magnesium, histamine and immunoglobulins dynamics in acute urticaria.

    PubMed

    Mureşan, D; Oană, A; Nicolae, I; Alecu, M; Moşescu, L; Benea, V; Flueraş, M

    1990-01-01

In 42 urticaria patients, magnesium, histamine, and IgE were assayed. Variations in magnesium, IgE, and histamine were followed over the course of urticaria, during the acute phase and clinical remission. We observed variations in magnesium, histamine, and IgE values depending on disease evolution and the applied therapeutic scheme. At disease onset, histamine presented values 3.5 times higher than normal; the value decreased along a curve tending toward normal values during clinical remission. At disease onset, magnesium presented values below the inferior limit of normal, 0.5 mmol/L on average; the value increased toward the normal limit during clinical remission. IgE followed a curve similar to that of histamine, presenting values of 1,250 U/L at onset that, under the influence of medication, decreased to within normal limits (800 U/L) during clinical remission. Analyzing the variations of these biochemical parameters, the authors emphasize magnesium substitution treatment in urticaria.

  8. Steelmaking process control using remote ultraviolet atomic emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Arnold, Samuel

Steelmaking in North America is a multi-billion dollar industry that has faced tremendous economic and environmental pressure over the past few decades. Fierce competition has driven steel manufacturers to improve process efficiency through the development of real-time sensors to reduce operating costs. In particular, much attention has been focused on end-point detection through furnace off-gas analysis. Typically, off-gas analysis is done with extractive sampling and gas analyzers such as non-dispersive infrared (NDIR) sensors. Passive emission spectroscopy offers a more attractive approach to end-point detection, as the equipment can be set up remotely. Using high-resolution UV spectroscopy and applying sophisticated emission-line detection software, a correlation was observed between metal emissions and the process end point during field trials. This correlation indicates a relationship between the metal emissions and the status of a steelmaking melt which can be used to improve overall process efficiency.

  9. Fuel feasibility study for Red River Army Depot boiler plant. Final report. [Economic breakeven points for conversion to fossil fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ables, L.D.

This paper establishes economic breakeven points for the conversion to various fossil fuels as a function of time and pollution constraints for the main boiler plant at Red River Army Depot in Texarkana, Texas. In carrying out the objectives of this paper, the author develops what he considers to be the basic conversion costs and operating costs for each fossil fuel under investigation. These costs are analyzed by the present-worth comparison method, and the minimum cost difference between the present fuel and the proposed fuel that would justify the conversion to the proposed fuel is calculated. These calculated breakeven points allow a fast and easy method of determining the feasibility of a fuel by merely knowing the relative price difference between the fuels under consideration. (GRA)
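The present-worth comparison used in the report can be sketched numerically; all costs, rates, and horizons below are hypothetical placeholders, not the report's figures.

```python
# Sketch: present-worth comparison of staying with the present fuel versus
# converting to a proposed fuel. All costs, rates, and horizons are
# hypothetical placeholders, not the report's figures.
def present_worth(annual_cost, years, rate, initial_cost=0.0):
    """Discounted total cost of an option over `years` at interest `rate`."""
    return initial_cost + sum(
        annual_cost / (1.0 + rate) ** n for n in range(1, years + 1))

YEARS, RATE = 20, 0.07
CONVERSION_COST = 250_000.0       # one-time cost of converting the plant
PRESENT_ANNUAL = 180_000.0        # annual cost with the present fuel

def breakeven_annual_cost():
    """Largest annual cost of the proposed fuel at which conversion still
    breaks even in present-worth terms."""
    annuity = sum(1.0 / (1.0 + RATE) ** n for n in range(1, YEARS + 1))
    pw_stay = present_worth(PRESENT_ANNUAL, YEARS, RATE)
    return (pw_stay - CONVERSION_COST) / annuity

breakeven = breakeven_annual_cost()
min_price_difference = PRESENT_ANNUAL - breakeven
```

Knowing `min_price_difference` lets feasibility be judged from the relative price gap between fuels alone, which is exactly the shortcut the report describes.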

  10. Influences of rolling method on deformation force in cold roll-beating forming process

    NASA Astrophysics Data System (ADS)

    Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan

    2018-03-01

The gear rack was selected as the research object to study the influence of the rolling method on the deformation force in the cold roll-beating forming process. By means of finite element simulation of cold roll-beating, the variation of the radial and tangential deformation forces was analyzed under different rolling methods, both for the complete forming of the rack and for a single roll during the steady state. The results show that with up-beating and down-beating the average radial single-point forces are similar, while the average tangential single-point forces differ considerably: the tangential force during up-beating is large and opposite in direction to that during down-beating. With up-beating, the deformation force loads quickly and unloads slowly; with down-beating, the deformation force loads slowly and unloads quickly.

  11. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering, and other fields. However, it is challenging to efficiently store, query, and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capabilities. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
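The map/reduce decomposition such a framework relies on can be sketched in miniature: partition points into spatial tiles in the map step, gather per tile in the reduce step. A pure-Python stand-in for the Hadoop pipeline; the tile size and sample points are hypothetical.

```python
from collections import defaultdict

# Sketch: map/reduce decomposition for distributing LiDAR points over a
# grid of tiles (pure-Python stand-in for the Hadoop framework; tile size
# and the sample points are hypothetical).
TILE = 100.0   # tile edge length, in the point cloud's coordinate units

def map_point(x, y, z):
    """Map step: emit (tile_key, point)."""
    return (int(x // TILE), int(y // TILE)), (x, y, z)

def reduce_points(pairs):
    """Reduce step: gather points per tile (here simply counting them;
    a real reducer would run a PCL algorithm on each tile's points)."""
    tiles = defaultdict(list)
    for key, pt in pairs:
        tiles[key].append(pt)
    return {key: len(pts) for key, pts in tiles.items()}

points = [(12.5, 8.0, 301.2), (95.0, 99.9, 298.7), (150.1, 20.0, 305.0)]
counts = reduce_points(map_point(*p) for p in points)
# counts == {(0, 0): 2, (1, 0): 1}
```

In the actual framework, HDFS holds the tiles and MapReduce schedules one reducer per tile, so the per-tile PCL processing runs in parallel across the cluster.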

  12. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable, comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite: methods 2 and 4 showed significant intervention effects, whereas the other two methods did not, and for method 4 the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
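The four methods differ mainly in how the intervention variable is coded from the design. A sketch for a hypothetical three-arm, four-period stepped wedge; the coding below is an interpretation of the methods as described, not the authors' code.

```python
import pandas as pd

# Sketch: intervention codings for the four stepped wedge analysis
# methods, for a hypothetical design with 3 arms and 4 periods in which
# arm k starts the intervention at period k (period 0 is all-control).
rows = [{"arm": a, "period": p} for a in (1, 2, 3) for p in range(4)]
d = pd.DataFrame(rows)

# Method 1: binary indicator, intervention vs. control measurements.
d["m1_intervention"] = (d["period"] >= d["arm"]).astype(int)

# Method 2: time-independent categorical variable, comparing arms.
d["m2_arm"] = d["arm"].astype("category")

# Method 3: time-dependent categorical variable, the number of
# intervention measurements received so far (0 = still under control).
d["m3_exposure"] = (d["period"] - d["arm"] + 1).clip(lower=0)

# Method 4 would analyze changes between subsequent measurements, e.g.
# a within-arm differenced outcome: d.groupby("arm")["y"].diff()
```

Each coding would then enter a linear mixed model with a random effect per cluster; as the abstract notes, whether calendar time (period) is also adjusted for can make an apparent intervention effect appear or disappear.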

  13. Seasonal variations in body composition, maximal oxygen uptake, and gas exchange threshold in cross-country skiers.

    PubMed

    Polat, Metin; Korkmaz Eryılmaz, Selcen; Aydoğan, Sami

    2018-01-01

In order to ensure that athletes achieve their highest performance levels during competitive seasons, monitoring their long-term performance data is crucial for understanding the impact of ongoing training programs and evaluating training strategies. The present study was thus designed to investigate the variations in body composition, maximal oxygen uptake (VO2max), and gas exchange threshold values of cross-country skiers across training phases throughout a season. In total, 15 athletes who participate in international cross-country ski competitions voluntarily took part in this study. The athletes underwent incremental treadmill running tests at three different time points over a period of one year: the first measurements were obtained in July, during the first preparation period; the second in October, during the second preparation period; and the third in February, during the competition period. Body weight, body mass index (BMI), and body fat (%), as well as VO2max and the gas exchange threshold, measured using the V-slope method during the incremental running tests, were assessed at all three time points. The collected data were analyzed using the SPSS 20 software package. Significant differences between the measurements were assessed using Friedman's two-way analysis of variance with a post hoc option. The athletes' body weights and BMI values at the third time point were significantly lower than at the second measurement (p < 0.001). Moreover, the incremental running test time was significantly longer at the third measurement than at both the first (p < 0.05) and the second (p < 0.01) measurements. Similarly, the running speed during the test was significantly higher at the third measurement time point than at the first (p < 0.05). Body fat (%), time to reach the gas exchange threshold, running speed at the gas exchange threshold, VO2max, oxygen consumption at the gas exchange threshold (VO2GET), maximal heart rate (HRmax), and heart rate at the gas exchange threshold (HRGET) did not differ significantly between the measurement time points (p > 0.05). VO2max and gas exchange threshold values recorded during the third measurements, whose timing coincided with the competitive season, did not change significantly, but incremental running test time and running speed increased significantly while body weight and BMI decreased significantly. These results indicate that the cross-country skiers developed a tolerance for high-intensity exercise and reached their highest level of athletic performance during the competitive season.
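The repeated-measures comparison above, Friedman's test across the three time points, can be sketched with SciPy; the values below are synthetic stand-ins for the athletes' measurements, constructed so that the February values are lowest.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Sketch: Friedman's test across the three measurement time points, as in
# the analysis above. Values are synthetic stand-ins for 15 athletes,
# constructed so that the February (competition-period) values are lowest.
rng = np.random.default_rng(5)
n = 15
july = rng.normal(70.0, 5.0, n)              # e.g. body weight, kg
october = july + rng.uniform(0.5, 1.5, n)    # slightly higher in October
february = july - rng.uniform(1.0, 3.0, n)   # clearly lower in February

stat, p = friedmanchisquare(july, october, february)
significant = p < 0.05   # would be followed by post hoc comparisons
```

Friedman's test ranks each athlete's three values internally, so it needs no normality assumption, which suits small repeated-measures samples like the 15 skiers here.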

  14. Attempt to generalize fractional-order electric elements to complex-order ones

    NASA Astrophysics Data System (ADS)

    Si, Gangquan; Diao, Lijie; Zhu, Jianwei; Lei, Yuhang; Zhang, Yanbin

    2017-06-01

The complex derivative D^(α ± jβ), with α, β ∈ R+, is a generalization of the concept of the integer derivative, for which α = 1, β = 0. Fractional-order electric elements and circuits are becoming more and more attractive. In this paper, the concept of complex-order electric elements is proposed for the first time, and the complex-order elements are modeled and analyzed. Some interesting phenomena are found: the real part of the order affects the phase of the output signal, and the imaginary part affects the amplitude, for both the complex-order capacitor and the complex-order memristor. More interestingly, the complex-order capacitor performs well when fitting electrochemical impedance spectra. The complex-order memristor is also analyzed. The area inside its hysteresis loops increases with the increasing imaginary part of the order and decreases with the increasing real part. Finally, some complex cases of complex-order memristor hysteresis loops, whose loops have touching points away from the origin of the coordinate system, are analyzed.

  15. Single-Cell-Based Analysis Highlights a Surge in Cell-to-Cell Molecular Variability Preceding Irreversible Commitment in a Differentiation Process

    PubMed Central

    Boullu, Loïs; Morin, Valérie; Vallin, Elodie; Guillemin, Anissa; Papili Gao, Nan; Cosette, Jérémie; Arnaud, Ophélie; Kupiec, Jean-Jacques; Espinasse, Thibault

    2016-01-01

    In some recent studies, a view emerged that stochastic dynamics governing the switching of cells from one differentiation state to another could be characterized by a peak in gene expression variability at the point of fate commitment. We have tested this hypothesis at the single-cell level by analyzing primary chicken erythroid progenitors through their differentiation process and measuring the expression of selected genes at six sequential time-points after induction of differentiation. In contrast to population-based expression data, single-cell gene expression data revealed a high cell-to-cell variability, which was masked by averaging. We were able to show that the correlation network was a very dynamical entity and that a subgroup of genes tends to follow the predictions from the dynamical network biomarker (DNB) theory. In addition, we also identified a small group of functionally related genes encoding proteins involved in sterol synthesis that could act as the initial drivers of the differentiation. In order to assess quantitatively the cell-to-cell variability in gene expression and its evolution in time, we used Shannon entropy as a measure of the heterogeneity. Entropy values showed a significant increase in the first 8 h of the differentiation process, reaching a peak between 8 and 24 h, before decreasing to significantly lower values. Moreover, we observed that this point of maximum entropy precedes two paramount key points: an irreversible commitment to differentiation between 24 and 48 h followed by a significant increase in cell size variability at 48 h. In conclusion, when analyzed at the single-cell level, the differentiation process looks very different from its classical population average view. New observables (like entropy) can be computed, the behavior of which is fully compatible with the idea that differentiation is not a “simple” program that all cells execute identically but results from the dynamical behavior of the underlying molecular network. PMID:28027290
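    The entropy measure used as a heterogeneity readout here is simple to compute. Below is a minimal sketch, with synthetic data rather than the study's measurements, of Shannon entropy over binned single-cell expression values:

```python
import numpy as np

def shannon_entropy(values, bins=10):
    """Shannon entropy (in bits) of a binned distribution of expression
    values; higher entropy means more cell-to-cell heterogeneity."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]          # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
uniform_state = rng.normal(5.0, 0.1, size=200)   # cells in one tight state
mixed_states = rng.uniform(0.0, 10.0, size=200)  # cells spread over states

low = shannon_entropy(uniform_state)
high = shannon_entropy(mixed_states)
# The spread-out population yields the higher entropy value, which is
# why a surge in entropy flags increased cell-to-cell variability.
```

    Applied per gene and per time-point, such a measure would trace the rise-and-fall entropy profile the study reports, peaking before irreversible commitment.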

  16. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MATLAB integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. The experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fit of the dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. The tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate the software results. Results: TPS-calculated depth dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. The depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at these points confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for clinical use was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was reduced to a few hours.
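    The 3%/3 mm gamma criterion used above can be illustrated with a simplified 1-D global gamma index. This is a hedged sketch of the general technique, not the validated clinical tool described in the record:

```python
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma index. dose_tol is a fraction of the
    reference dose maximum, dist_tol is in mm; gamma <= 1 at a point
    means it passes the combined 3%/3 mm criterion."""
    d_max = np.max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dd = (eval_dose - rd) / (dose_tol * d_max)  # normalized dose diff
        dx = (eval_pos - rp) / dist_tol             # normalized distance
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(gammas)

pos = np.linspace(0.0, 100.0, 101)            # positions in mm
ref = np.exp(-((pos - 50.0) / 20.0) ** 2)     # toy reference profile
meas = 1.01 * ref                             # measured: 1% high overall
g = gamma_index_1d(pos, ref, pos, meas)
# A uniform 1% dose deviation is well within 3%/3 mm, so every point
# passes (all gamma values stay below 1).
```

    Production implementations add 2-D/3-D search, interpolation between grid points, and dose thresholds, but the pass/fail logic reduces to this combined normalized distance.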

  17. Single-Cell-Based Analysis Highlights a Surge in Cell-to-Cell Molecular Variability Preceding Irreversible Commitment in a Differentiation Process.

    PubMed

    Richard, Angélique; Boullu, Loïs; Herbach, Ulysse; Bonnafoux, Arnaud; Morin, Valérie; Vallin, Elodie; Guillemin, Anissa; Papili Gao, Nan; Gunawan, Rudiyanto; Cosette, Jérémie; Arnaud, Ophélie; Kupiec, Jean-Jacques; Espinasse, Thibault; Gonin-Giraud, Sandrine; Gandrillon, Olivier

    2016-12-01

    In some recent studies, a view emerged that stochastic dynamics governing the switching of cells from one differentiation state to another could be characterized by a peak in gene expression variability at the point of fate commitment. We have tested this hypothesis at the single-cell level by analyzing primary chicken erythroid progenitors through their differentiation process and measuring the expression of selected genes at six sequential time-points after induction of differentiation. In contrast to population-based expression data, single-cell gene expression data revealed a high cell-to-cell variability, which was masked by averaging. We were able to show that the correlation network was a very dynamical entity and that a subgroup of genes tends to follow the predictions from the dynamical network biomarker (DNB) theory. In addition, we also identified a small group of functionally related genes encoding proteins involved in sterol synthesis that could act as the initial drivers of the differentiation. In order to assess quantitatively the cell-to-cell variability in gene expression and its evolution in time, we used Shannon entropy as a measure of the heterogeneity. Entropy values showed a significant increase in the first 8 h of the differentiation process, reaching a peak between 8 and 24 h, before decreasing to significantly lower values. Moreover, we observed that this point of maximum entropy precedes two paramount key points: an irreversible commitment to differentiation between 24 and 48 h followed by a significant increase in cell size variability at 48 h. In conclusion, when analyzed at the single-cell level, the differentiation process looks very different from its classical population average view. New observables (like entropy) can be computed, the behavior of which is fully compatible with the idea that differentiation is not a "simple" program that all cells execute identically but results from the dynamical behavior of the underlying molecular network.

  18. Meta-Analysis of Cell-based CaRdiac stUdiEs (ACCRUE) in patients with acute myocardial infarction based on individual patient data.

    PubMed

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Lemarchand, Patricia; Lunde, Ketil; Tendera, Michal; Bartunek, Jozef; Marban, Eduardo; Assmus, Birgit; Henry, Timothy D; Traverse, Jay H; Moyé, Lemuel A; Sürder, Daniel; Corti, Roberto; Huikuri, Heikki; Miettinen, Johanna; Wöhrle, Jochen; Obradovic, Slobodan; Roncalli, Jérome; Malliaras, Konstantinos; Pokushalov, Evgeny; Romanov, Alexander; Kastrup, Jens; Bergmann, Martin W; Atsma, Douwe E; Diederichsen, Axel; Edes, Istvan; Benedek, Imre; Benedek, Theodora; Pejkov, Hristo; Nyolczas, Noemi; Pavo, Noemi; Bergler-Klein, Jutta; Pavo, Imre J; Sylven, Christer; Berti, Sergio; Navarese, Eliano P; Maurer, Gerald

    2015-04-10

    The meta-Analysis of Cell-based CaRdiac stUdiEs (ACCRUE) is the first prospectively declared collaborative multinational database, including individual data of patients with ischemic heart disease treated with cell therapy. We analyzed the safety and efficacy of intracoronary cell therapy after acute myocardial infarction (AMI), including individual patient data from 12 randomized trials (ASTAMI, Aalst, BOOST, BONAMI, CADUCEUS, FINCELL, REGENT, REPAIR-AMI, SCAMI, SWISS-AMI, TIME, LATE-TIME; n=1252). The primary end point was freedom from combined major adverse cardiac and cerebrovascular events (including all-cause death, AMI recurrence, stroke, and target vessel revascularization). The secondary end point was freedom from hard clinical end points (death, AMI recurrence, or stroke), assessed with random-effects meta-analyses and Cox regressions for interactions. Secondary efficacy end points included changes in end-diastolic volume, end-systolic volume, and ejection fraction, analyzed with random-effects meta-analyses and ANCOVA. We reported weighted mean differences between cell therapy and control groups. No effect of cell therapy on major adverse cardiac and cerebrovascular events (14.0% versus 16.3%; hazard ratio, 0.86; 95% confidence interval, 0.63-1.18) or death (1.4% versus 2.1%) or death/AMI recurrence/stroke (2.9% versus 4.7%) was identified in comparison with controls. No changes in ejection fraction (mean difference: 0.96%; 95% confidence interval, -0.2 to 2.1), end-diastolic volume, or end-systolic volume were observed compared with controls. These results were not influenced by anterior AMI location, reduced baseline ejection fraction, or the use of MRI for assessing left ventricular parameters. This meta-analysis of individual patient data from randomized trials in patients with recent AMI revealed that intracoronary cell therapy provided no benefit, in terms of clinical events or changes in left ventricular function. URL: http://www.clinicaltrials.gov. 
Unique identifier: NCT01098591. © 2015 American Heart Association, Inc.

  19. Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.

    PubMed

    Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L

    2008-06-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools, multiscale entropy and multiscale time irreversibility, are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
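    The core of multiscale entropy is a coarse-graining step followed by sample entropy at each scale. Below is a hedged sketch of that procedure on synthetic white noise, not on cardiac data; the parameter choices are illustrative:

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-graining step of multiscale entropy: average consecutive
    non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, tol=0.2):
    """Sample entropy SampEn(m, tol): negative log of the conditional
    probability that sequences matching for m points (within `tol`)
    also match for m + 1 points."""
    x = np.asarray(x, dtype=float)

    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(1)
noise = rng.normal(size=1000)
tol = 0.2 * noise.std()   # tolerance fixed from the original series
e_scale1 = sample_entropy(coarse_grain(noise, 1), tol=tol)
e_scale5 = sample_entropy(coarse_grain(noise, 5), tol=tol)
# For uncorrelated white noise, entropy falls as the scale grows; signals
# with structure across scales (like healthy heartbeat series) do not
# show this collapse, which is what multiscale entropy probes.
```

    Keeping the tolerance fixed from the original series, as in the standard formulation, is what makes the scale dependence informative.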

  20. Multiscale Analysis of Heart Rate Dynamics: Entropy and Time Irreversibility Measures

    PubMed Central

    Peng, Chung-Kang; Goldberger, Ary L.

    2016-01-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and nonequilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools, multiscale entropy and multiscale time irreversibility, are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs. PMID:18172763

  1. Analysis of the Bearing Capacity of a Pile Foundation Using CAPWAP Software for Pile Driving Analyzer (PDA) Testing at the Fasfel Development Project Parlimbungan Ketek Sikara-Kara, Mandailing Natal District (North Sumatera)

    NASA Astrophysics Data System (ADS)

    Oberlyn Simanjuntak, Johan; Suita, Diana

    2017-12-01

    A pile foundation is a type of deep foundation that transfers structural loads to hard soil strata with high bearing capacity located deep below the surface. To determine the bearing capacity of the piles and at the same time verify the calendering records, Pile Driving Analyzer (PDA) tests were performed on 8 of the 84 driven pile sections (about 10% of the total), and the results were analyzed with CAPWAP software. The highest measured bearing capacity was Ru = 177 tons and the lowest 111 tons, both greater than the design load of 60.9 tons. The PDA results therefore confirm that the bearing capacity is safe with respect to the planned loading.

  2. Effects of climatic variables on weight loss: a global analysis.

    PubMed

    Ustulin, Morena; Keum, Changwon; Woo, Junghoon; Woo, Jeong-Taek; Rhee, Sang Youl

    2017-01-20

    Several studies have analyzed the effects of weather on factors associated with weight loss. In this study, we directly analyzed the effect of weather on intentional weight loss using global-scale data provided by smartphone applications. Through Weather Underground API and the Noom Coach application, we extracted information on weather and body weight for each user located in each of several geographic areas on all login days. We identified meteorological information (pressure, precipitation, wind speed, dew point, and temperature) and self-monitored body weight data simultaneously. A linear mixed-effects model was performed analyzing 3274 subjects. Subjects in North America had higher initial BMIs than those of subjects in Eastern Asia. During the study period, most subjects who used the smartphone application experienced weight loss in a significant way (80.39%, p-value < 0.001). Subjects who infrequently recorded information about dinner had smaller variations than those of other subjects (β_(freq. users dinner × time) = 0.007, p-value < 0.001). Colder temperature, lower dew point, and higher values for wind speed and precipitation were significantly associated with weight loss. In conclusion, we found a direct and independent impact of meteorological conditions on intentional weight loss efforts on a global scale (not only on a local level).

  3. Influence of Lumbar Lordosis on the Outcome of Decompression Surgery for Lumbar Canal Stenosis.

    PubMed

    Chang, Han Soo

    2018-01-01

    Although sagittal spinal balance plays an important role in spinal deformity surgery, its role in decompression surgery for lumbar canal stenosis is not well understood. To investigate the hypothesis that sagittal spinal balance also plays a role in decompression surgery for lumbar canal stenosis, a prospective cohort study analyzing the correlation between preoperative lumbar lordosis and outcome was performed. A cohort of 85 consecutive patients who underwent decompression for lumbar canal stenosis during the period 2007-2011 was analyzed. Standing lumbar x-rays and 36-item short form health survey questionnaires were obtained before and up to 2 years after surgery. Correlations between lumbar lordosis and 2 parameters of the 36-item short form health survey (average physical score and bodily pain score) were statistically analyzed using linear mixed effects models. There was a significant correlation between preoperative lumbar lordosis and the 2 outcome parameters at postoperative, 6-month, 1-year, and 2-year time points. A 10° increase of lumbar lordosis was associated with a 5-point improvement in average physical scores. This correlation was not present in preoperative scores. This study showed that preoperative lumbar lordosis significantly influences the outcome of decompression surgery on lumbar canal stenosis. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications☆

    PubMed Central

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-01-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. 
PMID:24829517
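    The fixed-point linearization at the heart of the method can be illustrated on a scalar analogue. The sketch below uses a toy magnetization curve, not the paper's transformer model; the nonlinear term is moved to the right-hand side so that each iteration involves only the constant fixed-point reluctivity nu_fp:

```python
def H(B):
    """Toy nonlinear magnetization curve H(B); the coefficients are
    illustrative, not taken from the paper."""
    return 100.0 * B + 50.0 * B ** 3

H0 = 1000.0      # prescribed field strength to match
nu_fp = 600.0    # fixed-point reluctivity: chosen once, kept constant

# Fixed-point iteration: each step "solves" only a linear relation with
# the constant nu_fp; the nonlinearity enters through the residual.
B = 0.0
for _ in range(500):
    B += (H0 - H(B)) / nu_fp
# B now satisfies H(B) = H0 to machine precision.
```

    In the 3-D field problem the same splitting keeps the system matrix time-independent, which is what decouples the (continuous or discrete) harmonics within each nonlinear iteration step.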

  5. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications.

    PubMed

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-05-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.

  6. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are only set on cells; second, we check the total slide focus quality. From a first analysis we detected that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.

  7. Solving LR Conflicts Through Context Aware Scanning

    NASA Astrophysics Data System (ADS)

    Leon, C. Rodriguez; Forte, L. Garcia

    2011-09-01

    This paper presents a new algorithm to compute the exact list of tokens expected by any LR syntax analyzer at any point of the scanning process. The lexer can, at any time, compute the exact list of valid tokens and return only tokens in this set. In the case that more than one matching token is in the valid set, the lexer can resort to a nested LR parser to disambiguate. Allowing nested LR parsing requires some slight modifications when building the LR parsing tables. We also show how LR parsers can parse conflictive and inherently ambiguous languages using a combination of nested parsing and context-aware scanning. These expanded lexical analyzers can be generated from high-level specifications.
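    The core idea can be sketched in a few lines: the lexer queries the parser's ACTION table for the terminals defined in the current state and filters its candidate matches against that set. The table below is a hypothetical grammar fragment for illustration, not the paper's generated tables:

```python
# Toy LR ACTION table: state -> {terminal: action}.
ACTION = {
    0: {"NUM": "shift 2", "ID": "shift 3"},
    1: {"PLUS": "shift 4", "$end": "accept"},
    2: {"PLUS": "reduce 1", "$end": "reduce 1"},
}

def expected_tokens(state):
    """Exact set of terminals the LR parser can accept in `state`."""
    return set(ACTION.get(state, {}))

def scan(candidates, state):
    """Context-aware scanning: keep only candidate token types valid in
    the current parse state. If more than one candidate survives, a
    nested LR parse would be used to disambiguate."""
    return [t for t in candidates if t in expected_tokens(state)]
```

    For example, in state 1 a lexeme that could match either PLUS or NUM is resolved to PLUS, since NUM is not in the expected set; in state 0 both NUM and ID survive, which is the case the nested parser handles.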

  8. Spatially localized phosphorous metabolism of skeletal muscle in Duchenne muscular dystrophy patients: 24-month follow-up.

    PubMed

    Hooijmans, M T; Doorenweerd, N; Baligand, C; Verschuuren, J J G M; Ronen, I; Niks, E H; Webb, A G; Kan, H E

    2017-01-01

    To assess the changes in phosphodiester (PDE)-levels, detected by 31P magnetic resonance spectroscopy (MRS), over 24 months to determine the potential of PDE as a marker for muscle tissue changes in Duchenne Muscular Dystrophy (DMD) patients. Spatially resolved phosphorous datasets were acquired in the right lower leg of 18 DMD patients (range: 5-15.4 years) and 12 age-matched healthy controls (range: 5-14 years) at three time-points (baseline, 12-months, and 24-months) using a 7T MR-System (Philips Achieva). 3-point Dixon images were acquired at 3T (Philips Ingenia) to determine muscle fat fraction. Analyses were done for six muscles that represent different stages of muscle wasting. Differences between groups and time-points were assessed with non-parametric tests with correction for multiple comparisons. Coefficients of variation (CV) were determined for PDE in four healthy adult volunteers in high and low signal-to-noise ratio (SNR) datasets. PDE-levels were significantly higher (two-fold) in DMD patients compared to controls in all analyzed muscles at almost every time point and did not change over the study period. Fat fraction was significantly elevated in all muscles at all time points compared to healthy controls, and increased significantly over time, except in the tibialis posterior muscle. The mean within-subject CV for PDE-levels was 4.3% in datasets with high SNR (>10:1) and 5.7% in datasets with low SNR. The stable two-fold increase in PDE-levels found in DMD patients in muscles with different levels of muscle wasting over the 2-year period, including DMD patients as young as 5.5 years old, suggests that PDE-levels may increase very rapidly early in the disease process and remain elevated thereafter. The low CV values in high and low SNR datasets show that PDE-levels can be accurately and reproducibly quantified in all conditions. Our data confirm the great potential of PDE as a marker for muscle tissue changes in DMD patients.

  9. Spatially localized phosphorous metabolism of skeletal muscle in Duchenne muscular dystrophy patients: 24–month follow-up

    PubMed Central

    Doorenweerd, N.; Baligand, C.; Verschuuren, J. J. G. M.; Ronen, I.; Niks, E. H.; Webb, A. G.; Kan, H. E.

    2017-01-01

    Objectives To assess the changes in phosphodiester (PDE)-levels, detected by 31P magnetic resonance spectroscopy (MRS), over 24 months to determine the potential of PDE as a marker for muscle tissue changes in Duchenne Muscular Dystrophy (DMD) patients. Methods Spatially resolved phosphorous datasets were acquired in the right lower leg of 18 DMD patients (range: 5–15.4 years) and 12 age-matched healthy controls (range: 5–14 years) at three time-points (baseline, 12-months, and 24-months) using a 7T MR-System (Philips Achieva). 3-point Dixon images were acquired at 3T (Philips Ingenia) to determine muscle fat fraction. Analyses were done for six muscles that represent different stages of muscle wasting. Differences between groups and time-points were assessed with non-parametric tests with correction for multiple comparisons. Coefficients of variation (CV) were determined for PDE in four healthy adult volunteers in high and low signal-to-noise ratio (SNR) datasets. Results PDE-levels were significantly higher (two-fold) in DMD patients compared to controls in all analyzed muscles at almost every time point and did not change over the study period. Fat fraction was significantly elevated in all muscles at all time points compared to healthy controls, and increased significantly over time, except in the tibialis posterior muscle. The mean within-subject CV for PDE-levels was 4.3% in datasets with high SNR (>10:1) and 5.7% in datasets with low SNR. Discussion and conclusion The stable two-fold increase in PDE-levels found in DMD patients in muscles with different levels of muscle wasting over the 2-year period, including DMD patients as young as 5.5 years old, suggests that PDE-levels may increase very rapidly early in the disease process and remain elevated thereafter. The low CV values in high and low SNR datasets show that PDE-levels can be accurately and reproducibly quantified in all conditions. Our data confirm the great potential of PDE as a marker for muscle tissue changes in DMD patients. PMID:28763477

  10. Time Crystal Behavior of Excited Eigenstates

    NASA Astrophysics Data System (ADS)

    Syrwid, Andrzej; Zakrzewski, Jakub; Sacha, Krzysztof

    2017-12-01

    In analogy to spontaneous breaking of continuous space translation symmetry in the process of space crystal formation, it was proposed that spontaneous breaking of continuous time translation symmetry could lead to time crystal formation. In other words, a time-independent system prepared in the energy ground state is expected to reveal periodic motion under infinitely weak perturbation. In the case of the system proposed originally by Wilczek, spontaneous breaking of time translation symmetry cannot be observed if one starts with the ground state. We point out that the symmetry breaking can take place if the system is prepared in an excited eigenstate. The latter can be realized experimentally in ultracold atomic gases. We simulate the process of the spontaneous symmetry breaking due to measurements of particle positions and analyze the lifetime of the resulting symmetry broken state.

  11. Time Crystal Behavior of Excited Eigenstates.

    PubMed

    Syrwid, Andrzej; Zakrzewski, Jakub; Sacha, Krzysztof

    2017-12-22

    In analogy to spontaneous breaking of continuous space translation symmetry in the process of space crystal formation, it was proposed that spontaneous breaking of continuous time translation symmetry could lead to time crystal formation. In other words, a time-independent system prepared in the energy ground state is expected to reveal periodic motion under infinitely weak perturbation. In the case of the system proposed originally by Wilczek, spontaneous breaking of time translation symmetry cannot be observed if one starts with the ground state. We point out that the symmetry breaking can take place if the system is prepared in an excited eigenstate. The latter can be realized experimentally in ultracold atomic gases. We simulate the process of the spontaneous symmetry breaking due to measurements of particle positions and analyze the lifetime of the resulting symmetry broken state.

  12. HIV disclosure by men who have sex with men to immediate family over time.

    PubMed

    Serovich, Julianne M; Esbensen, Anna J; Mason, Tina L

    2005-08-01

    Previous researchers have comprehensively documented rates of HIV disclosure to family at discrete time periods yet none have taken a dynamic approach to this phenomenon. The purpose of this study was to address the trajectory of HIV serostatus disclosure to family members. Time to disclosure was analyzed from data provided by 135 HIV-positive men who have sex with men. Results indicated that mothers remain the family member to be told in greatest proportion, yet the proportion of family members told changes over time in a different manner than presented in earlier research. Additionally, the rate at which family members are told at all time points generally does not significantly differ from each other when accounting for characteristics of participants and family members.

  13. HIV Disclosure by Men Who have Sex with Men to Immediate Family over Time

    PubMed Central

    SEROVICH, JULIANNE M.; ESBENSEN, ANNA J.; MASON, TINA L.

    2006-01-01

    Previous researchers have comprehensively documented rates of HIV disclosure to family at discrete time periods yet none have taken a dynamic approach to this phenomenon. The purpose of this study was to address the trajectory of HIV serostatus disclosure to family members. Time to disclosure was analyzed from data provided by 135 HIV-positive men who have sex with men. Results indicated that mothers remain the family member to be told in greatest proportion, yet the proportion of family members told changes over time in a different manner than presented in earlier research. Additionally, the rate at which family members are told at all time points generally does not significantly differ from each other when accounting for characteristics of participants and family members. PMID:16124845

  14. Identification of different macrophage subpopulations with distinct activities in a mouse model of oxygen-induced retinopathy

    PubMed Central

    Zhu, Yanji; Zhang, Ling; Lu, Qing; Gao, Yushuo; Cai, Yujuan; Sui, Ailing; Su, Ting; Shen, Xi; Xie, Bing

    2017-01-01

    The aim of the present study was to characterize the phenotypic shift, quantity and role changes in different subgroups of retinal macrophages in a mouse model of oxygen-induced retinopathy (OIR). The mRNA expression levels of macrophage M1 and M2 subgroup marker genes and polarization-associated genes were analyzed by RT-qPCR. The number of M1 and M2 macrophages in our mouse model of OIR was analyzed by flow cytometry at different time points during the progression of OIR. Immunofluorescence whole mount staining of the retinas of mice with OIR was performed at different time points to examine the influx of macrophages, as well as the morphological characteristics and roles of M1 and M2 macrophages. An increased number of macrophages was recruited during the progression of angiogenesis in the retinas of mice with OIR due to the pro-inflammatory microenvironment containing high levels of cell adhesion and leukocyte transendothelial migration molecules. RT-qPCR and flow cytometric analysis at different time points revealed a decline in the number of M1 cells from a significantly high level at post-natal day (P)13 to a relatively normal level at P21, as well as an increase in the number of M2 cells from P13 to P21 in the mice with OIR, implicating a shift of macrophage polarization towards the M2 subtype. Immunofluorescence staining suggested that the M1 cells interacted with endothelial tip cells at the vascular front, while M2 cells embraced the emerging vessels and bridged the neighboring vessel sprouts. Thus, our data indicate that macrophages play an active role in OIR by contributing to the different steps of neovascularization. Our findings indicate that tissue macrophages may be considered as a potential target for the anti-angiogenic therapy of ocular neovascularization disease. PMID:28627621

  15. The interrelation between victimization and bullying inside young offender institutions.

    PubMed

    Häufle, Jenny; Wolter, Daniel

    2015-01-01

    Bullying and victimization are serious problems within prisons. Young Offender Institutions (YOIs), in particular, suffer from high rates of inmate-on-inmate violence. More recent theories about the development of bullying in closed custody institutions imply a relationship between the experience of victimization and the use of bullying. In our study, we test this linkage using longitudinal survey data collected at two time points from 473 inmates (aged 15-24) in three YOIs in Germany. We first analyze the extent of bullying and victimization, and then use a longitudinal structural equation model to predict inmate bullying behavior at time 2 based on victimization that occurred at time 1. Age is used as a predictor variable to account for differences in the amount of victimization and bullying. Results suggest that bullying and victimization are high in the YOIs studied. Most inmates reported being a bully and a victim at the same time. Younger inmates use more direct physical bullying but not psychological bullying. An increase in psychological bullying over time can be significantly explained by victimization at an earlier measurement time point. Our study therefore supports recent theoretical assumptions about the development of bullying behavior. Possible implications for prevention and intervention are discussed. © 2014 Wiley Periodicals, Inc.

  16. Gender-role attitudes and behavior across the transition to parenthood.

    PubMed

    Katz-Wise, Sabra L; Priess, Heather A; Hyde, Janet S

    2010-01-01

    On the basis of social structural theory and identity theory, the current study examined changes in gender-role attitudes and behavior across the first-time transition to parenthood and following the birth of a second child for experienced mothers and fathers. Data were analyzed from the ongoing longitudinal Wisconsin Study of Families and Work. Gender-role attitudes, work and family identity salience, and division of household labor were measured for 205 first-time and 198 experienced mothers and fathers across 4 time points from 5 months pregnant to 12 months postpartum. Multilevel latent growth curve analysis was used to analyze the data. In general, parents became more traditional in their gender-role attitudes and behavior following the birth of a child, women changed more than men, and first-time parents changed more than experienced parents. Findings suggest that changes in gender-role attitudes and behavior following the birth of a child may be attributed to both the process of transitioning to parenthood for the first time and that of negotiating the demands of having a new baby in the family. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  17. Application of homomorphic signal processing to stress wave factor analysis

    NASA Technical Reports Server (NTRS)

    Karagulle, H.; Williams, J. H., Jr.; Lee, S. S.

    1985-01-01

    The stress wave factor (SWF) signal, which is the output of an ultrasonic testing system where the transmitting and receiving transducers are coupled to the same face of the test structure, is analyzed in the frequency domain. The SWF signal generated in an isotropic elastic plate is modelled as the superposition of successive reflections. The reflection generated by the stress waves which travel p times as a longitudinal (P) wave and s times as a shear (S) wave through the plate, while reflecting back and forth between the bottom and top faces of the plate, is designated as the reflection with p, s. Short-time portions of the SWF signal are considered for obtaining spectral information on individual reflections. If the significant reflections are not overlapped, short-time Fourier analysis is used. A summary of the relevant points of homomorphic signal processing, which is also called cepstrum analysis, is given. Homomorphic signal processing is applied to short-time SWF signals to obtain estimates of the log spectra of individual reflections for cases in which the reflections are overlapped. Two typical SWF signals generated in aluminum plates (overlapping and non-overlapping reflections) are analyzed.

  18. Software Comparison for Renewable Energy Deployment in a Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source software packages, have the capability to simulate networks with fluctuating data values. These packages allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial software package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with the software packages.

  19. Prevalence of sleep deficiency in early gestation and its associations with stress and depressive symptoms.

    PubMed

    Okun, Michele L; Kline, Christopher E; Roberts, James M; Wettlaufer, Barbara; Glover, Khaleelah; Hall, Martica

    2013-12-01

    Sleep deficiency is an emerging concept denoting a deficit in the quantity or quality of sleep. This may be particularly salient for pregnant women since they report considerable sleep complaints. Sleep deficiency is linked with morbidity, including degradations in psychosocial functioning (e.g., depression and stress), which are recognized risk factors for adverse pregnancy outcomes. We sought to describe the frequency of sleep deficiency across early gestation (10-20 weeks) and whether sleep deficiency is associated with reports of more depressive symptoms and stress. Pregnant women (N=160) with no self-reported sleep or psychological disorder provided sleep data collected via diary and actigraphy during early pregnancy: 10-12, 14-16, and 18-20 weeks' gestation. Sleep deficiency was defined as short sleep duration, insufficient sleep, or insomnia. Symptoms of depression and stress were collected at the same three time points. Linear mixed effects models were used to analyze the data. Approximately 28%-38% met criteria for sleep deficiency for at least one time point in early gestation. Women who were sleep deficient across all time points reported more perceived stress than those who were not sleep deficient (p<0.01). Depressive symptoms were higher among women with diary-defined sleep deficiency across all time points (p=0.02). Sleep deficiency is a useful concept to describe sleep recognized to be disturbed in pregnancy. Women with persistent sleep deficiency appear to be at greater risk for impairments in psychosocial functioning during early gestation. These associations are important since psychosocial functioning is a recognized correlate of adverse pregnancy outcomes. Sleep deficiency may be another important risk factor for adverse pregnancy outcomes.

  20. Real Time Correction of Aircraft Flight Configuration

    NASA Technical Reports Server (NTRS)

    Schipper, John F. (Inventor)

    2009-01-01

    Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter, having a value FP(t=t_p) at a time t=t_p, is likely to be able to recover to a reference flight parameter value FP(t';ref), lying in a band of reference flight parameter values FP(t';ref;CB), within a time interval given by t_p ≤ t' ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
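    The recovery-band test described in this abstract amounts to a simple predicate: a parameter value at time t is deemed recoverable when it lies between the two boundary traces. The boundary functions and numbers in the sketch below are hypothetical, chosen only to illustrate the check, and are not taken from the patent.

```python
# Hypothetical sketch of the recovery-band predicate: a flight parameter value
# at time t is considered recoverable when it lies between the first and
# second recovery-band boundary traces. Boundary shapes and units are invented.

def within_recovery_band(fp_value, t, lower_fn, upper_fn):
    """True if fp_value lies between the two boundary traces at time t."""
    return lower_fn(t) <= fp_value <= upper_fn(t)

# Illustrative time-dependent boundary traces (arbitrary units).
lower = lambda t: 100.0 - 2.0 * t   # first recovery band boundary
upper = lambda t: 140.0 + 1.5 * t   # second recovery band boundary

print(within_recovery_band(118.0, 4.0, lower, upper))  # True: inside the band
print(within_recovery_band(80.0, 4.0, lower, upper))   # False: below the band
```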

  1. Integrin-linked kinase (ILK) modulates wound healing through regulation of hepatocyte growth factor (HGF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serrano, Isabel; Diez-Marques, Maria L.; Rodriguez-Puyol, Manuel

    2012-11-15

    Integrin-linked kinase (ILK) is an intracellular effector of cell-matrix interactions and regulates many cellular processes, including growth, proliferation, survival, differentiation, migration, invasion and angiogenesis. The present work analyzes the role of ILK in wound healing in adult animals using a conditional knock-out of the ILK gene generated with the tamoxifen-inducible Cre-lox system (CRE-LOX mice). Results show that ILK deficiency leads to retarded wound closure in skin. Intracellular mechanisms involved in this process were analyzed in cultured mouse embryonic fibroblasts (MEF) isolated from CRE-LOX mice and revealed that wounding promotes rapid activation of phosphatidylinositol 3-kinase (PI3K) and ILK. Knockdown of ILK resulted in retarded wound closure due to a decrease in cellular proliferation and loss of HGF protein expression during the healing process, in vitro and in vivo. Alterations in cell proliferation and wound closure in ILK-deficient MEF or mice could be rescued by exogenous administration of human HGF. These data demonstrate, for the first time, that the activation of PI3K and ILK after skin wounding is critical for HGF-dependent tissue repair and wound healing. Highlights: ILK deletion results in decreased HGF expression and delayed scratch wound repair. The PI3K/ILK/AKT pathway signals through HGF to regulate wound healing. An ILK-dependent increase in HGF expression is responsible for wound healing in vivo. ILK-KO mice are used to confirm the requirement for ILK function in wound healing. Human HGF treatment restores delayed wound closure in vitro and in vivo.

  2. Research on on-line monitoring technology for steel ball's forming process based on load signal analysis method

    NASA Astrophysics Data System (ADS)

    Li, Ying-jun; Ai, Chang-sheng; Men, Xiu-hua; Zhang, Cheng-liang; Zhang, Qi

    2013-04-01

    This paper presents a novel on-line monitoring technology for assessing forming quality in the steel ball forming process, based on a load signal analysis method, in order to reveal the bottom die's load characteristics in the initial cold heading forging of steel balls. A mechanical model of the cold header production process is established and analyzed using the finite element method, and the maximum cold heading force is calculated. The results prove that monitoring the cold heading process via the upsetting force is reasonable and feasible. Forming defects are reflected in three feature points of the bottom die signal: the initial point, the inflection point, and the peak point. A novel PVDF piezoelectric force sensor, simple in construction and convenient to install, is designed; its sensitivity is calculated and its characteristics are analyzed by FEM. The PVDF piezoelectric force sensor is fabricated to acquire the actual load signals in the cold heading process and is calibrated by a special device. The on-line monitoring measurement system is built. The characteristics of the actual signals recognized by the learning and identification algorithm are consistent with the simulation results. Identification of actual signals shows that the timing difference values of all feature points for qualified products do not exceed ±6 ms, and amplitude difference values are less than ±3%. The calibration and application experiments show that the PVDF force sensor has good static and dynamic performance and is competent for dynamic measurement of the upsetting force, greatly improving the level of automation and machining precision. With the damage identification method, which depends on the grade of steel, the equipment capacity factor has been improved to 90%.
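    The qualification rule quoted above (feature-point timing within ±6 ms and amplitude within ±3% of the reference) can be sketched as a simple pass/fail check. The feature-point values below are hypothetical, invented only to illustrate the rule.

```python
# Sketch of the pass/fail rule implied by the abstract: a forming cycle is
# judged qualified when every feature point of the bottom-die load signal
# stays within ±6 ms in timing and ±3% in amplitude of the reference.
# Feature-point values here are hypothetical.

def is_qualified(feature_points, reference, t_tol_ms=6.0, a_tol=0.03):
    """feature_points/reference: dicts name -> (time_ms, amplitude)."""
    for name, (t, a) in feature_points.items():
        t_ref, a_ref = reference[name]
        if abs(t - t_ref) > t_tol_ms or abs(a - a_ref) / a_ref > a_tol:
            return False
    return True

reference = {"initial": (0.0, 1.0), "inflection": (12.0, 3.5), "peak": (20.0, 9.0)}
good = {"initial": (1.0, 1.01), "inflection": (14.0, 3.45), "peak": (22.0, 9.1)}
bad = {"initial": (1.0, 1.01), "inflection": (25.0, 3.45), "peak": (22.0, 9.1)}

print(is_qualified(good, reference), is_qualified(bad, reference))  # True False
```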

  3. Alcohol consumption patterns among vocational school students in central Thailand.

    PubMed

    Chaveepojnkamjorn, Wisit

    2012-11-01

    The objective of this study was to evaluate alcohol consumption patterns among vocational school students in central Thailand. We conducted a cross-sectional study among 1,803 vocational students (80.4% aged < 17 years) in central Thailand from December 2007 to February 2008, using a self-administered questionnaire consisting of 2 parts: sociodemographic factors and alcohol drinking behavior. Descriptive statistics, a chi-square test and multiple logistic regression were used to analyze the data. The results showed 40.9% of male students and 20.9% of female students drank alcoholic beverages. Multiple logistic regression analysis revealed 2 factors associated with alcohol consumption among male subjects: field of study (OR 1.5, 95% CI 1.1-2.0) and GPA (OR < 2 = 1.8; 95% CI 1.2-2.7; OR > 3 = 0.6; 95% CI 0.4-0.9). The three most popular venues for drinking were at parties (43.1%), at home/in the dormitory (34.9%) and in bars or saloons near the school (20.9%). Fifty-three point two percent of males drank alcohol 1-2 times per month and 47% drank > 2 times per month. Nearly 78% of female students drank alcohol 1-2 times per month and 22% drank alcohol > 2 times per month. Forty point nine percent of male students consumed 1-2 drinks per time and 36% consumed more than 4 drinks per time. Fifty point four percent of females drank 2 drinks per month. One-third of male students said they engaged in binge drinking in a 2-week period and 14% of girls said they binge drank in a 2-week period. Alcohol consumption is a significant problem among Thai vocational school students. Measures for managing this problem are discussed.

  4. Time-Resolved Transposon Insertion Sequencing Reveals Genome-Wide Fitness Dynamics during Infection.

    PubMed

    Yang, Guanhua; Billings, Gabriel; Hubbard, Troy P; Park, Joseph S; Yin Leung, Ka; Liu, Qin; Davis, Brigid M; Zhang, Yuanxing; Wang, Qiyao; Waldor, Matthew K

    2017-10-03

    Transposon insertion sequencing (TIS) is a powerful high-throughput genetic technique that is transforming functional genomics in prokaryotes, because it enables genome-wide mapping of the determinants of fitness. However, current approaches for analyzing TIS data assume that selective pressures are constant over time and thus do not yield information regarding changes in the genetic requirements for growth in dynamic environments (e.g., during infection). Here, we describe structured analysis of TIS data collected as a time series, termed pattern analysis of conditional essentiality (PACE). From a temporal series of TIS data, PACE derives a quantitative assessment of each mutant's fitness over the course of an experiment and identifies mutants with related fitness profiles. In so doing, PACE circumvents major limitations of existing methodologies, specifically the need for artificial effect size thresholds and enumeration of bacterial population expansion. We used PACE to analyze TIS samples of Edwardsiella piscicida (a fish pathogen) collected over a 2-week infection period from a natural host (the flatfish turbot). PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a cutoff at a terminal sampling point, and it identified subpopulations of mutants with distinct fitness profiles, one of which informed the design of new live vaccine candidates. Overall, PACE enables efficient mining of time series TIS data and enhances the power and sensitivity of TIS-based analyses. IMPORTANCE Transposon insertion sequencing (TIS) enables genome-wide mapping of the genetic determinants of fitness, typically based on observations at a single sampling point. Here, we move beyond analysis of endpoint TIS data to create a framework for analysis of time series TIS data, termed pattern analysis of conditional essentiality (PACE). We applied PACE to identify genes that contribute to colonization of a natural host by the fish pathogen Edwardsiella piscicida. PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a terminal sampling point, and its clustering of mutants with related fitness profiles informed design of new live vaccine candidates. PACE yields insights into patterns of fitness dynamics and circumvents major limitations of existing methodologies. Finally, the PACE method should be applicable to additional "omic" time series data, including screens based on clustered regularly interspaced short palindromic repeats with Cas9 (CRISPR/Cas9). Copyright © 2017 Yang et al.
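    A minimal sketch of the core idea behind time-series TIS analysis, not the authors' implementation: turn per-mutant insertion counts taken at several sampling times into log2 relative-abundance trajectories, then group mutants whose profiles correlate. All counts below are invented for illustration.

```python
import math

def fitness_profile(counts, totals):
    """log2 relative-abundance trajectory of one mutant, normalized to t0."""
    base = counts[0] / totals[0]
    return [math.log2((c / n) / base) for c, n in zip(counts, totals)]

def pearson(x, y):
    """Pearson correlation between two fitness trajectories."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented read counts for three mutants at four sampling times.
totals = [10000] * 4
mutants = {
    "mutA": [100, 50, 25, 12],    # steadily depleted: fitness defect in vivo
    "mutB": [100, 95, 105, 100],  # flat: neutral
    "mutC": [100, 45, 30, 10],    # depleted, profile similar to mutA
}
profiles = {m: fitness_profile(c, totals) for m, c in mutants.items()}

# mutA and mutC share a depletion profile and would cluster together.
print(pearson(profiles["mutA"], profiles["mutC"]) > 0.9)  # True
```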

  5. Two Point Space-Time Correlation of Density Fluctuations Measured in High Velocity Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2006-01-01

    Two-point space-time correlations of air density fluctuations in unheated, fully-expanded free jets at Mach numbers M(sub j) = 0.95, 1.4, and 1.8 were measured using a Rayleigh scattering based diagnostic technique. The molecular scattered light from two small probe volumes of 1.03 mm length was measured for a completely non-intrusive means of determining the turbulent density fluctuations. The time series of density fluctuations were analyzed to estimate the integral length scale L in a moving frame of reference and the convective Mach number M(sub c) at different narrow Strouhal frequency (St) bands. It was observed that M(sub c) and the normalized moving frame length scale L*St/D, where D is the jet diameter, increased with Strouhal frequency before leveling off at the highest resolved frequency. Significant differences were observed between data obtained from the lip shear layer and the centerline of the jet. The wave number frequency transform of the correlation data demonstrated progressive increase in the radiative part of turbulence fluctuations with increasing jet Mach number.
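    The convective Mach number estimate described above rests on a standard signal-processing idea: a disturbance passing two probes separated by dx appears in the second signal after a delay, and the lag that maximizes the two-point cross-correlation gives the convection speed. A synthetic sketch with invented numbers, not the measured jet data:

```python
import math

dt = 1e-5        # sampling interval, s (assumed)
dx = 0.005       # probe separation, m (assumed)
n = 2000
true_delay = 20  # imposed convection delay, in samples

# Probe 1 sees a Gaussian-windowed wave packet; probe 2 sees it delayed.
sig1 = [math.sin(2 * math.pi * 500 * i * dt) * math.exp(-((i - 500) / 100.0) ** 2)
        for i in range(n)]
sig2 = [sig1[i - true_delay] if i >= true_delay else 0.0 for i in range(n)]

def best_lag(a, b, max_lag):
    """Lag (in samples) that maximizes the two-point cross-correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(max_lag + 1), key=corr)

lag = best_lag(sig1, sig2, 50)
u_c = dx / (lag * dt)  # convection velocity estimate, m/s
print(lag)  # 20: the imposed delay is recovered
```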

  6. Micro-vibration detection with heterodyne holography based on time-averaged method

    NASA Astrophysics Data System (ADS)

    Qin, XiaoDong; Pan, Feng; Chen, ZongHui; Hou, XueQin; Xiao, Wen

    2017-02-01

    We propose a micro-vibration detection method by introducing heterodyne interferometry to time-averaged holography. This method compensates for the deficiency of time-average holography in quantitative measurements and widens its range of application effectively. Acousto-optic modulators are used to modulate the frequencies of the reference beam and the object beam. Accurate detection of the maximum amplitude of each point in the vibration plane is performed by altering the frequency difference of both beams. The range of amplitude detection of plane vibration is extended. In the stable vibration mode, the distribution of the maximum amplitude of each point is measured and the fitted curves are plotted. Hence the plane vibration mode of the object is demonstrated intuitively and detected quantitatively. We analyzed the method in theory and built an experimental system with a sine signal as the excitation source and a typical piezoelectric ceramic plate as the target. The experimental results indicate that, within a certain error range, the detected vibration mode agrees with the intrinsic vibration characteristics of the object, thus proving the validity of this method.

  7. Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application

    DOEpatents

    Davidson, J.B.; Case, A.L.

    1981-12-30

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.

  8. Using phone sensors and an artificial neural network to detect gait changes during drinking episodes in the natural environment.

    PubMed

    Suffoletto, Brian; Gharani, Pedram; Chung, Tammy; Karimi, Hassan

    2018-02-01

    Phone sensors could be useful in assessing changes in gait that occur with alcohol consumption. This study determined (1) feasibility of collecting gait-related data during drinking occasions in the natural environment, and (2) how gait-related features measured by phone sensors relate to estimated blood alcohol concentration (eBAC). Ten young adult heavy drinkers were prompted to complete a 5-step gait task every hour from 8pm to 12am over four consecutive weekends. We collected 3-axis accelerometer, gyroscope, and magnetometer data from phone sensors, and computed 24 gait-related features using a sliding window technique. eBAC levels were calculated at each time point based on Ecological Momentary Assessment (EMA) of alcohol use. We used an artificial neural network model to analyze associations between sensor features and eBACs in training (70% of the data) and validation and test (30% of the data) datasets. We analyzed 128 data points where both eBAC and gait-related sensor data were captured, either when not drinking (n=60), while eBAC was ascending (n=55) or eBAC was descending (n=13). 21 data points were captured at times when the eBAC was greater than the legal limit (0.08mg/dl). Using a Bayesian regularized neural network, gait-related phone sensor features showed a high correlation with eBAC (Pearson's r>0.9), and >95% of estimated eBAC would fall between -0.012 and +0.012 of actual eBAC. It is feasible to collect gait-related data from smartphone sensors during drinking occasions in the natural environment. Sensor-based features can be used to infer gait changes associated with elevated blood alcohol content. Copyright © 2017 Elsevier B.V. All rights reserved.
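    The sliding-window feature computation described above can be sketched as follows. The window size and the two statistics used here are illustrative stand-ins, not the study's exact 24-feature set.

```python
import math

def magnitude(sample):
    """Euclidean norm of one 3-axis accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def sliding_features(samples, window=50, step=25):
    """Per-window mean and standard deviation of acceleration magnitude."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        mags = [magnitude(s) for s in samples[start:start + window]]
        mean = sum(mags) / window
        std = math.sqrt(sum((m - mean) ** 2 for m in mags) / window)
        feats.append((mean, std))
    return feats

# Synthetic trace: gravity on the z-axis plus a small wobble on x.
trace = [(0.1 * math.sin(i / 5.0), 0.0, 9.8) for i in range(200)]
features = sliding_features(trace)
print(len(features))  # 7 overlapping windows over 200 samples
```

    Feature vectors like these would then be fed to the neural network alongside the EMA-derived eBAC labels.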

  9. The inadequacy of Individual Educational Program (IEP) goals for high school students with word-level reading difficulties.

    PubMed

    Catone, William V; Brady, Susan A

    2005-06-01

    This investigation analyzed goals from the Individual Educational Programs (IEPs) of 54 high school students with diagnosed reading disabilities in basic skills (decoding and/or word identification). Results showed that for 73% of the students, the IEPs written when they were in high school failed to specify any objectives regarding their acute difficulties with basic skills. IEPs from earlier points in the students' educations were also reviewed, as available. For 23 of the students, IEPs were present in the students' files for three time points: elementary school (ES), middle school (MS), and high school (HS). Another 20 students from the sample of 54 had IEPs available for two time points (HS and either MS or ES). Comparisons with the IEPs from younger years showed a pattern of decline from ES to MS to HS in the percentage of IEPs that commented on or set goals pertaining to weaknesses in decoding. These findings suggest that basic skills deficits that persist into the upper grade levels are not being sufficiently targeted for remediation, and help explain why older students frequently fail to resolve their reading problems.

  10. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    PubMed Central

    Churkin, Alexander; Barash, Danny

    2008-01-01

    Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. 
A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
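    The combinatorial blow-up motivating this work can be made concrete: a sequence of length n has C(n, m) * 3^m distinct m-point mutants (choose m positions, with 3 alternative bases at each), so exhaustive folding of every mutant quickly becomes impractical and a suboptimal-folding filter pays off.

```python
# Count the distinct m-point mutants of a length-n RNA sequence:
# choose m positions, then 3 alternative bases at each chosen position.
from math import comb

def n_mutants(n, m):
    """Number of distinct m-point mutants of a length-n sequence."""
    return comb(n, m) * 3 ** m

print(n_mutants(100, 1))  # 300 single-point mutants
print(n_mutants(100, 3))  # 4365900 triple mutants (~4.4 million)
```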

  11. Early mortality in multiple myeloma: the time-dependent impact of comorbidity: A population-based study in 621 real-life patients.

    PubMed

    Ríos-Tamayo, Rafael; Sáinz, Juan; Martínez-López, Joaquín; Puerta, José Manuel; Chang, Daysi-Yoe-Ling; Rodríguez, Teresa; Garrido, Pilar; de Veas, José Luís García; Romero, Antonio; Moratalla, Lucía; López-Fernández, Elisa; González, Pedro Antonio; Sánchez, María José; Jiménez-Moleón, José Juan; Jurado, Manuel; Lahuerta, Juan José

    2016-07-01

    Multiple myeloma is a heterogeneous disease with variable survival; this variability cannot be fully explained by the current systems of risk stratification. Early mortality remains a serious obstacle to further improve the trend toward increased survival demonstrated in recent years. However, the definition of early mortality is not standardized yet. Importantly, no study has focused on the impact of comorbidity on early mortality in multiple myeloma to date. Therefore, we analyzed the role of baseline comorbidity in a large population-based cohort of 621 real-life myeloma patients over a 31-year period. To evaluate early mortality, a sequential multivariate regression model at 2, 6, and 12 months from diagnosis was performed. It was demonstrated that comorbidity had an independent impact on early mortality, which is differential and time-dependent. Besides renal failure, respiratory disease at 2 months, liver disease at 6 months, and hepatitis virus C infection at 12 months, were, respectively, associated with early mortality, adjusting for other well-established prognostic factors. On the other hand, the long-term monitoring in our study points out a modest downward trend in early mortality over time. This is the first single institution population-based study aiming to assess the impact of comorbidity on early mortality in multiple myeloma. It is suggested that early mortality should be analyzed at three key time points (2, 6, and 12 months), in order to allow comparisons between studies. Comorbidity plays a critical role in the outcome of myeloma patients in terms of early mortality. Am. J. Hematol. 91:700-704, 2016. © 2016 Wiley Periodicals, Inc.

  12. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and on the execution of a program, providing the ability to find errors that cannot be detected through static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on its syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding a program's dynamic behavior from different perspectives through analysis of the input/output data.
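
    As a concrete illustration of the static-analysis measurements mentioned above, the decision-point count underlying cyclomatic complexity can be sketched in a few lines. This is a simplified sketch using Python's `ast` module (counting only `if`, `for`, `while`, and exception handlers), not the algorithm of any particular analyzer:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity:
    1 plus the number of decision points in the parsed source."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        pass
    return "other"
"""
```

    A straight-line function scores 1; each branch or loop adds one to the count.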

  13. Analyzing crack development pattern of masonry structure in seismic oscillation by digital photography

    NASA Astrophysics Data System (ADS)

    Zhang, Guojian; Yu, Chengxin; Ding, Xinhua

    2018-01-01

    In this study, digital photography is used to monitor the instantaneous deformation of a masonry wall in seismic oscillation. To obtain higher measurement accuracy, the image matching-time baseline parallax method (IM-TBPM) is used to correct errors caused by changes in the intrinsic and extrinsic parameters of digital cameras. Results show that the average errors of control point C5 are 0.79 mm, 0.44 mm, and 0.96 mm in the X, Z, and comprehensive directions, respectively; those of control point C6 are 0.49 mm, 0.44 mm, and 0.71 mm, respectively. These results suggest that IM-TBPM can meet the accuracy requirements of instantaneous deformation monitoring. In seismic oscillation, the middle to lower part of the masonry wall develops cracks first; shear failure then occurs in the middle of the wall. This study provides a technical basis for analyzing the crack development pattern of masonry structures in seismic oscillation and has significant implications for improved construction of masonry structures in earthquake-prone areas.

  14. Transcranial photoacoustic tomography of the monkey brain

    NASA Astrophysics Data System (ADS)

    Nie, Liming; Huang, Chao; Guo, Zijian; Anastasio, Mark; Wang, Lihong V.

    2012-02-01

    A photoacoustic tomography (PAT) system using a virtual-point ultrasonic transducer was developed for transcranial imaging of monkey brains. The virtual-point transducer provided a 10 times greater field of view (FOV) than finite-aperture unfocused transducers, which enables large-primate imaging. The cerebral cortex of a monkey brain was accurately mapped transcranially, through up to two skulls ranging from 4 to 8 mm in thickness. The mass density and speed-of-sound distributions of the skull were estimated from adjunct X-ray CT image data and used with a time-reversal algorithm to mitigate artifacts in the reconstructed image due to acoustic aberration. The oxygen saturation (sO2) in blood phantoms through a monkey skull was also imaged and quantified, with results consistent with measurements by a gas analyzer. Our experimental results demonstrate that PAT can overcome the optical and ultrasound attenuation of a relatively thick skull, and that the imaging aberration caused by the skull can be corrected to a great extent.

  15. Printing line/space patterns on nonplanar substrates using a digital micromirror device-based point-array scanning technique

    NASA Astrophysics Data System (ADS)

    Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin

    2018-02-01

    This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.
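
    The optimization step described above can be illustrated with a minimal particle swarm searching for a spot-overlap rate. The two quadratic objective surrogates below (standing in for "minimize linewidth" and "maximize image log slope") and all parameter values are hypothetical placeholders, scalarized into a single cost; this is a sketch of the generic PSO technique, not the paper's multiobjective formulation or aerial-image model:

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(overlap):
    """Illustrative scalarized cost: placeholder quadratics standing in for
    linewidth (to minimize) and image log slope (to maximize)."""
    linewidth = (overlap - 0.6) ** 2 + 1.0   # hypothetical: best near 60% overlap
    ils = 2.0 - (overlap - 0.7) ** 2         # hypothetical ILS surrogate
    return linewidth - 0.5 * ils

# Minimal particle swarm over the spot-overlap rate in [0, 1]
n, w, c1, c2 = 20, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, n)                     # particle positions
v = np.zeros(n)                              # particle velocities
pbest = x.copy()                             # per-particle best positions
pbest_f = np.array([objective(p) for p in x])
for _ in range(100):
    g = pbest[np.argmin(pbest_f)]            # swarm-wide best position
    v = (w * v
         + c1 * rng.uniform(size=n) * (pbest - x)
         + c2 * rng.uniform(size=n) * (g - x))
    x = np.clip(x + v, 0.0, 1.0)
    f = np.array([objective(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
best_overlap = pbest[np.argmin(pbest_f)]
```

    For this toy cost the analytic minimum sits at overlap = 1.9/3 ≈ 0.633, which the swarm recovers.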

  16. All-dielectric ultrathin conformal metasurfaces: lensing and cloaking applications at 532 nm wavelength

    NASA Astrophysics Data System (ADS)

    Cheng, Jierong; Jafar-Zanjani, Samad; Mosallaei, Hossein

    2016-12-01

    Metasurfaces are ideal candidates for conformal wave manipulation on curved objects due to their low profiles and rich functionalities. Here we design and analyze conformal metasurfaces for practical optical applications in the 532 nm visible band for the first time. The inclusions are silicon disk nanoantennas embedded in a flexible supporting layer of polydimethylsiloxane (PDMS). They behave as local phase controllers of subwavelength dimensions, successfully modifying the electromagnetic response point by point, with the merits of high efficiency in the visible regime, ultrathin films, and good tolerance to the incidence angle and to the grid stretching imposed by the curved substrate. An efficient modeling technique based on the field equivalence principle is systematically proposed for characterizing metasurfaces with huge arrays of nanoantennas oriented in a conformal manner. Utilizing the robust nanoantenna inclusions and benefiting from this powerful analysis tool, we demonstrate the superior performance of conformal metasurfaces in two specific applications, one for lensing with compensation of spherical aberration and the other for carpet cloaking, both at 532 nm in the visible spectrum.

  17. Quantum no-singularity theorem from geometric flows

    NASA Astrophysics Data System (ADS)

    Alsaleh, Salwa; Alasfar, Lina; Faizal, Mir; Ali, Ahmed Farag

    2018-04-01

    In this paper, we analyze the classical geometric flow as a dynamical system. We obtain an action for this system such that its equation of motion is the Raychaudhuri equation. This action is then used to quantize the system. As the Raychaudhuri equation is the basis for deriving the singularity theorems, we are able to understand the effects that such a quantization will have on the classical singularity theorems. Thus, by quantizing the geometric flow, we can demonstrate that a quantum space-time is complete (nonsingular). This is because the existence of a conjugate point is a necessary condition for the occurrence of singularities, and we are able to demonstrate that such conjugate points cannot occur due to these quantum effects.
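
    For reference, the Raychaudhuri equation on which the singularity theorems rest takes, for a hypersurface-orthogonal timelike geodesic congruence with four-velocity u^a, the standard form (in the usual notation, with expansion θ, shear σ_ab, vorticity ω_ab, and proper time τ):

```latex
\frac{d\theta}{d\tau} \;=\; -\frac{1}{3}\,\theta^{2}
  \;-\; \sigma_{ab}\sigma^{ab}
  \;+\; \omega_{ab}\omega^{ab}
  \;-\; R_{ab}\,u^{a}u^{b}
```

    With vanishing vorticity and the strong energy condition, the right-hand side is non-positive, which drives θ to negative infinity in finite proper time and produces the conjugate points the abstract refers to.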

  18. Dynamic route and departure time choice model based on self-adaptive reference point and reinforcement learning

    NASA Astrophysics Data System (ADS)

    Li, Xue-yan; Li, Xue-mei; Yang, Lingrun; Li, Jing

    2018-07-01

    Most previous studies on dynamic traffic assignment are based on a traditional analytical framework; for instance, the idea of Dynamic User Equilibrium has been widely used to depict both route choice and departure time choice. However, some recent studies have demonstrated that dynamic traffic flow assignment largely depends on travelers' degree of rationality, travelers' heterogeneity, and the traffic information travelers have. In this paper, we develop a new self-adaptive multi-agent model to depict travelers' behavior in dynamic traffic assignment. We use Cumulative Prospect Theory with heterogeneous reference points to capture travelers' bounded rationality, and a reinforcement-learning model to depict travelers' route and departure time choices under imperfect information. We design the evolution rule for travelers' expected arrival time and the traffic flow assignment algorithm. Compared with the traditional model, the self-adaptive multi-agent model proposed in this paper can effectively help travelers avoid the rush hour. Finally, we report and analyze the effect of travelers' group behavior on the transportation system, and give some insights into the relation between travelers' group behavior and the performance of the transportation system.
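
    The bounded-rationality ingredient can be illustrated with the standard prospect-theory value function around a reference point, the kind of valuation such agents apply to travel-time outcomes. This is a generic sketch: the parameter defaults are the conventional Tversky-Kahneman estimates, not values from this paper:

```python
def cpt_value(outcome: float, reference: float,
              alpha: float = 0.88, beta: float = 0.88,
              lam: float = 2.25) -> float:
    """Prospect-theory value of an outcome relative to a reference point.

    Gains are valued concavely (exponent alpha), losses convexly
    (exponent beta) and scaled by the loss-aversion coefficient lam,
    so losses loom larger than equal-sized gains.
    """
    x = outcome - reference
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

    An agent at its reference point values the outcome at zero, and a loss of 5 outweighs a gain of 5, which is what makes heterogeneous reference points matter for route and departure-time choice.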

  19. Mobile Device Trends in Orthopedic Surgery: Rapid Change and Future Implications.

    PubMed

    Andrawis, John P; Muzykewicz, David A; Franko, Orrin I

    2016-01-01

    Mobile devices are increasingly becoming integral communication and clinical tools. Monitoring the prevalence and utilization characteristics of surgeons and trainees is critical to understanding how these new technologies can be best used in practice. The authors conducted a prospective Internet-based survey over 7 time points from August 2010 to August 2014 at all nationwide American Council for Graduate Medical Education-accredited orthopedic programs. The survey questionnaire was designed to evaluate the use of devices and mobile applications (apps) among trainees and physicians in the clinical setting. Results were analyzed and summarized for orthopedic surgeons and trainees. During the 48-month period, the 7 time points yielded 467, 622, 329, 223, 237, 111, and 134 responses, respectively. Mobile device use in the clinical setting increased across all fields and levels of training during the study period. Orthopedic trainees increased their use of smartphone apps in the clinical setting from 60% to 84%, whereas attending use increased from 41% to 61%. During this time frame, use of the Apple/Android platforms increased from 45%/13% to 85%/15%, respectively. At all time points, 70% of orthopedic surgeons believed their institution/hospital should support mobile device use. As measured over a 48-month period, mobile devices have become a ubiquitous tool in the clinical setting among orthopedic surgeons and trainees. The authors expect these trends to continue and encourage providers and trainees to be aware of the limitations and risks inherent with new technology. Copyright 2016, SLACK Incorporated.

  20. New well testing applications of the pressure derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onur, M.

    1989-01-01

    This work presents new derivative type curves based on a new derivative group, equal to the dimensionless pressure group divided by its logarithmic derivative with respect to the dimensionless time group. One major advantage of these type curves is that the type-curve match of field pressure/pressure-derivative data with the new derivative type curves is accomplished by moving the field data plot in only the horizontal direction. This type-curve match fixes the time match-point values. The pressure change versus time data are then matched with the dimensionless pressure solution to determine the pressure match-point values. Well/reservoir parameters can then be estimated in the standard way. This two-step type-curve matching procedure increases the likelihood of obtaining a unique match. Moreover, the unique correspondence between the ordinate of the field data plot and the new derivative type curves should prove useful in determining whether given field data actually represent the well/reservoir model assumed by a selected type-curve solution. It is also shown that the basic idea used in constructing the type curves can be used to ensure that proper semilog straight lines are chosen when analyzing pressure data by semilog methods. Analysis of both drawdown and buildup data is considered, and actual field cases are analyzed using the new derivative type curves and the semilog identification method. This work also presents new methods based on the pressure derivative to analyze buildup data obtained at a well (fractured or unfractured) produced to pseudosteady state prior to shut-in. By using a method of analysis based on the pressure derivative, it is shown that a well's drainage area at the instant of shut-in and the flow capacity can be computed directly from buildup data, even in cases where conventional semilog straight lines are not well defined.
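
    The logarithmic pressure derivative at the heart of such type curves, and a group of the form "pressure divided by its log-time derivative", can be sketched numerically. The synthetic data below are an idealized semilog (infinite-acting radial flow) response, chosen purely as a sanity check: since Δp grows linearly in ln t, the derivative flattens to the semilog slope m. This is a generic illustration, not the paper's dimensionless formulation:

```python
import numpy as np

def log_derivative(t, dp):
    """Derivative of pressure change with respect to ln(t), i.e. t * d(dp)/dt."""
    return np.gradient(dp, np.log(t))

t = np.logspace(0, 3, 200)          # elapsed time (arbitrary units)
m, b = 10.0, 5.0                    # hypothetical semilog slope and intercept
dp = m * np.log(t) + b              # idealized radial-flow pressure change
deriv = log_derivative(t, dp)       # flattens to the slope m for this flow regime
group = dp / deriv                  # pressure divided by its log-time derivative
```

    On a log-log diagnostic plot, the flat derivative identifies the proper semilog straight line, which is the identification use mentioned in the abstract.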

  1. A field-deployable mobile molecular diagnostic system for malaria at the point of need.

    PubMed

    Choi, Gihoon; Song, Daniel; Shrestha, Sony; Miao, Jun; Cui, Liwang; Guan, Weihua

    2016-11-01

    In response to the urgent need for field-deployable and highly sensitive malaria diagnosis, we developed a standalone, "sample-in-answer-out" molecular diagnostic system (AnyMDx) to enable quantitative molecular analysis of blood-borne malaria in low-resource areas. The system consists of a durable battery-powered analyzer and a disposable microfluidic compact disc loaded with reagents ready for use. A low-power thermal module and a novel fluorescence-sensing module are integrated into the analyzer for real-time monitoring of loop-mediated isothermal amplification (LAMP) of target parasite DNA. With 10 μL of raw blood sample, the AnyMDx system automates nucleic acid sample preparation and the subsequent LAMP and real-time detection. Under laboratory conditions with whole-blood samples spiked with cultured Plasmodium falciparum, we achieved a detection limit of ∼0.6 parasite per μL, much lower than those of conventional microscopy and rapid diagnostic tests (∼50-100 parasites per μL). The turnaround time from sample to answer is less than 40 minutes. The AnyMDx is user-friendly, requiring minimal technological training. The analyzer and the disposable reagent compact discs are cost-effective, making AnyMDx a potential tool for malaria molecular diagnosis in field settings for malaria elimination.

  2. Analyzing the dynamics of cell cycle processes from fixed samples through ergodic principles

    PubMed Central

    Wheeler, Richard John

    2015-01-01

    Tools to analyze cyclical cellular processes, particularly the cell cycle, are of broad value for cell biology. Cell cycle synchronization and live-cell time-lapse observation are widely used to analyze these processes but are not available for many systems. Simple mathematical methods built on the ergodic principle are a well-established, widely applicable, and powerful alternative analysis approach, although they are less widely used. These methods extract data about the dynamics of a cyclical process from a single time-point “snapshot” of a population of cells progressing through the cycle asynchronously. Here, I demonstrate application of these simple mathematical methods to analysis of basic cyclical processes—cycles including a division event, cell populations undergoing unicellular aging, and cell cycles with multiple fission (schizogony)—as well as recent advances that allow detailed mapping of the cell cycle from continuously changing properties of the cell such as size and DNA content. This includes examples using existing data from mammalian, yeast, and unicellular eukaryotic parasite cell biology. Through the ongoing advances in high-throughput cell analysis by light microscopy, electron microscopy, and flow cytometry, these mathematical methods are becoming ever more important and are a powerful complementary method to traditional synchronization and time-lapse cell cycle analysis methods. PMID:26543196
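
    The core ergodic calculation can be sketched as follows. For an ideal asynchronous, exponentially growing population with doubling time T, younger cells are over-represented, and the cumulative age distribution is F(a) = 2(1 - 2^(-a/T)); inverting it converts a snapshot fraction into time through the cycle. This is a minimal sketch of that textbook relation, not code from the paper:

```python
import math

def fraction_from_age(a: float, doubling_time: float) -> float:
    """Cumulative fraction of cells younger than age a in an asynchronous
    exponentially growing population: F(a) = 2*(1 - 2**(-a/T))."""
    return 2.0 * (1.0 - 2.0 ** (-a / doubling_time))

def age_from_fraction(f: float, doubling_time: float) -> float:
    """Invert F(a): map the snapshot fraction f of cells that have not yet
    reached a cycle stage to the time a at which that stage occurs."""
    return -doubling_time * math.log2(1.0 - f / 2.0)
```

    A single fixed-sample snapshot thus yields the timing of a stage: if half the population has not yet passed it, the stage sits at a = -T·log2(0.75) ≈ 0.415·T, not at T/2, because of the age bias.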

  3. Continuous monitoring of enzymatic activity within native electrophoresis gels: Application to mitochondrial oxidative phosphorylation complexes

    PubMed Central

    Covian, Raul; Chess, David; Balaban, Robert S.

    2012-01-01

    Native gel electrophoresis allows the separation of very small amounts of protein complexes while retaining aspects of their activity. In-gel enzymatic assays are usually performed by using reaction-dependent deposition of chromophores or light scattering precipitates quantified at fixed time points after gel removal and fixation, limiting the ability to analyze enzyme reaction kinetics. Herein, we describe a custom reaction chamber with reaction media recirculation and filtering and an imaging system that permits the continuous monitoring of in-gel enzymatic activity even in the presence of turbidity. Images were continuously collected using time-lapse high resolution digital imaging, and processing routines were developed to obtain kinetic traces of the in-gel activities and analyze reaction time courses. This system also permitted the evaluation of enzymatic activity topology within the protein bands of the gel. This approach was used to analyze the reaction kinetics of two mitochondrial complexes in native gels. Complex IV kinetics showed a short initial linear phase where catalytic rates could be calculated, whereas Complex V activity revealed a significant lag phase followed by two linear phases. The utility of monitoring the entire kinetic behavior of these reactions in native gels, as well as the general application of this approach, is discussed. PMID:22975200

  4. Continuous monitoring of enzymatic activity within native electrophoresis gels: application to mitochondrial oxidative phosphorylation complexes.

    PubMed

    Covian, Raul; Chess, David; Balaban, Robert S

    2012-12-01

    Native gel electrophoresis allows the separation of very small amounts of protein complexes while retaining aspects of their activity. In-gel enzymatic assays are usually performed by using reaction-dependent deposition of chromophores or light-scattering precipitates quantified at fixed time points after gel removal and fixation, limiting the ability to analyze the enzyme reaction kinetics. Herein, we describe a custom reaction chamber with reaction medium recirculation and filtering and an imaging system that permits the continuous monitoring of in-gel enzymatic activity even in the presence of turbidity. Images were continuously collected using time-lapse high-resolution digital imaging, and processing routines were developed to obtain kinetic traces of the in-gel activities and analyze reaction time courses. This system also permitted the evaluation of enzymatic activity topology within the protein bands of the gel. This approach was used to analyze the reaction kinetics of two mitochondrial complexes in native gels. Complex IV kinetics showed a short initial linear phase in which catalytic rates could be calculated, whereas Complex V activity revealed a significant lag phase followed by two linear phases. The utility of monitoring the entire kinetic behavior of these reactions in native gels, as well as the general application of this approach, is discussed. Published by Elsevier Inc.
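
    Extracting a catalytic rate from such continuously acquired traces amounts to fitting the initial linear phase. A minimal sketch, using hypothetical data and an ordinary least-squares fit over an early window (the window length and the saturating trace below are illustrative assumptions, not the paper's processing routine):

```python
import numpy as np

def initial_rate(t, signal, window=10):
    """Slope of the first `window` points of a kinetic trace
    (ordinary least-squares line fit), as an estimate of the initial rate."""
    slope, _intercept = np.polyfit(t[:window], signal[:window], 1)
    return slope

t = np.linspace(0, 100, 101)        # acquisition times (s)
trace = 0.8 * t - 0.002 * t**2      # hypothetical trace that slowly saturates
```

    For a trace like Complex V's, with a lag phase followed by two linear phases, the same fit would simply be applied to each phase's window separately.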

  5. Cost analysis of hospital material management systems.

    PubMed

    Egbelu, P J; Harmonosky, C M; Ventura, J A; O'Brien, W E; Sommer, H J

    1998-01-01

    Integrated healthcare material management begins with manufactures of medical/surgical supplies, uses distributors and ends at the point of use at hospitals. Recent material management philosophies in the healthcare industry, such as just-in-time and stockless systems, are yet to be fully evaluated. In order to evaluate the cost effectiveness of each type of material management technique, a cost model for hospital materials management has been designed. Several case scenarios are analyzed and results are reported.

  6. Longitudinal development of cortical thickness, folding, and fiber density networks in the first 2 years of life.

    PubMed

    Nie, Jingxin; Li, Gang; Wang, Li; Shi, Feng; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2014-08-01

    Quantitatively characterizing the development of cortical anatomical networks during the early stage of life plays an important role in revealing the relationship between cortical structural connections and high-level functional development. The development of correlation networks of cortical thickness, cortical folding, and fiber density is systematically analyzed in this article to study the relationship between different anatomical properties during the first 2 years of life. Specifically, longitudinal MR images of 73 healthy subjects from birth to 2 years of age are used. For each subject at each time point, its measures of cortical thickness, cortical folding, and fiber density are projected onto its cortical surface, which has been partitioned into 78 cortical regions. Then, the correlation matrices for cortical thickness, cortical folding, and fiber density at each time point are constructed, respectively, by computing the inter-regional Pearson correlation coefficient (for any pair of ROIs) across all 73 subjects. Finally, the presence/absence pattern (i.e., binary pattern) of the connection network is constructed from each inter-regional correlation matrix, and its statistical and anatomical properties are adopted to analyze the longitudinal development of anatomical networks. The results show that the development of the anatomical network is characterized differently by the different anatomical properties (i.e., cortical thickness, cortical folding, or fiber density). Copyright © 2013 Wiley Periodicals, Inc.
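
    The network-construction step described above (inter-regional Pearson correlation across subjects, then thresholding to a binary presence/absence pattern) can be sketched generically. Random placeholder data stand in for the 73-subject regional measurements, and the threshold value is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 73, 78
measure = rng.normal(size=(n_subjects, n_regions))   # e.g. mean thickness per ROI

# Inter-regional Pearson correlation across subjects (regions as variables)
corr = np.corrcoef(measure, rowvar=False)            # shape (78, 78)

# Binary presence/absence network: keep strong correlations, drop self-loops
threshold = 0.3                                      # illustrative cutoff
adjacency = (np.abs(corr) > threshold) & ~np.eye(n_regions, dtype=bool)
```

    Repeating this per time point and per anatomical measure yields the sequence of binary networks whose statistical properties track development.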

  7. Phone traffic as a measurement of agricultural events

    NASA Astrophysics Data System (ADS)

    Martín, Samuel; Borondo, Javier; Morales, Alfredo; Losada, Juan Carlos; Tarquis, Ana M.; Benito, Rosa Maria

    2015-04-01

    Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behaviour of these systems (1). Only recently, however, has the global food system been seen as a complex web of production, processing, storage, and transportation, opening new challenges for its analysis. Agricultural activities in developing countries remain as important today as in the 1950s, implying the mobilization of seasonal workers. The proliferation of mobile phones (MPs) offers an unprecedented tool for mapping human activity. In developed countries, the number of MP subscribers has surpassed the total population, with a penetration rate now reaching 121%, whereas in developing countries it is as high as 90% and continuing to rise (2). As an example, we have analyzed the impact that agricultural activities, such as the growing of groundnut, have on Senegal. To this end we analyzed the Normalized Difference Vegetation Index (NDVI) time series of the whole of Senegal and spotted the regions where groundnut is grown to identify the time period when this crop grows. By analyzing phone calls in each region of the country, we found that a significant fraction of antennas exhibit two well-defined peaks of activity corresponding to the beginning and end of the growing season. Antennas located in regions identified as growing regions present this pattern. However, other antennas, located in non-growing regions such as Dakar, also present the two-peak pattern, pointing out the synchronization between growing regions and key points in cities that emerges from agricultural activity. References: 1. Marta C. González, César A. Hidalgo and Albert-László Barabási (2008) Understanding individual human mobility patterns. Nature 453, 779-782. 2. International Telecommunication Union (2014) World Telecommunication Development Conference (WTDC-2014): Final Report. (ITU, Dubai, United Arab Emirates).

  8. A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.

    2015-03-01

    A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal most similar to the measured signal. The time of the interaction is determined as the relative time between the measured signal and the most similar one in the library. The degree of similarity of measured and model signals is defined as the distance between the points representing the measured and model signals in the multi-dimensional measurement space. The novelty of the method lies also in the proposed way of synchronizing the model signals, enabling direct determination of the difference between the times of flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained by means of the double-strip prototype of the J-PET detector, with the 22Na isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ), respectively, were established.
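
    The library-comparison step can be sketched as a nearest-neighbor search in the sampled-signal space: the reconstructed position is the library entry at minimum Euclidean distance, and the interaction time is the shift that best aligns the two waveforms. The snippet below is a schematic illustration with synthetic Gaussian pulses and made-up positions, not the J-PET implementation:

```python
import numpy as np

def reconstruct(signal, library, positions, dt=0.1):
    """Return (position, time shift) for the library signal closest to
    `signal` in Euclidean distance; the shift is the cross-correlation peak."""
    dists = np.linalg.norm(library - signal, axis=1)   # multi-dim distance
    best = int(np.argmin(dists))                       # most similar model signal
    lag = np.argmax(np.correlate(signal, library[best], mode="full"))
    shift = (lag - (len(signal) - 1)) * dt             # lag relative to zero
    return positions[best], shift

t = np.arange(0, 20, 0.1)                              # sample times (ns)
pulse = lambda t0, a: a * np.exp(-((t - t0) ** 2) / 2.0)  # toy model pulse
positions = np.array([-10.0, 0.0, 10.0])               # scintillation points (cm)
library = np.stack([pulse(8.0, 1.0), pulse(10.0, 1.2), pulse(12.0, 1.5)])
```

    A measured pulse identical to a library entry is assigned that entry's position with zero time shift; in practice the distance is computed over the full sampled waveform, which is what makes the method sensitive to sub-sample timing.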

  9. Surgical Safety Checklist compliance: a job done poorly!

    PubMed

    Sparks, Eric A; Wehbe-Janek, Hania; Johnson, Rebecca L; Smythe, W Roy; Papaconstantinou, Harry T

    2013-11-01

    The Surgical Safety Checklist (SSC) has been introduced as an effective tool for reducing perioperative mortality and complications. Although reported completion rates are high, objective compliance is not well defined. The purpose of this retrospective analysis is to determine SSC compliance as measured by accuracy and completion, and the factors that can affect compliance. In September 2010, our institution implemented an adaptation of the World Health Organization's SSC in an effort to improve patient safety and outcomes. A tool was developed for objective evaluation of overall compliance (maximum score 40) as an aggregate score of completion and accuracy (20 each). Random samples of SSCs were analyzed at specific, predefined time points throughout the first year after implementation. Procedure start time, operative time, and case complexity were assessed to determine their association with compliance. A total of 671 SSCs were analyzed. The participation rate improved from 33% (95 of 285) at week 1 to 94% (249 of 265) at 1 year (p < 0.0001, chi-square test). The mean overall compliance score was 27.7 (± 5.4 SD) of 40 possible points (69.3% ± 13.5% of the total possible score; n = 671) and did not change over time. Although completion scores were high (16.9 ± 2.7 out of 20 [84.5% ± 13.6%]), accuracy was poor (10.8 ± 3.4 out of 20 [54.1% ± 16.9%]). The overall compliance score was significantly associated with case start time (p < 0.05), whereas operative time and case complexity showed no association. Our data indicate that although implementation of an SSC results in a high level of overall participation and completion, accuracy remained poor. Identification of barriers to effective use is needed, as improper checklist use can adversely affect patient safety. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Transition and mixing in axisymmetric jets and vortex rings

    NASA Technical Reports Server (NTRS)

    Allen, G. A., Jr.; Cantwell, B. J.

    1986-01-01

    A class of impulsively started, axisymmetric, laminar jets produced by a time-dependent point source of momentum is considered. These jets are different flows, each initially at rest in an unbounded fluid. The study is conducted at three levels of detail. First, a generalized set of analytic creeping-flow solutions is derived, together with a method of flow classification. Second, from this set, three specific creeping-flow solutions are studied in detail: the vortex ring, the round jet, and the ramp jet. This study involves derivation of vorticity, stream function, and entrainment diagrams, and the evolution of time lines through computer animation. From the entrainment diagrams, critical points are derived and analyzed. The flow geometry is dictated by the properties and location of the critical points, which undergo bifurcation and topological transformation (a form of transition) with changing Reynolds number. Transition Reynolds numbers were calculated. A state-space trajectory was derived describing the topological behavior of these critical points. This state-space derivation yielded three states of motion which are universal for all axisymmetric jets. Third, the axisymmetric round jet is solved numerically using the unsteady laminar Navier-Stokes equations. These equations were shown to be self-similar for the round jet. Numerical calculations were performed up to a Reynolds number of 30 on a 60x60-point mesh. Animations generated from the numerical solution showed each of the three states of motion for the round jet, including the Re = 30 case.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rounaghi, S.A., E-mail: s.a.rounaghi@gmail.com; Kiani Rashid, A.R.; Eshghi, H., E-mail: heshghi@ferdowsi.um.ac.ir

    Decomposition of melamine was studied by the solid-state reaction of melamine and aluminum powders during high-energy ball milling. The milling procedure was performed on both pure melamine and melamine/Al mixed powders as starting materials, for various times up to 48 h under ambient atmosphere. The products were characterized by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). The results revealed that Al causes melamine deammoniation in the first stages of milling, and that further milling leads to degradation of the s-triazine ring, with nano-crystalline hexagonal aluminum nitride (h-AlN) as the main solid product. In comparison to the milling process, the possibility of the reaction of melamine with Al was also investigated by thermal treatment using a differential scanning calorimeter (DSC) and a thermogravimetric analyzer (TGA). Melamine decomposition occurred on thermal treatment in the range 270-370 °C, but no reaction between melamine and aluminum was observed. - Graphical Abstract: Mechanochemical reaction of melamine with Al resulted in the formation of nanocrystalline AlN after 7 h of milling. Highlights: • High-energy ball milling of melamine and aluminum results in decomposition of melamine with elimination of ammonia. • Nano-crystalline AlN was synthesized by the mechanochemical route. • The milling process has no conspicuous effect on pure melamine degradation. • No reaction takes place on heating a melamine and aluminum powder mixture in argon.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.

    Here, we solve a simple theoretical model of time-evolving fission chains, due to Feynman, that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for the correlated moments of the time-evolving chain populations. The equations for random-time-gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random-time-gate and triggered-time-gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.
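
    A minimal Monte Carlo realization of such a branching fission chain, in which each neutron either leaks or induces a fission that emits a random number of new neutrons, can be sketched as follows. All parameters (fission probability, multiplicity distribution) are illustrative placeholders, not values from the report:

```python
import random

def simulate_chain(p_fission=0.3, nu_max=4, max_steps=10_000, rng=None):
    """Follow one chain started by a single neutron.

    Each neutron either leaks (probability 1 - p_fission) or induces a
    fission emitting 0..nu_max new neutrons (uniform multiplicity, purely
    illustrative).  Returns the accumulated numbers of fissions and leaked
    neutrons for the chain, two of the populations tracked by the theory.
    """
    rng = rng or random.Random(0)
    population, fissions, leaks = 1, 0, 0
    for _ in range(max_steps):
        if population == 0:          # chain has died out
            break
        population -= 1              # process one neutron from the chain
        if rng.random() < p_fission:
            fissions += 1
            population += rng.randint(0, nu_max)
        else:
            leaks += 1
    return fissions, leaks
```

    Running many independent chains and histogramming the leak counts per (random or triggered) time gate is, in essence, how such a code statistically realizes the counting distributions.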

  13. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  14. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  15. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  16. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  17. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  18. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  19. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  20. Progress in using real-time GPS for seismic monitoring of the Cascadia megathrust

    NASA Astrophysics Data System (ADS)

    Szeliga, W. M.; Melbourne, T. I.; Santillan, V. M.; Scrivner, C.; Webb, F.

    2014-12-01

    We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman-filter-based, on-line stream editor. Positions are estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built streaming software. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a RESTful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, vector displacement, and contoured peak ground displacement. We have also implemented continuous estimation of finite fault slip along the Cascadia megathrust using an NIF approach. The resulting continuous slip distributions are combined with pre-computed tsunami Green's functions to generate real-time tsunami run-up estimates for the entire Cascadia coastal margin. This Java-based front-end is available for download through the PANGA website. We currently analyze 80 PBO and PANGA stations along the Cascadia margin and are gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim.
In addition, we are developing methodologies to combine our real-time solutions with Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.

  1. Induction of osteoporosis with its influence on osteoporotic determinants and their interrelationships in rats by DEXA.

    PubMed

    Heiss, Christian; Govindarajan, Parameswari; Schlewitz, Gudrun; Hemdan, Nasr Y A; Schliefke, Nathalie; Alt, Volker; Thormann, Ulrich; Lips, Katrin Susanne; Wenisch, Sabine; Langheinrich, Alexander C; Zahner, Daniel; Schnettler, Reinhard

    2012-06-01

    As women are the population most affected by multifactorial osteoporosis, research is focused on unraveling the underlying mechanism of osteoporosis induction in rats by combining ovariectomy (OVX) either with a diet deficient in calcium, phosphorus, vitamin C and vitamin D2/D3, or with administration of a glucocorticoid (dexamethasone). Different skeletal sites of sham, OVX-Diet and OVX-Steroid rats were analyzed by Dual Energy X-ray Absorptiometry (DEXA) at time points of 0, 4 and 12 weeks to determine and compare osteoporotic factors such as bone mineral density (BMD), bone mineral content (BMC), area, body weight and percent fat among the different groups and time points. Comparative analyses and interrelationships among osteoporotic determinants were also assessed by regression analysis. T scores were below -2.5 in OVX-Diet rats at 4 and 12 weeks post-OVX. OVX-Diet rats revealed a more pronounced osteoporotic status, with lower BMD and BMC than their steroid counterparts, with the spine and pelvis as the most affected skeletal sites. An increase in percent fat was observed irrespective of the osteoporosis inducer applied. Comparative analysis and interrelationships between osteoporotic determinants, which are rarely studied in animals, indicate the necessity of analyzing BMC and area along with BMD to obtain meaningful information leading to proper prediction of the probability of osteoporotic fractures. The enhanced osteoporotic effect observed in OVX-Diet rats indicates that estrogen dysregulation combined with the diet treatment induces and enhances osteoporosis with time when compared to the steroid group. Comparative and regression analysis indicates the need to determine BMC along with BMD and area in osteoporotic determination.

  2. The function of the chemokine receptor CXCR6 in the T cell response of mice against Listeria monocytogenes.

    PubMed

    Heesch, Kira; Raczkowski, Friederike; Schumacher, Valéa; Hünemörder, Stefanie; Panzer, Ulf; Mittrücker, Hans-Willi

    2014-01-01

    The chemokine receptor CXCR6 is expressed on different T cell subsets and up-regulated following T cell activation. CXCR6 has been implicated in the localization of cells to the liver due to the constitutive expression of its ligand CXCL16 on liver sinusoidal endothelial cells. Here, we analyzed the role of CXCR6 in CD8+ T cell responses to infection of mice with Listeria monocytogenes. CD8+ T cells responding to listerial antigens acquired high expression levels of CXCR6. However, CXCR6 deficiency in mice did not impair control of the L. monocytogenes infection. CXCR6-deficient mice were able to generate listeria-specific CD4+ and CD8+ T cell responses and showed accumulation of T cells in the infected liver. In transfer assays, we detected reduced accumulation of listeria-specific CXCR6-deficient CD8+ T cells in the liver at early time points post infection. However, CXCR6 was dispensable at later time points of the CD8+ T cell response. When transferred CD8+ T cells were followed for extended time periods, we observed a decline in CXCR6-deficient CD8+ T cells. The manifestation of this cell loss depended on the tissue analyzed. In conclusion, our results demonstrate that CXCR6 is not required for the formation of a T cell response to L. monocytogenes or for the accumulation of T cells in the infected liver, but CXCR6 appears to influence the long-term survival and tissue distribution of activated cells.

  3. The Function of the Chemokine Receptor CXCR6 in the T Cell Response of Mice against Listeria monocytogenes

    PubMed Central

    Heesch, Kira; Raczkowski, Friederike; Schumacher, Valéa; Hünemörder, Stefanie; Panzer, Ulf; Mittrücker, Hans-Willi

    2014-01-01

    The chemokine receptor CXCR6 is expressed on different T cell subsets and up-regulated following T cell activation. CXCR6 has been implicated in the localization of cells to the liver due to the constitutive expression of its ligand CXCL16 on liver sinusoidal endothelial cells. Here, we analyzed the role of CXCR6 in CD8+ T cell responses to infection of mice with Listeria monocytogenes. CD8+ T cells responding to listerial antigens acquired high expression levels of CXCR6. However, CXCR6 deficiency in mice did not impair control of the L. monocytogenes infection. CXCR6-deficient mice were able to generate listeria-specific CD4+ and CD8+ T cell responses and showed accumulation of T cells in the infected liver. In transfer assays, we detected reduced accumulation of listeria-specific CXCR6-deficient CD8+ T cells in the liver at early time points post infection. However, CXCR6 was dispensable at later time points of the CD8+ T cell response. When transferred CD8+ T cells were followed for extended time periods, we observed a decline in CXCR6-deficient CD8+ T cells. The manifestation of this cell loss depended on the tissue analyzed. In conclusion, our results demonstrate that CXCR6 is not required for the formation of a T cell response to L. monocytogenes or for the accumulation of T cells in the infected liver, but CXCR6 appears to influence the long-term survival and tissue distribution of activated cells. PMID:24832098

  4. Time trends in patient characteristics treated on acute stroke-units: results from the Austrian Stroke Unit Registry 2003-2011.

    PubMed

    Teuschl, Yvonne; Brainin, Michael; Matz, Karl; Dachenhausen, Alexandra; Ferrari, Julia; Seyfang, Leonhard; Lang, Wilfried

    2013-04-01

    Demographic changes, increased awareness of vascular risk factors, better diagnostics, progress in medical care, and increasing primary stroke prevention influence the profile of patients admitted to stroke units. Changes in patient population and stroke type have important consequences for outcome and management at stroke units. Data from the national database of the Austrian Stroke Unit Registry were analyzed for time trends in demography, risk factors, cause, and stroke severity. Data of 48 038 ischemic and 5088 hemorrhagic strokes were analyzed. Between 2003 and 2011, median age increased significantly for ischemic strokes, from 68 to 71 years in men and from 76 to 78 years in women. Ischemic stroke patients showed significantly increased rates of hypertension, hypercholesterolemia, and atrial fibrillation. In hemorrhagic strokes, increases in hypercholesterolemia and in cardiac diseases other than atrial fibrillation and myocardial infarction were found only in men. A small but significant decrease in stroke severity was found for ischemic strokes, from 4 to 3 points on the National Institutes of Health Stroke Scale in men and from 5 to 4 in women, and for hemorrhagic strokes, from 9 to 6 points in men and from 9 to 7 in women. Cardioembolic strokes increased slightly, whereas macroangiopathy decreased. Significant time trends were seen in the characteristics of ischemic and hemorrhagic stroke patients admitted to acute stroke units in Austria. These include trends toward older age and milder strokes with more cardioembolic causes. This signals a need for increased resources for managing multimorbidity and enabling early mobilization.

  5. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief algorithms for generating holograms for holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
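    For readers unfamiliar with the point-based Fresnel zone plate (PB-FZP) idea, a minimal sketch follows; the grid size, pixel pitch, wavelength and object points are arbitrary assumptions, not the authors' parameters. Each object point adds a quadratic-phase zone plate exp(iπr²/(λz)) to the hologram plane, and keeping only the phase of the summed field yields a phase-only hologram:

    ```python
    import math, cmath

    def pb_fzp_hologram(points, nx, ny, pitch, wavelength, z):
        """Sum Fresnel zone plate contributions exp(i*pi*r^2/(lambda*z)) from
        each object point (x0, y0, amplitude), then keep only the phase."""
        field = [[0j] * nx for _ in range(ny)]
        for x0, y0, amp in points:
            for j in range(ny):
                y = (j - ny / 2) * pitch
                for i in range(nx):
                    x = (i - nx / 2) * pitch
                    r2 = (x - x0) ** 2 + (y - y0) ** 2
                    field[j][i] += amp * cmath.exp(1j * math.pi * r2
                                                   / (wavelength * z))
        return [[cmath.phase(v) for v in row] for row in field]

    # two hypothetical object points at depth z = 0.1 m, 8 um pixel pitch
    holo = pb_fzp_hologram([(0.0, 0.0, 1.0), (1e-4, 0.0, 0.5)],
                           nx=64, ny=64, pitch=8e-6, wavelength=633e-9, z=0.1)
    ```

    This uses the paraxial (Fresnel) approximation of a spherical wave; the slice-based method would instead propagate whole depth slices with a Fresnel diffraction transform.
    
    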

  6. How does School Experience Relate to Adolescent Identity Formation Over Time? Cross-Lagged Associations between School Engagement, School Burnout and Identity Processing Styles.

    PubMed

    Erentaitė, Rasa; Vosylis, Rimantas; Gabrialavičiūtė, Ingrida; Raižienė, Saulė

    2018-04-01

    The existing research findings still do not provide a clear understanding of the links between adolescent school experience and their identity formation. To address this gap, we analyzed the dynamic links between adolescent school experiences and identity formation by exploring the cross-lagged associations between school engagement, school burnout and identity processing styles (information-oriented, normative and diffuse-avoidant) over a 2-year period during middle-to-late adolescence. The sample of this school-based study included 916 adolescents (51.4% females) in the 9th to 12th grades from diverse socio-economic and family backgrounds. The results from the cross-lagged analyses with three time points revealed that (a) school engagement positively predicted information-oriented identity processing over a 2-year period; (b) school burnout positively predicted the reliance on normative and diffuse-avoidant identity styles across the three measurements; (c) the effects were stable over the three time points and across different gender, grade, and socio-economic status groups. The unidirectional effects identified in our study support the general prediction that active engagement in learning at school can serve as a resource for adolescent identity formation, while school burnout, in contrast, can hinder the formation of adolescent identity. This points to the importance of taking developmental identity-related needs of adolescents into account when planning the school curriculum.

  7. Etiology of the stability of reading difficulties: the longitudinal twin study of reading disabilities.

    PubMed

    Astrom, Raven L; Wadsworth, Sally J; DeFries, John C

    2007-06-01

    Results obtained from previous longitudinal studies of reading difficulties indicate that reading deficits are generally stable. However, little is known about the etiology of this stability. Thus, the primary objective of this first longitudinal twin study of reading difficulties is to provide an initial assessment of genetic and environmental influences on the stability of reading deficits. Data were analyzed from a sample of 56 twin pairs, 18 identical (monozygotic, MZ) and 38 fraternal (dizygotic, DZ), in which at least one member of each pair was classified as reading-disabled in the Colorado Learning Disabilities Research Center, and on whom follow-up data were available. The twins were tested at two time points (average age of 10.3 years at initial assessment and 16.1 years at follow-up). A composite measure of reading performance (PIAT Reading Recognition, Reading Comprehension and Spelling) was highly stable, with a stability correlation of .84. Data from the initial time point were first subjected to univariate DeFries-Fulker multiple regression analysis, and the resulting estimate of the heritability of the group deficit (h²g) was .84 (±.26). When the initial and follow-up data were then fitted to a bivariate extension of the basic DF model, bivariate heritability was estimated at .65, indicating that common genetic influences account for approximately 75% of the stability between reading measures at the two time points.
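    The basic DeFries-Fulker (DF) model regresses the cotwin's score C on the proband's score P and the coefficient of relationship R (1.0 for MZ, 0.5 for DZ); with appropriately transformed scores, the partial coefficient on R estimates the group heritability h²g. A schematic ordinary-least-squares fit, using synthetic data and omitting the score transformation, might look like:

    ```python
    def ols3(rows):
        """OLS for C = b0 + b1*P + b2*R given rows of (P, R, C): build the
        normal equations X'X b = X'C and solve by Gauss-Jordan elimination."""
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for P, R, C in rows:
            x = (1.0, P, R)
            for i in range(3):
                b[i] += x[i] * C
                for j in range(3):
                    A[i][j] += x[i] * x[j]
        for k in range(3):              # normalize pivot row, eliminate others
            piv = A[k][k]
            A[k] = [v / piv for v in A[k]]
            b[k] /= piv
            for i in range(3):
                if i != k:
                    f = A[i][k]
                    A[i] = [vi - f * vk for vi, vk in zip(A[i], A[k])]
                    b[i] -= f * b[k]
        return b  # [b0, b1, b2]; b2 plays the role of the h2g estimate
    ```

    In the real DF analysis, scores are first expressed as deviations from the population mean and scaled by the proband mean deviation; that transformation is deliberately left out of this sketch.
    
    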

  8. Kinematic characteristics of tenodesis grasp in C6 quadriplegia.

    PubMed

    Mateo, S; Revol, P; Fourtassi, M; Rossetti, Y; Collet, C; Rode, G

    2013-02-01

    Descriptive control case study. To analyze the kinematics of tenodesis grasp in participants with C6 quadriplegia and healthy control participants in a pointing task and two daily life tasks involving a whole-hand grip (apple) or a lateral grip (floppy disk). France. Four participants with complete C6 quadriplegia were age-matched with four healthy control participants. All participants were right-handed. The measured kinematic parameters were the movement time (MT), the peak velocity (PV), the time of PV (TPV) and the wrist angle in the sagittal plane at movement onset, at the TPV and at the movement end point. The participants with C6 quadriplegia had significantly longer MTs in both prehension tasks. No significant differences in TPV were found between the two groups. Unlike control participants, for both prehension tasks the wrist of participants with C6 quadriplegia was in a neutral position at movement onset, in flexion at the TPV, and in extension at the movement end point. Two main kinematic parameters characterize tenodesis grasp movements in C6 quadriplegia: wrist flexion during reaching and wrist extension during the grasping phase, and an increased MT reflecting the time required to adjust the wrist's position to achieve the tenodesis grasp. These characteristics were observed for two different grips (whole hand and lateral grip). These results suggest sequential planning of reaching and tenodesis grasp, and should be taken into account for prehension rehabilitation in patients with quadriplegia.

  9. Stochastically gated local and occupation times of a Brownian particle

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-01-01

    We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.
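    Scenario (ii) lends itself to a crude Monte Carlo check; the switching rates, diffusivity, time step and starting point below are placeholders, and the Euler-Maruyama discretization is only a sketch of the process described above:

    ```python
    import math, random

    def gated_occupation_time(T=10.0, dt=1e-3, alpha=1.0, beta=1.0, D=0.5,
                              x0=1.0, rng=None):
        """Time spent in x > 0 by a Brownian particle (diffusivity D) that can
        cross the origin only when a two-state Markov gate is open; a closed
        gate reflects attempted crossings. The gate switches open -> closed at
        rate beta and closed -> open at rate alpha."""
        rng = rng or random.Random(0)
        x, gate_open, occupation = x0, True, 0.0
        sigma = math.sqrt(2.0 * D * dt)
        for _ in range(int(T / dt)):
            # gate switching: a Markov jump with probability rate * dt per step
            if rng.random() < (beta if gate_open else alpha) * dt:
                gate_open = not gate_open
            x_new = x + rng.gauss(0.0, sigma)
            # a closed gate reflects any attempted crossing of the origin
            if not gate_open and (x > 0.0) != (x_new > 0.0):
                x_new = -x_new
            if x_new > 0.0:
                occupation += dt
            x = x_new
        return occupation
    ```

    Averaging this functional over many realizations of both the path and the gate would approximate the moments obtained analytically from the stochastic backward FPE.
    
    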

  10. The motion of throw away detectors relative to the space shuttle

    NASA Technical Reports Server (NTRS)

    Mullins, L. D.

    1975-01-01

    The motions of throw away detectors (TAD's) are analyzed using the linearized relative-motion equations. The TAD's are to be used in the AMPS program as diagnostic instruments for making various measurements near the shuttle. The TAD's are ejected from the shuttle in arbitrary directions with small relative velocities (0.1 to 1.0 m/s); their subsequent trajectories relative to the shuttle are analyzed. Initial conditions that are likely to result in recontact between the TAD and the shuttle are identified. The sensitivity of the motion to variations in the initial conditions, possibly resulting from inaccuracy in the ejection mechanism, is analyzed, as are the effects of atmospheric drag. A targeting method, a method of giving the TAD correct initial conditions such that it will pass through a given point relative to the shuttle at a given time, is developed. The results of many specific cases are presented in graphical form.

  11. Methyl-CpG island-associated genome signature tags

    DOEpatents

    Dunn, John J

    2014-05-20

    Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.

  12. Identification of the Key Fields and Their Key Technical Points of Oncology by Patent Analysis

    PubMed Central

    Zhang, Ting; Chen, Juan; Jia, Xiaofeng

    2015-01-01

    Background This paper aims to identify the key fields and their key technical points of oncology by patent analysis. Methodology/Principal Findings Patents of oncology applied for from 2006 to 2012 were searched in the Thomson Innovation database. The key fields and their key technical points were determined by analyzing the Derwent Classification (DC) and the International Patent Classification (IPC), respectively. Patent applications in the top ten DCs accounted for 80% of all patent applications of oncology, and these defined the ten fields of oncology to be analyzed. The number of patent applications in these ten fields was standardized based on the patent applications of oncology from 2006 to 2012. For each field, standardization was conducted separately for each of the seven years (2006–2012), and the mean of the seven standardized values was calculated to reflect the relative amount of patent applications in that field; meanwhile, regression analysis using time (year) and the standardized values of patent applications over the seven years (2006–2012) was conducted to evaluate the trend of patent applications in each field. Two-dimensional quadrant analysis, together with professional knowledge of oncology, was taken into consideration in determining the key fields of oncology. The fields located in the quadrants with a high relative amount or an increasing trend of patent applications were identified as key ones. Using the same method, the key technical points in each key field were identified. Altogether 116,820 patents of oncology applied for from 2006 to 2012 were retrieved, and four key fields with twenty-nine key technical points were identified: "natural products and polymers" with nine key technical points, "fermentation industry" with twelve, "electrical medical equipment" with four, and "diagnosis, surgery" with four. 
Conclusions/Significance The results of this study could provide guidance on the development direction of oncology, and also help researchers broaden innovative ideas and discover new technological opportunities. PMID:26599967

  13. Identification of the Key Fields and Their Key Technical Points of Oncology by Patent Analysis.

    PubMed

    Zhang, Ting; Chen, Juan; Jia, Xiaofeng

    2015-01-01

    This paper aims to identify the key fields and their key technical points of oncology by patent analysis. Patents of oncology applied for from 2006 to 2012 were searched in the Thomson Innovation database. The key fields and their key technical points were determined by analyzing the Derwent Classification (DC) and the International Patent Classification (IPC), respectively. Patent applications in the top ten DCs accounted for 80% of all patent applications of oncology, and these defined the ten fields of oncology to be analyzed. The number of patent applications in these ten fields was standardized based on the patent applications of oncology from 2006 to 2012. For each field, standardization was conducted separately for each of the seven years (2006-2012), and the mean of the seven standardized values was calculated to reflect the relative amount of patent applications in that field; meanwhile, regression analysis using time (year) and the standardized values of patent applications over the seven years (2006-2012) was conducted to evaluate the trend of patent applications in each field. Two-dimensional quadrant analysis, together with professional knowledge of oncology, was taken into consideration in determining the key fields of oncology. The fields located in the quadrants with a high relative amount or an increasing trend of patent applications were identified as key ones. Using the same method, the key technical points in each key field were identified. Altogether 116,820 patents of oncology applied for from 2006 to 2012 were retrieved, and four key fields with twenty-nine key technical points were identified: "natural products and polymers" with nine key technical points, "fermentation industry" with twelve, "electrical medical equipment" with four, and "diagnosis, surgery" with four. 
The results of this study could provide guidance on the development direction of oncology, and also help researchers broaden innovative ideas and discover new technological opportunities.
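    The standardization-and-trend procedure described in this record can be sketched as follows; the counts are hypothetical, not the study's data. Each year's field count is divided by that year's total, the seven shares are averaged, and a least-squares slope against year estimates the trend:

    ```python
    def field_profile(field_counts, total_counts):
        """field_counts, total_counts: dicts mapping year -> patent count.
        Returns (mean standardized share, least-squares trend slope)."""
        years = sorted(field_counts)
        shares = [field_counts[y] / total_counts[y] for y in years]
        n = len(years)
        mean_share = sum(shares) / n
        xbar = sum(years) / n
        # ordinary least-squares slope of share against year
        slope = (sum((y - xbar) * (s - mean_share)
                     for y, s in zip(years, shares))
                 / sum((y - xbar) ** 2 for y in years))
        return mean_share, slope

    # hypothetical field growing from 10% to 16% of all oncology patents
    totals = {y: 10000 for y in range(2006, 2013)}
    field = {y: 1000 + 100 * (y - 2006) for y in range(2006, 2013)}
    mean_share, slope = field_profile(field, totals)  # slope > 0: rising field
    ```

    A field would then be plotted at (mean_share, slope) for the two-dimensional quadrant analysis: high share or positive slope flags a key field.
    
    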

  14. Geometry in a dynamical system without space: Hyperbolic Geometry in Kuramoto Oscillator Systems

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Jan; Chen, Bolun; Mirollo, Renato

    Kuramoto oscillator networks have the special property that their time evolution is constrained to lie on 3D orbits of the Möbius group acting on the N-fold torus T^N, which explains the N - 3 constants of motion discovered by Watanabe and Strogatz. The dynamics for phase models can be further reduced to 2D invariant sets in T^(N-1) which have a natural geometry equivalent to the unit disk Δ with the hyperbolic metric. We show that the classic Kuramoto model with order parameter Z1 (the first moment of the oscillator configuration) is a gradient flow in this metric with a unique fixed point on each generic 2D invariant set, corresponding to the hyperbolic barycenter of an oscillator configuration. This gradient property makes the dynamics especially easy to analyze. We exhibit several new families of Kuramoto oscillator models which reduce to gradient flows in this metric; some of these have a richer fixed point structure including non-hyperbolic fixed points associated with fixed point bifurcations. Work supported by NSF DMS 1413020.
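    For the classic model with identical oscillators, the flow dθ_j/dt = K·Im(Z1·e^(-iθ_j)) drives |Z1| upward when K > 0, consistent with the gradient structure described above. A minimal Euler-stepping sketch (the coupling strength, step size and initial phases are arbitrary choices):

    ```python
    import cmath

    def order_parameter(theta):
        """First moment Z1 of the oscillator configuration."""
        return sum(cmath.exp(1j * t) for t in theta) / len(theta)

    def kuramoto_step(theta, K=1.0, dt=0.05):
        """One Euler step of dtheta_j/dt = K * Im(Z1 * exp(-1j * theta_j)),
        the classic Kuramoto model with identical natural frequencies."""
        Z1 = order_parameter(theta)
        return [t + dt * K * (Z1 * cmath.exp(-1j * t)).imag for t in theta]

    theta = [0.0, 1.0, 2.5, 4.0]
    r0 = abs(order_parameter(theta))
    for _ in range(500):
        theta = kuramoto_step(theta)
    r1 = abs(order_parameter(theta))  # grows toward synchrony for K > 0
    ```

    Monotone growth of |Z1| along trajectories is exactly what one expects of a gradient flow whose potential decreases with |Z1|.
    
    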

  15. Determination of velocity correction factors for real-time air velocity monitoring in underground mines.

    PubMed

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-12-01

    When air velocity sensors are installed in the mining industry for real-time airflow monitoring, a problem exists with how the air velocity monitored at a fixed location corresponds to the average air velocity, which is used together with the entry's cross-sectional area to determine the volume flow rate of air. Correction factors have been employed in practice to convert a measured centerline air velocity to the average air velocity. However, studies on recommended correction factors from sensor-measured air velocity to the average air velocity at cross sections are still lacking. Comprehensive airflow measurements were made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods: single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station, for both the centerline and the sensor location, were calculated and are discussed.
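    The conversion such a correction factor performs can be shown in a few lines; the sensor reading, traverse values and entry area below are made up for illustration:

    ```python
    def correction_factor(sensor_velocity, traverse_velocities):
        """k = (cross-sectional average from a traverse) / (fixed-point
        sensor reading); the average velocity is then k * sensor reading."""
        average = sum(traverse_velocities) / len(traverse_velocities)
        return average / sensor_velocity

    # hypothetical centerline reading of 2.0 m/s and a 5-point traverse
    k = correction_factor(2.0, [1.6, 1.8, 2.0, 1.8, 1.6])   # k = 0.88
    entry_area = 12.0                                        # m^2, hypothetical
    flow_rate = k * 2.0 * entry_area                         # volume flow, m^3/s
    ```

    Once k is established for a station, the real-time sensor reading alone yields the volume flow rate Q = k · v_sensor · A without repeating the traverse.
    
    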

  16. Detector-device-independent quantum key distribution: Security analysis and fast implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boaron, Alberto; Korzh, Boris; Houlmann, Raphael

    One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. However, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.

  17. A COMPACTRIO-BASED BEAM LOSS MONITOR FOR THE SNS RF TEST CAVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blokland, Willem; Armstrong, Gary A

    2009-01-01

    An RF Test Cave has been built at the Spallation Neutron Source (SNS) to allow testing of RF cavities without interfering with SNS accelerator operations. In addition to using thick concrete walls to minimize radiation exposure, a Beam Loss Monitor (BLM) must abort operation within 100 μs when the integrated radiation within the cave exceeds a threshold. We chose the CompactRIO platform to implement the BLM based on its performance, cost-effectiveness, and rapid development. Each input/output module is connected through an FPGA to provide point-by-point processing. Every 10 μs the data is acquired, analyzed, and compared to the threshold. Data from the FPGA is transferred using DMA to the real-time controller, which communicates with a gateway PC to talk to the SNS control system. The system includes diagnostics to test the hardware and integrates the losses in real time. In this paper we describe our design, implementation, and results.

  18. A multiple imputation strategy for sequential multiple assignment randomized trials

    PubMed Central

    Shortreed, Susan M.; Laber, Eric; Stroup, T. Scott; Pineau, Joelle; Murphy, Susan A.

    2014-01-01

    Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient’s health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges, and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), one of the most well-known SMARTs to date. PMID:24919867

  19. Detector-device-independent quantum key distribution: Security analysis and fast implementation

    DOE PAGES

    Boaron, Alberto; Korzh, Boris; Houlmann, Raphael; ...

    2016-08-09

    One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. But, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.

  20. EMG parameters and EEG α Index change at fatigue period during different types of muscle contraction

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zhou, Bin; Song, Gaoqing

    2010-10-01

    The purpose of this study was to measure and analyze changes in EMG and EEG parameters during the muscle fatigue period in participants with different exercise capacities. Twenty participants took part in the tests. They were divided into two groups: Group A (regular exercisers) and Group B (seldom exercisers). MVC dynamic and 1/3 isometric exercises were performed, and EMG and EEG signals were recorded synchronously during the different types of muscle contraction. Results indicated that values of MVC, RMS, and IEMG in Group A were greater than those in Group B, but the isometric exercise time was shorter than the dynamic exercise time even though its intensity was lighter. The turning points of IEMG and the α index occurred synchronously during sustained isometric or dynamic muscle contraction. It is concluded that the IEMG turning point may serve as an indicator of muscle fatigue, and that the synchronization of EEG and EMG reflects common characteristics of their bioelectric changes.
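    As a concrete illustration of the two EMG amplitude parameters named above, the following Python sketch computes RMS and IEMG from a sampled trace (the synthetic signal and function name are hypothetical, not from the study):

```python
import numpy as np

def emg_features(signal, fs):
    """Root-mean-square amplitude and integrated EMG (IEMG) of a raw EMG
    trace sampled at fs Hz. IEMG is the time integral of the rectified
    signal; RMS is its root-mean-square amplitude."""
    x = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    iemg = np.sum(np.abs(x)) / fs      # rectify, then integrate over time
    return rms, iemg

# Example on a 1 s, 1 kHz synthetic zero-mean burst with 0.5 mV spread
rng = np.random.default_rng(0)
rms, iemg = emg_features(rng.normal(0.0, 0.5, 1000), fs=1000)
```

    A turning point in IEMG, as the abstract notes, would appear as a change in the trend of this integral computed over successive windows.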

  1. EMG parameters and EEG α Index change at fatigue period during different types of muscle contraction

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zhou, Bin; Song, Gaoqing

    2011-03-01

    The purpose of this study was to measure and analyze changes in EMG and EEG parameters during the muscle fatigue period in participants with different exercise capacities. Twenty participants took part in the tests. They were divided into two groups: Group A (regular exercisers) and Group B (seldom exercisers). MVC dynamic and 1/3 isometric exercises were performed, and EMG and EEG signals were recorded synchronously during the different types of muscle contraction. Results indicated that values of MVC, RMS, and IEMG in Group A were greater than those in Group B, but the isometric exercise time was shorter than the dynamic exercise time even though its intensity was lighter. The turning points of IEMG and the α index occurred synchronously during sustained isometric or dynamic muscle contraction. It is concluded that the IEMG turning point may serve as an indicator of muscle fatigue, and that the synchronization of EEG and EMG reflects common characteristics of their bioelectric changes.

  2. Universality of phase transition dynamics: topological defects from symmetry breaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zurek, Wojciech H.; Del Campo, Adolfo

    In the course of a non-equilibrium continuous phase transition, the dynamics ceases to be adiabatic in the vicinity of the critical point as a result of the critical slowing down (the divergence of the relaxation time in the neighborhood of the critical point). This enforces a local choice of the broken symmetry and can lead to the formation of topological defects. The Kibble-Zurek mechanism (KZM) was developed to describe the associated nonequilibrium dynamics and to estimate the density of defects as a function of the quench rate through the transition. During recent years, several new experiments investigating formation of defects in phase transitions induced by a quench both in classical and quantum mechanical systems were carried out. At the same time, some established results were called into question. We review and analyze the Kibble-Zurek mechanism focusing in particular on this surge of activity, and suggest possible directions for further progress.
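    The KZM defect-density estimate reviewed here can be stated compactly. The expressions below are the standard textbook form (ν and z are the equilibrium correlation-length and dynamical critical exponents), not a result specific to this review:

```latex
% Freeze-out correlation length for a linear quench of timescale \tau_Q:
\hat{\xi} \;\sim\; \xi_0 \left(\tau_Q/\tau_0\right)^{\nu/(1+z\nu)}
% Resulting defect density in d spatial dimensions, for defects of
% dimensionality D (points, lines, ...):
n \;\sim\; \hat{\xi}^{\,-(d-D)} \;\propto\; \tau_Q^{-(d-D)\nu/(1+z\nu)}
```

    The scaling of n with the quench rate 1/τ_Q is the quantity the quench experiments mentioned above are designed to test.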

  3. Innovative Near Real-Time Data Dissemination Tools Developed by the Space Weather Research Center

    NASA Astrophysics Data System (ADS)

    Mullinix, R.; Maddox, M. M.; Berrios, D.; Kuznetsova, M.; Pulkkinen, A.; Rastaetter, L.; Zheng, Y.

    2012-12-01

    Space weather affects virtually all of NASA's endeavors, from robotic missions to human exploration. Knowledge and prediction of space weather conditions are therefore essential to NASA operations. The diverse nature of currently available space environment measurements and modeling products compels the need for a single access point to such information. The Integrated Space Weather Analysis (iSWA) System provides this single point of access along with the capability to collect and catalog a vast range of sources including both observational and model data. NASA Goddard Space Weather Research Center heavily utilizes the iSWA System daily for research, space weather model validation, and forecasting for NASA missions. iSWA provides the capabilities to view and analyze near real-time space weather data from anywhere in the world. This presentation will describe the technology behind the iSWA system and describe how to use the system for space weather research, forecasting, training, education, and sharing.

  4. Determination of velocity correction factors for real-time air velocity monitoring in underground mines

    PubMed Central

    Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-01-01

    When air velocity sensors are installed in underground mines for real-time airflow monitoring, a question arises as to how well the air velocity monitored at a fixed location corresponds to the average air velocity, which together with the entry's cross-sectional area determines the volume flow rate of air. Correction factors have been used in practice to convert a measured centerline air velocity to the average air velocity. However, studies recommending correction factors for converting sensor-measured air velocities to average air velocities at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods: single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed. PMID:29201495
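    The arithmetic behind a velocity correction factor is simple: it is the ratio of the traverse-derived average velocity to the fixed-point sensor reading, and the corrected velocity times the entry area gives the volume flow rate. The numbers below are illustrative only, not values from the study:

```python
def velocity_correction_factor(avg_velocity, sensor_velocity):
    """Factor converting a fixed-point (e.g. centerline) air velocity
    reading to the cross-sectional average:
        V_avg = k * V_sensor  =>  k = V_avg / V_sensor."""
    return avg_velocity / sensor_velocity

def airflow_quantity(sensor_velocity, k, area):
    """Volume flow rate Q = (k * V_sensor) * A."""
    return k * sensor_velocity * area

# Example: a 2.0 m/s centerline reading, a 1.6 m/s traverse average,
# and a 10 m^2 entry cross section.
k = velocity_correction_factor(1.6, 2.0)   # 0.8
q = airflow_quantity(2.0, k, 10.0)         # 16.0 m^3/s
```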

  5. Electrokinetic Analysis of Cell Translocation in Low-Cost Microfluidic Cytometry for Tumor Cell Detection and Enumeration.

    PubMed

    Guo, Jinhong; Pui, Tze Sian; Ban, Yong-Ling; Rahman, Abdur Rub Abdur; Kang, Yuejun

    2013-12-01

    Conventional Coulter counters have been an important tool in biological cell assays for several decades. Recently, emerging portable Coulter counters have demonstrated their merits in point-of-care diagnostics, such as on-chip detection and enumeration of circulating tumor cells (CTCs). The working principle is based on the cell translocation time and the amplitude of the electrical current change that the cell induces. In this paper, we provide an analysis of a Coulter counter that evaluates the hydrodynamic and electrokinetic properties of polystyrene microparticles in a microfluidic channel. The hydrodynamic and electrokinetic forces are analyzed concurrently to determine the translocation time and the electrical current pulses induced by the particles. Finally, we characterize the chip performance for CTC detection. The experimental results validate the numerical analysis of the microfluidic chip. The presented model can provide critical insight and guidance for developing micro-Coulter counters for point-of-care prognosis.
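    The two observables named above, translocation time and pulse amplitude, can be extracted from a current trace by finding excursions below baseline. This is a generic sketch of that principle, not the authors' analysis code; the trace, threshold fraction, and function name are hypothetical:

```python
import numpy as np

def detect_pulses(current, fs, baseline=None, drop_frac=0.05):
    """Find resistive pulses in a Coulter-counter current trace.
    A particle in the sensing channel transiently lowers the current;
    each excursion below (1 - drop_frac) * baseline is one event.
    Returns (count, translocation_times_s, pulse_amplitudes)."""
    i = np.asarray(current, dtype=float)
    base = float(np.median(i)) if baseline is None else baseline
    below = i < (1.0 - drop_frac) * base
    # locate contiguous runs of suppressed current
    padded = np.concatenate(([False], below, [False]))
    d = np.diff(padded.astype(int))
    starts = np.flatnonzero(d == 1)
    ends = np.flatnonzero(d == -1)            # exclusive end of each run
    times = (ends - starts) / fs              # translocation time per event
    amps = np.array([base - i[s:e].min() for s, e in zip(starts, ends)])
    return len(starts), times, amps

# Synthetic 10 kHz trace with two 0.2-deep dips of 20 and 30 samples
fs = 10_000
trace = np.ones(1000)
trace[100:120] -= 0.2
trace[500:530] -= 0.2
n, times, amps = detect_pulses(trace, fs)
```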

  6. Education Gains Attributable to Fertility Decline: Patterns by Gender, Period, and Country in Latin America and Asia.

    PubMed

    Li, Jing; Dow, William H; Rosero-Bixby, Luis

    2017-08-01

    We investigate the heterogeneity across countries and time in the relationship between a mother's fertility and her children's educational attainment (the quantity-quality, or Q-Q, trade-off) by using census data from 17 countries in Asia and Latin America, with data from each country spanning multiple census years. For each country-year, we estimate micro-level instrumental variables models predicting secondary school attainment using the number of siblings of the child, instrumented by the sex composition of the first two births in the family. We then analyze correlates of Q-Q trade-off patterns across countries. On average, one additional sibling in the family reduces the probability of secondary education by 6 percentage points for girls and 4 percentage points for boys. This Q-Q trade-off is significantly associated with the level of son preference; it decreases slightly over time and with fertility, but it does not significantly differ by the educational level of the country.

  7. How Mathematics Describes Life

    NASA Astrophysics Data System (ADS)

    Teklu, Abraham

    2017-01-01

    The circle of life is something we have all heard of from somewhere, but we don't usually try to calculate it. For some time we have been working on analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is the Holling-Tanner model, which cannot be solved analytically. The Holling-Tanner model is very common in population dynamics because it is a simple descriptor of how predators and prey interact. It is a system of two differential equations. The model is not specific to any particular pair of species, so it can describe predator-prey systems ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common is critical points. A critical point is a pair of population values that keeps both populations constant; it is important because at this point the differential equations are equal to zero. This model has two critical points: a predator-free critical point and a coexistence critical point. Most of our analysis concerns the coexistence critical point, because the predator-free critical point is always unstable and frankly less interesting. We considered two regimes for the differential equations, large B and small B, where A, B, and C are parameters that control the system: B measures how responsive the predators are to changes in the population, A represents predation of the prey, and C represents the satiation point of the prey population. For the large-B case we were able to approximate the system of differential equations by a single scalar equation. For the small-B case we were able to predict the limit cycle, in which the predator and prey populations grow and shrink periodically. 
    We solved for this limit cycle numerically in the small-B regime, and with some simplifying assumptions on the differential equations we constructed a system of equations and unknowns that predicts the behavior of the limit cycle for small B.
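    The abstract names the parameters but not the exact equations, so the sketch below uses one common dimensionless form of the Holling-Tanner model consistent with the description (A in the predation term, C as the half-saturation constant, B in the predator equation); the parameter values are illustrative assumptions. Setting the predator equation to zero gives y = x at coexistence, and substituting into the prey equation yields a quadratic for the critical point:

```python
import math

def holling_tanner_rhs(x, y, A, B, C):
    """One common dimensionless Holling-Tanner form (an assumption here,
    chosen to match the abstract's description of A, B, C):
       prey:     dx/dt = x(1 - x) - A x y / (x + C)
       predator: dy/dt = B y (1 - y / x)"""
    dx = x * (1.0 - x) - A * x * y / (x + C)
    dy = B * y * (1.0 - y / x)
    return dx, dy

def coexistence_point(A, C):
    """Coexistence critical point: dy/dt = 0 forces y = x, and then
    dx/dt = 0 reduces to x^2 + (A + C - 1)x - C = 0 (positive root)."""
    b = A + C - 1.0
    x = (-b + math.sqrt(b * b + 4.0 * C)) / 2.0
    return x, x

# Illustrative parameters: both derivatives vanish at the critical point.
A, B, C = 1.0, 0.2, 0.5
xs, ys = coexistence_point(A, C)
dx, dy = holling_tanner_rhs(xs, ys, A, B, C)
```

    With A = 1 and C = 0.5 the quadratic gives x* = y* = 0.5, and both derivatives evaluate to zero there, confirming it is a critical point of this form of the model.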

  8. Automating the Generation of the Cassini Tour Atlas Database

    NASA Technical Reports Server (NTRS)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2010-01-01

    The Tour Atlas is a large database of geometrical tables, plots, and graphics used by Cassini science planning engineers and scientists primarily for science observation planning. Over time, as the contents of the Tour Atlas grew, the amount of time it took to recreate the Tour Atlas similarly grew--to the point that it took one person a week of effort. When Cassini tour designers estimated that they were going to create approximately 30 candidate Extended Mission trajectories--which needed to be analyzed for science return in a short amount of time--it became a necessity to automate. We report on the automation methodology that reduced the amount of time it took one person to (re)generate a Tour Atlas from a week to, literally, one UNIX command.

  9. Force Trends and Pulsatility for Catheter Contact Identification in Intracardiac Electrograms during Arrhythmia Ablation.

    PubMed

    Rivas-Lalaleo, David; Muñoz-Romero, Sergio; Huerta, Mónica; Erazo-Rodas, Mayra; Sánchez-Muñoz, Juan José; Rojo-Álvarez, José Luis; García-Alberola, Arcadi

    2018-05-02

    The intracardiac electrical activation maps are commonly used as a guide in the ablation of cardiac arrhythmias. The use of catheters with force sensors has been proposed in order to know if the electrode is in contact with the tissue during the registration of intracardiac electrograms (EGM). Although threshold criteria on force signals are often used to determine the catheter contact, this may be a limited criterion due to the complexity of the heart dynamics and cardiac vorticity. The present paper is devoted to determining the criteria and force signal profiles that guarantee the contact of the electrode with the tissue. In this study, we analyzed 1391 force signals and their associated EGM recorded during 2 and 8 s, respectively, in 17 patients (82 ± 60 points per patient). We aimed to establish a contact pattern by first visually examining and classifying the signals, according to their likely-contact joint profile and following the suggestions from experts in the doubtful cases. First, we used Principal Component Analysis to scrutinize the force signal dynamics by analyzing the main eigen-directions, first globally and then grouped according to the certainty of their tissue-catheter contact. Second, we used two different linear classifiers (Fisher discriminant and support vector machines) to identify the most relevant components of the previous signal models. We obtained three main types of eigenvectors, namely, pulsatile relevant, non-pulsatile relevant, and irrelevant components. The classifiers reached a moderate to sufficient discrimination capacity (areas under the curve between 0.84 and 0.95 depending on the contact certainty and on the classifier), which allowed us to analyze the relevant properties in the force signals. We conclude that the catheter-tissue contact profiles in force recordings are complex and do not depend only on the signal intensity being above a threshold at a single time instant, but also on time pulsatility and trends. 
These findings pave the way towards a subsystem which can be included in current intracardiac navigation systems assisted by force contact sensors, and it can provide the clinician with an estimate of the reliability on the tissue-catheter contact in the point-by-point EGM acquisition procedure.
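    The Principal Component Analysis step described above, extracting dominant eigen-directions from a matrix of force recordings, can be sketched with a plain SVD. The synthetic "pulsatile" data and function name below are hypothetical, not the study's signals:

```python
import numpy as np

def pca_eigendirections(signals, k=3):
    """Dominant eigen-directions of a set of force recordings.
    `signals` is an (n_recordings, n_samples) array; recordings are
    mean-centered, and the top-k right singular vectors give the main
    temporal patterns along with the fraction of variance each explains."""
    X = signals - signals.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:k], explained[:k]

# 50 synthetic 2 s recordings at 100 Hz dominated by a ~1.2 Hz
# "pulsatile" waveform plus noise; PCA should recover that waveform
# as the first eigen-direction.
rng = np.random.default_rng(1)
t = np.arange(200) / 100.0
pulse = np.sin(2 * np.pi * 1.2 * t)
X = rng.normal(0.0, 0.5, (50, 1)) * pulse + rng.normal(0.0, 0.1, (50, 200))
comps, frac = pca_eigendirections(X, k=3)
```

    In the study's terms, eigenvectors like `comps[0]` that track the cardiac rhythm would be the "pulsatile relevant" components, while noise-like directions with small explained variance would be "irrelevant".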

  10. Force Trends and Pulsatility for Catheter Contact Identification in Intracardiac Electrograms during Arrhythmia Ablation

    PubMed Central

    Muñoz-Romero, Sergio; Erazo-Rodas, Mayra; Sánchez-Muñoz, Juan José; García-Alberola, Arcadi

    2018-01-01

    The intracardiac electrical activation maps are commonly used as a guide in the ablation of cardiac arrhythmias. The use of catheters with force sensors has been proposed in order to know if the electrode is in contact with the tissue during the registration of intracardiac electrograms (EGM). Although threshold criteria on force signals are often used to determine the catheter contact, this may be a limited criterion due to the complexity of the heart dynamics and cardiac vorticity. The present paper is devoted to determining the criteria and force signal profiles that guarantee the contact of the electrode with the tissue. In this study, we analyzed 1391 force signals and their associated EGM recorded during 2 and 8 s, respectively, in 17 patients (82 ± 60 points per patient). We aimed to establish a contact pattern by first visually examining and classifying the signals, according to their likely-contact joint profile and following the suggestions from experts in the doubtful cases. First, we used Principal Component Analysis to scrutinize the force signal dynamics by analyzing the main eigen-directions, first globally and then grouped according to the certainty of their tissue-catheter contact. Second, we used two different linear classifiers (Fisher discriminant and support vector machines) to identify the most relevant components of the previous signal models. We obtained three main types of eigenvectors, namely, pulsatile relevant, non-pulsatile relevant, and irrelevant components. The classifiers reached a moderate to sufficient discrimination capacity (areas under the curve between 0.84 and 0.95 depending on the contact certainty and on the classifier), which allowed us to analyze the relevant properties in the force signals. We conclude that the catheter-tissue contact profiles in force recordings are complex and do not depend only on the signal intensity being above a threshold at a single time instant, but also on time pulsatility and trends. 
These findings pave the way towards a subsystem which can be included in current intracardiac navigation systems assisted by force contact sensors, and it can provide the clinician with an estimate of the reliability on the tissue-catheter contact in the point-by-point EGM acquisition procedure. PMID:29724033

  11. Dynamics of a linear system coupled to a chain of light nonlinear oscillators analyzed through a continuous approximation

    NASA Astrophysics Data System (ADS)

    Charlemagne, S.; Ture Savadkoohi, A.; Lamarque, C.-H.

    2018-07-01

    The continuous approximation is used in this work to describe the dynamics of a nonlinear chain of light oscillators coupled to a linear main system. A general methodology is applied to an example where the chain has local nonlinear restoring forces. The slow invariant manifold is detected at fast time scale. At slow time scale, equilibrium and singular points are sought around this manifold in order to predict periodic regimes and strongly modulated responses of the system. Analytical predictions are in good accordance with numerical results and represent a potent tool for designing nonlinear chains for passive control purposes.

  12. Joint Launch + One Year Science Review of USML-1 and USMP-1 with the Microgravity Measurement Group

    NASA Technical Reports Server (NTRS)

    Ramachandran, N. (Editor); Frazier, Donald. O. (Editor); Lehoczky, Sandor L. (Editor); Baugher, Charles R. (Editor)

    1994-01-01

    This document summarizes the comprehensive results and highlights of the various investigations, and also serves as a combined mission report for the first United States Microgravity Laboratory (USML-1) and the first United States Microgravity Payload (USMP-1). USML-1 included 31 investigations in fluid dynamics, crystal growth, combustion, biotechnology, and technology demonstrations, supported by 11 facilities. On the USMP-1 mission, both the MEPHISTO and Lambda Point experiments exceeded their planned science objectives by over 100 percent. The mission was also the first time that acceleration data were down-linked and analyzed in real time.

  13. Multifractality and heteroscedastic dynamics: An application to time series analysis

    NASA Astrophysics Data System (ADS)

    Nascimento, C. M.; Júnior, H. B. N.; Jennings, H. D.; Serva, M.; Gleria, Iram; Viswanathan, G. M.

    2008-01-01

    An increasingly important problem in physics concerns scale invariance symmetry in diverse complex systems, often characterized by heteroscedastic dynamics. We investigate the nature of the relationship between the heteroscedastic and fractal aspects of the dynamics of complex systems, by analyzing the sensitivity to heteroscedasticity of the scaling properties of weakly nonstationary time series. By using multifractal detrended fluctuation analysis, we study the singularity spectra of currency exchange rate fluctuations, after partially or completely eliminating n-point correlations via data shuffling techniques. We conclude that heteroscedasticity can significantly increase multifractality and interpret these findings in the context of self-organizing and adaptive complex systems.
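    The shuffling technique mentioned above works because a random permutation preserves the value distribution while destroying temporal correlations. The toy demonstration below probes this with the simplest statistic, lag-1 autocorrelation, on a synthetic persistent series (the study itself uses multifractal DFA on n-point correlations; this only illustrates the surrogate idea):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, the simplest probe of temporal structure."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

# A persistent AR(1) series has strong lag-1 correlation; shuffling keeps
# the marginal distribution but destroys the ordering.
rng = np.random.default_rng(42)
x = np.zeros(5000)
for i in range(1, 5000):
    x[i] = 0.8 * x[i - 1] + rng.normal()
r_orig = lag1_autocorr(x)
r_shuf = lag1_autocorr(rng.permutation(x))
```

    Comparing a scaling property (here, just correlation) before and after shuffling isolates how much of it comes from temporal ordering rather than from the heavy-tailed or heteroscedastic value distribution.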

  14. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    PubMed Central

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    Purpose: The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials: Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results: Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions: Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT. PMID:24586971

  15. Using multivariate regression model with least absolute shrinkage and selection operator (LASSO) to predict the incidence of Xerostomia after intensity-modulated radiotherapy for head and neck cancer.

    PubMed

    Lee, Tsair-Fwu; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3(+) xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R(2), chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R(2) was satisfactory and corresponded well with the expected values. 
Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT.

  16. Progress on the CWU READI Analysis Center

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.

    2015-12-01

    Real-time GPS position streams are desirable for a variety of seismic monitoring and hazard mitigation applications. We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor that produces independent estimations of carrier phase integer biases and other parameters. Positions are then estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built aggregation-distribution software based on the RabbitMQ messaging platform. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a RESTful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, displacement vector fields, and map-view, contoured, peak ground displacement. This Java-based front-end is available for download through the PANGA website. We are currently analyzing 80 PBO and PANGA stations along the Cascadia margin and gearing up to process all 400+ real-time stations that are operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our overarching goal of extending our processing to include all of the available real-time streams from the Pacific rim. 
In addition, we have developed a Kalman filter to combine CWU real-time PPP solutions with those from Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
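    The heart of combining position streams from several analysis centers is a minimum-variance (inverse-variance-weighted) update, the static analogue of the Kalman filter mentioned above. The numbers below are illustrative offsets in meters, not actual CWU/SIO/USGS products:

```python
def combine_solutions(estimates):
    """Variance-weighted combination of independent position estimates.
    `estimates` is a list of (position_m, sigma_m) pairs; returns the
    minimum-variance combined position and its formal sigma."""
    wsum = sum(1.0 / s ** 2 for _, s in estimates)
    pos = sum(x / s ** 2 for x, s in estimates) / wsum
    sigma = (1.0 / wsum) ** 0.5
    return pos, sigma

# Example: two centers' east-component offsets with different precisions.
pos, sigma = combine_solutions([(0.024, 0.025), (0.030, 0.050)])
```

    Note that the combined sigma is always smaller than the best single input, which is the sense in which combination improves robustness and reliability.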

  17. Diagnostic value of ST-segment deviations during cardiac exercise stress testing: Systematic comparison of different ECG leads and time-points.

    PubMed

    Puelacher, Christian; Wagener, Max; Abächerli, Roger; Honegger, Ursina; Lhasam, Nundsin; Schaerli, Nicolas; Prêtre, Gil; Strebel, Ivo; Twerenbold, Raphael; Boeddinghaus, Jasper; Nestelberger, Thomas; Rubini Giménez, Maria; Hillinger, Petra; Wildi, Karin; Sabti, Zaid; Badertscher, Patrick; Cupa, Janosch; Kozhuharov, Nikola; du Fay de Lavallaz, Jeanne; Freese, Michael; Roux, Isabelle; Lohrmann, Jens; Leber, Remo; Osswald, Stefan; Wild, Damian; Zellweger, Michael J; Mueller, Christian; Reichlin, Tobias

    2017-07-01

    Exercise ECG stress testing is the most widely available method for evaluation of patients with suspected myocardial ischemia. Its major limitation is the relatively poor accuracy of ST-segment changes regarding ischemia detection. Little is known about the optimal method to assess ST-deviations. A total of 1558 consecutive patients undergoing bicycle exercise stress myocardial perfusion imaging (MPI) were enrolled. Presence of inducible myocardial ischemia was adjudicated using MPI results. The diagnostic value of ST-deviations for detection of exercise-induced myocardial ischemia was systematically analyzed 1) for each individual lead, 2) at three different intervals after the J-point (J+40 ms, J+60 ms, J+80 ms), and 3) at different time points during the test (baseline, maximal workload, 2 min into recovery). Exercise-induced ischemia was detected in 481 (31%) patients. The diagnostic accuracy of ST-deviations was highest at +80 ms after the J-point, and at 2 min into recovery. At this point, ST-amplitude showed an AUC of 0.63 (95% CI 0.59-0.66) for the best-performing lead I. The combination of ST-amplitude and ST-slope in lead I did not increase the AUC. Lead I reached a sensitivity of 37% and a specificity of 83%, with similar sensitivity to manual ECG analysis (34%, p=0.31) but lower specificity (90%, p<0.001). When using ECG stress testing for evaluation of patients with suspected myocardial ischemia, the diagnostic accuracy of ST-deviations is highest when evaluated at +80 ms after the J-point, and at 2 min into recovery. Copyright © 2017 Elsevier B.V. All rights reserved.
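    Measuring an ST-deviation at a fixed interval after the J-point is a simple indexing operation on a sampled ECG. The sketch below is a generic illustration of the J+80 ms measurement, not the study's analysis pipeline; the trace, sampling rate, and function name are hypothetical:

```python
import numpy as np

def st_amplitude(ecg, j_index, fs, offset_ms=80, baseline=0.0):
    """ST-segment amplitude measured `offset_ms` after the J-point,
    relative to the isoelectric baseline. The abstract reports J+80 ms,
    evaluated 2 min into recovery, as the most accurate choice."""
    idx = j_index + int(round(offset_ms * fs / 1000.0))
    return ecg[idx] - baseline

# Example: 500 Hz trace with a -0.12 mV level starting 40 samples
# (80 ms) past a J-point placed at sample 500.
ecg = np.zeros(1000)
ecg[540:] = -0.12
dev = st_amplitude(ecg, j_index=500, fs=500, offset_ms=80)
```

    Repeating this measurement per lead and per test phase (baseline, peak workload, recovery) reproduces the grid of comparisons the study evaluated.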

  18. The End of Points

    ERIC Educational Resources Information Center

    Feldman, Jo

    2018-01-01

    Have teachers become too dependent on points? This article explores educators' dependency on their points systems, and the ways that points can distract teachers from really analyzing students' capabilities and achievements. Feldman argues that using a more subjective grading system can help illuminate crucial information about students and what…

  19. Does lemon juice increase radioiodine reaccumulation within the parotid glands more than if lemon juice is not administered?

    PubMed

    Kulkarni, Kanchan; Van Nostrand, Douglas; Atkins, Francis; Mete, Mihriye; Wexler, Jason; Wartofsky, Leonard

    2014-02-01

    The protective effect of sialagogues following (131)I therapy became controversial after a study proposed that sialagogues increase the reaccumulation of (131)I in the parotid glands (PGs) to a level higher than when sialagogues are not administered (a 'rebound effect'). The present study examined PG radiopharmacokinetics within 2-4 h after radioiodine administration to evaluate whether sialagogues cause a 'rebound effect'. This prospective study was conducted at the Medstar Washington Hospital Center. The study patients had (i) differentiated thyroid cancer, (ii) no history of salivary gland disease or medications affecting the salivary glands, (iii) a clinical salivary scan (SS) with lemon juice (LJ) (SSwLJ) performed before (131)I therapy, and (iv) a second SS performed without LJ (SSwoLJ) prior to (131)I therapy, after giving informed consent. Each PG was assessed for (131)I uptake using time-activity curves (TACs) that were (i) corrected for background and decay, (ii) smoothed using a seven-point unweighted moving average, and (iii) normalized to the administered (131)I activity. TACs of the SSwLJ and SSwoLJ were compared with respect to activity at each time point over 120 min. Areas under the TACs for the PGs were calculated for each gland's SSwLJ and SSwoLJ, and the relative percentage change in potential radiation absorbed dose (PRAD) was calculated. A total of 2100 time points were analyzed in nine patients (18 PGs). (131)I activity in the PGs on the SSwLJ exceeded activity seen on the SSwoLJ at 134 time points (6.3%), and 98 (73%) of these were attributable to spontaneous salivation during the SSwoLJ. The mean percentage decrease in relative PRAD was 34.2±17.4% (range, 3.1-66.1%). During the time period studied, LJ administration did not result in a 'rebound effect' but rather in a mean relative decrease of 34.2% in PRAD to the PGs.
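    Steps (ii) and (iii) of the TAC processing above can be sketched in a few lines; the sample values are invented, and background/decay correction is assumed to have been applied already:

```python
def smooth7(samples):
    """Seven-point unweighted moving average; edges use the neighbors available."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - 3):i + 4]
        out.append(sum(window) / len(window))
    return out

def auc_trapezoid(times, counts):
    """Area under a time-activity curve by the trapezoidal rule."""
    return sum(0.5 * (counts[i] + counts[i - 1]) * (times[i] - times[i - 1])
               for i in range(1, len(times)))

# Toy TAC sampled every 10 min over 120 min; a constant curve is unchanged
# by smoothing, and its AUC is just activity x duration.
times = [10 * i for i in range(13)]
tac = [100.0] * 13
print(auc_trapezoid(times, smooth7(tac)))  # 12000.0
```

    The relative change in PRAD then follows from comparing the two AUCs gland by gland.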

  20. Dynamic miRNA-mRNA regulations are essential for maintaining Drosophila immune homeostasis during Micrococcus luteus infection.

    PubMed

    Wei, Guanyun; Sun, Lianjie; Li, Ruimin; Li, Lei; Xu, Jiao; Ma, Fei

    2018-04-01

    Pathogenic bacterial infections can lead to dynamic changes in microRNA (miRNA) and mRNA expression profiles, which may synergistically control the outcome of immune responses. To reveal the role of dynamic miRNA-mRNA regulation in Drosophila innate immune responses, we analyzed in detail the paired miRNA and mRNA expression profiles at three time points in Drosophila adult males infected with Micrococcus luteus (M. luteus), using RNA-seq and small RNA-seq data. Our results demonstrate that differentially expressed miRNAs and mRNAs undergo extensive dynamic changes across the three time points after M. luteus infection. Pathway enrichment analysis indicates that the differentially expressed genes are involved in diverse signaling pathways, including Toll and Imd as well as other signaling pathways, at all three time points. Remarkably, the dynamic change in miRNA expression is delayed relative to the change in mRNA expression over the three time points, implying that the "time" parameter should be considered when the function of a miRNA/mRNA is studied further. In particular, the dynamic miRNA-mRNA regulatory networks show that miRNAs may synergistically regulate the expression of genes in different signaling pathways to promote or inhibit innate immune responses and maintain homeostasis in Drosophila, and some new regulators involved in the Drosophila innate immune response have been identified. Our findings strongly suggest that miRNA regulation is a key mechanism for cooperatively fine-tuning the expression of genes in diverse signaling pathways to maintain the innate immune response and homeostasis in Drosophila. Taken together, the present study reveals a novel role of dynamic miRNA-mRNA regulation in the immune response to bacterial infection and provides new insight into the underlying molecular regulatory mechanisms of Drosophila innate immune responses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Factors Associated with Cognition in Adults: The Seattle Longitudinal Study

    PubMed Central

    Yu, Fang; Ryan, Lindsay H.; Schaie, K. Warner; Willis, Sherry L.; Kolanowski, Ann

    2010-01-01

    A better understanding of factors that affect cognition could lead to improved health and greater independence for older adults. We examined the association of four modifiable factors (leisure-time physical activity, leisure-time cognitive activity, self-directed work, and hypertension) with changes in two aspects of fluid intelligence (verbal memory and inductive reasoning). Data for 626 adults collected over 14 years (three time points) were analyzed by multi-level modeling. A component of self-directed work, higher work control, was associated with better verbal memory (p < .05) and inductive reasoning (p < .01). There were no significant interactions among these factors. The findings suggest that a strong sense of control at work may be protective for fluid intelligence in adults. PMID:19606423

  2. The University of Colorado OSO-8 spectrometer experiment. IV - Mission operations

    NASA Technical Reports Server (NTRS)

    Hansen, E. R.; Bruner, E. C., Jr.

    1979-01-01

    The remote operation of two high-resolution ultraviolet spectrometers on the OSO-8 satellite is discussed. Mission operations enabled scientific observers to plan observations based on current solar data, interact with the observing program using real- or near real-time data and commands, evaluate quick-look instrument data, and analyze the observations for publication. During routine operations, experiments were planned a day prior to their execution, and the data from these experiments were received a day later. When a shorter turnaround was required, a real-time mode was available. Here, the real-time data and command links into the remote control center were used to evaluate experiment operation and make satellite pointing or instrument configuration changes with a 1-90 minute turnaround.

  3. Stochastic modeling of neurobiological time series: Power, coherence, Granger causality, and separation of evoked responses from ongoing activity

    NASA Astrophysics Data System (ADS)

    Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou

    2006-06-01

    In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.
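    The Granger-causality component can be illustrated with a minimal two-series sketch (ordinary least squares, one lag, no intercept) rather than the adaptive multivariate autoregressive estimator used in the article; all series and coefficients below are synthetic:

```python
import random

random.seed(7)

# Simulate a driver series x and a response y that x Granger-causes:
# y depends on the past of x, but x does not depend on the past of y.
n = 5000
x = [0.0] * n
y = [0.0] * n
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + random.gauss(0, 1)
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 1)

def rss1(target, p1):
    """Residual sum of squares of an OLS fit on one predictor (no intercept)."""
    b = sum(a * t_ for a, t_ in zip(p1, target)) / sum(a * a for a in p1)
    return sum((t_ - b * a) ** 2 for t_, a in zip(target, p1))

def rss2(target, p1, p2):
    """RSS of an OLS fit on two predictors, via Cramer's rule on the normal equations."""
    s11 = sum(a * a for a in p1); s22 = sum(c * c for c in p2)
    s12 = sum(a * c for a, c in zip(p1, p2))
    s1y = sum(a * t_ for a, t_ in zip(p1, target))
    s2y = sum(c * t_ for c, t_ in zip(p2, target))
    det = s11 * s22 - s12 * s12
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return sum((t_ - b1 * a - b2 * c) ** 2 for t_, a, c in zip(target, p1, p2))

# Does the past of x improve the prediction of y beyond y's own past?
print(rss2(y[1:], y[:-1], x[:-1]) / rss1(y[1:], y[:-1]))  # well below 1
# And the reverse direction?
print(rss2(x[1:], x[:-1], y[:-1]) / rss1(x[1:], x[:-1]))  # close to 1
```

    The ratio drops well below 1 only in the direction in which past x actually enters the y dynamics, which is the essence of the Granger test.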

  4. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. We speculate that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0 the distribution cannot be a pure power law but must include an exponential cutoff, a point that may have been overlooked in previous studies.
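    The normalizability argument behind that last point can be checked numerically: on an unbounded support, t^(-alpha) with alpha < 1 has a diverging integral, while the exponential cutoff tames it. The exponent and cutoff scale below are invented for illustration:

```python
import math

def tail_mass(pdf, t0, t1, steps=200000):
    """Midpoint-rule integral of pdf over [t0, t1]."""
    h = (t1 - t0) / steps
    return sum(pdf(t0 + (i + 0.5) * h) for i in range(steps)) * h

alpha, tau = 0.8, 50.0  # hypothetical fitted exponent (< 1.0) and cutoff scale
pure = lambda t: t ** (-alpha)
cutoff = lambda t: t ** (-alpha) * math.exp(-t / tau)

# Pure power law: the integral keeps growing as the upper limit increases,
# so it cannot be normalized into a probability density on [1, infinity).
print(tail_mass(pure, 1, 1e3), tail_mass(pure, 1, 1e6))
# With the exponential cutoff the integral converges to a finite constant.
print(tail_mass(cutoff, 1, 1e3), tail_mass(cutoff, 1, 1e6))
```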

  5. Characterization of intermittency in renewal processes: Application to earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
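    The return-map analysis mentioned above can be mimicked on synthetic data: for an idealized renewal process with i.i.d. Weibull interevent times, successive intervals show essentially zero correlation, which is the baseline against which the catalog data's dependence stands out. This is a sketch, not the authors' procedure:

```python
import random

random.seed(42)
# i.i.d. Weibull interevent times (shape < 1 gives a heavy tail).
gaps = [random.weibullvariate(1.0, 0.7) for _ in range(20000)]

# Return map: each interval against the next; compute their correlation.
x, y = gaps[:-1], gaps[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
r = cov / (sx * sy)
print(abs(r) < 0.05)  # True: an i.i.d. renewal process has no memory
```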

  6. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin.
This requires recording and analyzing high-speed (5-microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.

  7. Digging for Fossils in the Hertzsprung Gap

    NASA Technical Reports Server (NTRS)

    Ayres, Thomas R.

    1999-01-01

    The objective was to conduct deep (approx. 250 ks) pointings on two EUV sources, the early-F giant beta Cas and the mid-G giant mu Velorum; to obtain spectra in the range 70-300 A; and to record Deep Survey light curves over the extensive duration of each observation. We have analyzed the DS light curve and the SW spectrum, breaking the latter up into time slices corresponding to key phases of the observation: pre-flare, flare rise, and two segments of the flare decay.

  8. Equatorial waves simulated by the NCAR community climate model

    NASA Technical Reports Server (NTRS)

    Cheng, Xinhua; Chen, Tsing-Chang

    1988-01-01

    The equatorial planetary waves simulated by the NCAR CCM1 general circulation model were investigated in terms of space-time spectral analysis (Kao, 1968; Hayashi, 1971, 1973) and energetic analysis (Hayashi, 1980). These analyses are particularly applied to grid-point data on latitude circles. In order to test some physical factors which may affect the generation of tropical transient planetary waves, three different model simulations with the CCM1 (the control, the no-mountain, and the no-cloud experiments) were analyzed.

  9. Multiple Point Dynamic Gas Density Measurements Using Molecular Rayleigh Scattering

    NASA Technical Reports Server (NTRS)

    Seasholtz, Richard; Panda, Jayanta

    1999-01-01

    A nonintrusive technique for measuring dynamic gas density properties is described. Molecular Rayleigh scattering is used to measure the time-history of gas density simultaneously at eight spatial locations at a 50 kHz sampling rate. The data are analyzed using the Welch method of modified periodograms to reduce measurement uncertainty. Cross-correlations, power spectral density functions, cross-spectral density functions, and coherence functions may be obtained from the data. The technique is demonstrated using low speed co-flowing jets with a heated inner jet.
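    The Welch estimator named above averages windowed periodograms over overlapping segments. A from-scratch sketch (plain DFT, Hann window; a real analysis would use an FFT library) might look like this:

```python
import cmath, math

def welch_psd(x, seg_len, overlap=0.5):
    """Welch's method of modified (Hann-windowed, averaged) periodograms."""
    step = int(seg_len * (1 - overlap)) or 1
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (seg_len - 1))
           for n in range(seg_len)]
    norm = sum(w * w for w in win)
    psd = [0.0] * (seg_len // 2 + 1)
    count = 0
    for start in range(0, len(x) - seg_len + 1, step):
        seg = [x[start + n] * win[n] for n in range(seg_len)]
        for k in range(seg_len // 2 + 1):
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n in range(seg_len))
            psd[k] += abs(X) ** 2 / norm
        count += 1
    return [p / count for p in psd]

# A pure tone at bin 8 of a 64-point segment should dominate the spectrum.
x = [math.sin(2 * math.pi * 8 * n / 64) for n in range(640)]
psd = welch_psd(x, 64)
print(psd.index(max(psd)))  # 8
```

    Cross-spectral density and coherence between two channels follow the same pattern, with X_i(k) * conj(X_j(k)) accumulated per segment instead of |X(k)|^2.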

  10. Observations from Sarmizegetusa Sanctuary

    NASA Astrophysics Data System (ADS)

    Barbosu, M.

    2000 years ago, Sarmizegetusa Regia was the capital of ancient Dacia (today: Romania). It is known that the Dacian high priests used the Sanctuary of Sarmizegetusa not only for religious ceremonies, but also for astronomical observations. After having completed geodesic measurements, we analyzed the architecture of the sanctuary with its main points, directions and circles. We discuss here what kind of astronomical observations could have been made with the scientific knowledge of that time. The final section of this work is dedicated to the remarkable resemblance between Sarmizegetusa and Stonehenge.

  11. Wave Information Studies of US Coastlines: Hindcast Wave Information for the Great Lakes: Lake Erie

    DTIC Science & Technology

    1991-10-01

    total ice cover) for individual grid cells measuring 5 km square. 42. The GLERL analyzed each half-month data set to provide the maximum, minimum...average, median, and modal ice concentrations for each 5-km cell. The median value, which represents an estimate of the 50-percent point of the ice...incorporating the progression and decay of the time-dependent ice cover was complicated by the fact that different grid cell sizes were used for mapping the ice

  12. Computerized Measurement and Tracking of Acoustical Resonances.

    DTIC Science & Technology

    1982-12-01

    inaccuracies which were sweep rate dependent and data points which were not necessarily equally spaced. Method c was not utilized due to the inability...of the output. B. CONCLUSIONS FROM TASK COMPLETION Upon completing tasks 1 and 2, it was determined that method b (frequency synthesizer/phase...following method. The synthesizer outputs a frequency off resonance into the resonator and the lock-in analyzer is sampled one hundred times. The mean and

  13. Mathematical correlation of modal-parameter-identification methods via system-realization theory

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    1987-01-01

    A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.

  14. Investigation of Post-mortem Tissue Effects Using Long-time Decorrelation Ultrasound

    NASA Astrophysics Data System (ADS)

    Csány, Gergely; Balogh, Lajos; Gyöngy, Miklós

    Decorrelation ultrasound is increasingly being used to investigate long-term biological phenomena. In the current work, ultrasound image sequences of mice that did not survive anesthesia (in a separate investigation) were analyzed, and post-mortem tissue effects were observed via decorrelation calculation. A method was developed to obtain a quantitative parameter characterizing the rate of decorrelation. The results show that ultrasound decorrelation imaging is an effective method of observing post-mortem tissue effects, and they point to further studies elucidating the mechanism behind these effects.

  15. Applying Multivariate Adaptive Splines to Identify Genes With Expressions Varying After Diagnosis in Microarray Experiments.

    PubMed

    Duan, Fenghai; Xu, Ye

    2017-01-01

    Our objective was to analyze a microarray experiment to identify genes with expressions varying after the diagnosis of breast cancer. A total of 44 928 probe sets in an Affymetrix microarray data set, publicly available on Gene Expression Omnibus from 249 patients with breast cancer, were analyzed by nonparametric multivariate adaptive splines. The identified genes with turning points were then grouped by K-means clustering, and their network relationships were subsequently analyzed by Ingenuity Pathway Analysis. In total, 1640 probe sets (genes) were reliably identified as having turning points along with the age at diagnosis in their expression profiles, of which 927 expressed lower after their turning points and 713 expressed higher after their turning points. K-means clustering grouped them into 3 groups with turning points centered at ages 54, 62.5, and 72, respectively. The pathway analysis showed that the identified genes were actively involved in various cancer-related functions or networks. In this article, we applied the nonparametric multivariate adaptive splines method to publicly available gene expression data and successfully identified genes with expressions varying before and after breast cancer diagnosis.
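    The K-means step used to group the turning points can be sketched for 1-D data; the deterministic spread initialization and the toy ages below are our own choices for reproducibility, not the paper's:

```python
def kmeans_1d(values, k, iters=50):
    """Plain Lloyd's algorithm for 1-D data (requires k >= 2).

    Centers are initialized by spreading them across the sorted values,
    which keeps the toy example deterministic.
    """
    svals = sorted(values)
    centers = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical turning-point ages clustered into three groups.
ages = [53, 54, 55, 62, 63, 62.5, 71, 72, 73]
print(kmeans_1d(ages, 3))  # [54.0, 62.5, 72.0]
```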

  16. A vertically-stacked, polymer, microfluidic point mutation analyzer: Rapid, high accuracy detection of low-abundance K-ras mutations

    PubMed Central

    Han, Kyudong; Lee, Tae Yoon; Nikitopoulos, Dimitris E.; Soper, Steven A.; Murphy, Michael C.

    2011-01-01

    Recognition of point mutations in the K-ras gene can be used for the clinical management of several types of cancers. Unfortunately, several assay and hardware concerns must be addressed to allow users not well-trained in performing molecular analyses the opportunity to undertake these measurements. To provide for a larger user-base for these types of molecular assays, a vertically-stacked microfluidic analyzer with a modular architecture and process automation was developed. The analyzer employed a primary PCR coupled to an allele-specific ligase detection reaction (LDR). Each functional device, including continuous flow thermal reactors for the PCR and LDR, passive micromixers and ExoSAP-IT® purification, was designed and tested. Individual devices were fabricated in polycarbonate using hot embossing and assembled using adhesive bonding for system assembly. The system produced LDR products from a DNA sample in ~1 h, an 80% reduction in time compared to conventional bench-top instrumentation. Purifying the post-PCR products with the ExoSAP-IT® enzyme led to optimized LDR performance minimizing false positive signals and producing reliable results. Mutant alleles in genomic DNA were quantified to the level of 0.25 ng of mutant DNA in 50 ng of wild-type DNA for a 25 μL sample, equivalent to DNA from 42 mutant cells. PMID:21771577

  17. Multi-scale fluctuation analysis of precipitation in Beijing by Extreme-point Symmetric Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Duan, Zhipeng; Huang, Jing

    2018-06-01

    With the aggravation of the global climate change, the shortage of water resources in China is becoming more and more serious. Using reasonable methods to study changes in precipitation is very important for planning and management of water resources. Based on the time series of precipitation in Beijing from 1951 to 2015, the multi-scale features of precipitation are analyzed by the Extreme-point Symmetric Mode Decomposition (ESMD) method to forecast the precipitation shift. The results show that the precipitation series have periodic changes of 2.6, 4.3, 14 and 21.7 years, and the variance contribution rate of each modal component shows that the inter-annual variation dominates the precipitation in Beijing. It is predicted that precipitation in Beijing will continue to decrease in the near future.

  18. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale integration of distributed generation can relieve current environmental pressures while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed generation can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power sources was analyzed and, from the point of view of improving the learning factors and the inertia weight, an improved particle swarm optimization algorithm (IPSO) was proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can well reduce the system network loss and improve the economic performance of system operation with distributed generation.
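    As a rough sketch of the underlying optimizer (standard PSO, not the paper's improved variant): the inertia weight w and learning factors c1/c2 below are exactly the knobs the authors tune, and the objective is a toy stand-in for the network-loss criterion:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f on a 1-D interval with basic particle swarm optimization.

    w is the inertia weight; c1/c2 are the cognitive and social learning
    factors.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]              # per-particle best positions
    gbest = min(pos, key=f)    # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (best[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest

# Toy stand-in for the network-loss objective: minimum at x = 2.
loss = lambda x: (x - 2.0) ** 2 + 1.0
x_opt = pso(loss, (-10.0, 10.0))
print(round(x_opt, 2))  # converges near 2.0
```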

  19. Monitoring dynamic loads on wind tunnel force balances

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T.; White, William C.

    1989-01-01

    Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. The display of each instrument is a multiplex of six analog signals such that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format thus permits the operator to quickly and easily monitor the combined static and dynamic levels of up to six channels at the same time.

  20. Research on the range side lobe suppression method for modulated stepped frequency radar signals

    NASA Astrophysics Data System (ADS)

    Liu, Yinkai; Shan, Tao; Feng, Yuan

    2018-05-01

    The magnitude of the time-domain range sidelobes of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped frequency radar imaging in a real environment is analyzed first. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to the minimum sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, outperforming common sidelobe suppression methods and avoiding the coverage of weak scattering points by strong scattering points due to high sidelobes.

  1. The Advantage of Playing Home in NBA: Microscopic, Team-Specific and Evolving Features

    PubMed Central

    Ribeiro, Haroldo V.; Mukherjee, Satyam; Zeng, Xiao Han T.

    2016-01-01

    The idea that the success rate of a team increases when playing home is broadly accepted and documented for a wide variety of sports. Investigations on the so-called “home advantage phenomenon” date back to the 70s, and it has ever since attracted the attention of scholars and sport enthusiasts. These studies have mainly focused on identifying the phenomenon and trying to correlate it with external factors such as crowd noise and referee bias. Much less is known about the effects of home advantage on the “microscopic” dynamics of the game (within the game) or about possible team-specific and evolving features of this phenomenon. Here we present a detailed study of these features in the National Basketball Association (NBA). By analyzing play-by-play events of more than sixteen thousand games that span thirteen NBA seasons, we have found that home advantage affects the microscopic dynamics of the game by increasing the scoring rates and decreasing the time intervals between scores of teams playing home. We verified that these two features differ among the NBA teams; for instance, the scoring rate of the Cleveland Cavaliers is increased by ≈0.16 points per minute (on average over the seasons 2004–05 to 2013–14) when playing home, whereas for the New Jersey Nets (now the Brooklyn Nets) this rate increases by only ≈0.04 points per minute. We further observed that these microscopic features have evolved over time in a non-trivial manner when analyzing the results team-by-team. However, after averaging over all teams some regularities emerge; in particular, we noticed that the average differences in the scoring rates and in the characteristic times (related to the time intervals between scores) have slightly decreased over time, suggesting a weakening of the phenomenon. This study thus adds evidence of the home advantage phenomenon and contributes to a deeper understanding of this effect over the course of games. PMID:27015636

  3. Effects of the H-3 Highway Stormwater Runoff on the Water Quality of Halawa Stream, Oahu, Hawaii, November 1998 to August 2004

    USGS Publications Warehouse

    Wolff, Reuben H.; Wong, Michael F.

    2008-01-01

    Since November 1998, water-quality data have been collected from the H-3 Highway Storm Drain C, which collects runoff from a 4-mi-long viaduct, and from Halawa Stream on Oahu, Hawaii. From January 2001 to August 2004, data were collected from the storm drain and four stream sites in the Halawa Stream drainage basin as part of the State of Hawaii Department of Transportation Storm Water Monitoring Program. Data from the stormwater monitoring program have been published in annual reports. This report uses these water-quality data to explore how the highway storm-drain runoff affects Halawa Stream and the factors that might be controlling the water quality in the drainage basin. In general, concentrations of nutrients, total dissolved solids, and total suspended solids were lower in highway runoff from Storm Drain C than at stream sites upstream and downstream of Storm Drain C. The opposite trend was observed for most trace metals, which generally occurred in higher concentrations in the highway runoff from Storm Drain C than in the samples collected from Halawa Stream. The absolute contribution from Storm Drain C highway runoff, in terms of total storm loads, was much smaller than at stations upstream and downstream, whereas the constituent yields (the relative contribution per unit drainage basin area) at Storm Drain C were comparable to or higher than storm yields at stations upstream and downstream. Most constituent concentrations and loads in stormwater runoff increased in a downstream direction. The timing of the storm sampling is an important factor controlling constituent concentrations observed in stormwater runoff samples. Automated point samplers were used to collect grab samples during the period of increasing discharge of the storm throughout the stormflow peak and during the period of decreasing discharge of the storm, whereas manually collected grab samples were generally collected during the later stages near the end of the storm. 
    Grab samples were analyzed to determine concentrations and loads at a particular point in time. Flow-weighted time-composite samples from the automated point samplers were analyzed to determine mean constituent concentrations or loads during a storm. Chemical analysis of individual grab samples from the automated point sampler at Storm Drain C demonstrated the "first flush" phenomenon (higher constituent concentrations at the beginning of runoff events) for the trace metals cadmium, lead, zinc, and copper, whose concentrations were initially high during the period of increasing discharge and gradually decreased over the duration of the storm. Water-quality data from Storm Drain C and four stream sites were compared to the State of Hawaii Department of Health (HDOH) water-quality standards to determine the effects of highway storm runoff on the water quality of Halawa Stream. The geometric-mean standards and the 10- and 2-percent-of-the-time concentration standards for total nitrogen, nitrite plus nitrate, total phosphorus, total suspended solids, and turbidity were exceeded in many of the comparisons. However, these standards were not designed for stormwater sampling, in which constituent concentrations would be expected to increase for short periods of time. With the aim of enhancing the usefulness of the water-quality data, several modifications to the stormwater monitoring program are suggested.
    These suggestions include (1) periodically analyzing discrete samples from the automated point samplers over the course of a storm to obtain a clearer profile of the storm, from first flush to the end of the receding discharge; (2) adding analysis of the dissolved fractions of metals to the sampling plan; (3) installing an automatic sampler at Bridge 8 to enable sampling earlier in storms; (4) a one-time sampling and analysis of soils upstream of Bridge 8 for baseline contaminant concentrations; (5) collection of samples from Halawa Stream during low-flow conditions
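
The load-versus-yield distinction in this record (small absolute loads at Storm Drain C, but per-area yields comparable to or higher than the stream stations) is simple unit arithmetic. A minimal sketch with hypothetical concentrations, runoff volumes, and basin areas, not the report's data:

```python
def storm_load(conc_mg_per_L, runoff_m3):
    """Total storm load in kg: concentration (mg/L) x runoff volume (m^3).
    1 m^3 = 1000 L and 1 mg = 1e-6 kg, so the conversion factor is 1e-3."""
    return conc_mg_per_L * runoff_m3 * 1e-3

def yield_per_area(load_kg, basin_km2):
    """Constituent yield: load normalized by drainage-basin area (kg/km^2)."""
    return load_kg / basin_km2

# A small drain can carry a modest load yet a high yield (hypothetical numbers):
load_drain = storm_load(0.8, 5_000)      # small highway drain, ~4 kg
load_stream = storm_load(0.5, 200_000)   # large downstream station, ~100 kg
print(round(yield_per_area(load_drain, 0.5), 2),
      round(yield_per_area(load_stream, 20.0), 2))  # 8.0 5.0
```

The drain's load is 25 times smaller, but its yield per square kilometer is higher, which is the pattern the report describes.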

  4. Design of the Annular Suspension and Pointing System (ASPS) (including design addendum)

    NASA Technical Reports Server (NTRS)

    Cunningham, D.; Gismondi, T.; Hamilton, B.; Kendig, J.; Kiedrowski, J.; Vroman, A.; Wilson, G.

    1980-01-01

    The Annular Suspension and Pointing System is an experiment pointing mount designed for extremely precise three-axis orientation of shuttle experiments. It utilizes actively controlled magnetic bearings to provide noncontacting vernier pointing and translational isolation of the experiment. The design of the system is presented and analyzed.

  5. [Research on human movement with noninvasive tissue oximeter using near infrared spectroscopy].

    PubMed

    Lin, Hong; Xi, Yu-bao; Yu, Hui

    2014-06-01

    The present paper discusses how to monitor and analyze relative changes in muscle oxygen content in quadriceps tissue, using a noninvasive tissue oximeter based on near-infrared spectroscopy (NIRS) developed independently in China, while also recording changes in blood lactate concentration, blood volume, and heart rate as eight trained middle-distance runners performed graded incremental-intensity exercise on a cycle ergometer. The results show that muscle oxygen content is closely related (p < 0.01) to exercise load, blood lactate, blood volume, and heart rate. When muscle oxygen content and blood lactate concentration were measured repeatedly in the same subject, they showed a consistent pattern of falling and rising, and horizontal comparison of the data sets revealed no significant differences. This verifies that a subject's tolerable exercise intensity, and the upward inflection point of blood lactate corresponding to the downward inflection point of muscle oxygen content, can be judged from the change in skeletal-muscle oxygen content during exercise. The paper also surveys the status of NIRS research in sports science and analyzes the main obstacles and likely research directions. The muscle oxygen content measured by the domestically produced NIRS tissue oximeter is a sensitive, nondestructive, up-to-date, and reliable index, with clear advantages over traditional invasive and cumbersome test methods.

  6. "Photographing money" task pricing

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiang

    2018-05-01

    "Photographing money" [1]is a self-service model under the mobile Internet. The task pricing is reasonable, related to the success of the commodity inspection. First of all, we analyzed the position of the mission and the membership, and introduced the factor of membership density, considering the influence of the number of members around the mission on the pricing. Multivariate regression of task location and membership density using MATLAB to establish the mathematical model of task pricing. At the same time, we can see from the life experience that membership reputation and the intensity of the task will also affect the pricing, and the data of the task success point is more reliable. Therefore, the successful point of the task is selected, and its reputation, task density, membership density and Multiple regression of task positions, according to which a nhew task pricing program. Finally, an objective evaluation is given of the advantages and disadvantages of the established model and solution method, and the improved method is pointed out.

  7. Trend and change point analyses of annual precipitation in the Souss-Massa Region in Morocco during 1932-2010

    NASA Astrophysics Data System (ADS)

    Abahous, H.; Ronchail, J.; Sifeddine, A.; Kenny, L.; Bouchaou, L.

    2017-11-01

    In the context of an arid area such as the Souss-Massa Region, time series analysis of observed local data is vital to better characterize the regional rainfall regime. In this paper, a dataset of monthly precipitation collected from different local meteorological stations during 1932-2010 is quality controlled and analyzed to detect trends and change points. The temporal distribution of outliers shows an annual cycle and a decrease in their number since the 1980s. The results of the standard normal homogeneity test, penalized maximal t test, and Mann-Whitney-Pettitt test show that 42% of the series are homogeneous. The analysis of annual precipitation in the Souss-Massa region during 1932-2010 shows wet conditions with a maximum between 1963 and 1965, followed by a decrease since 1973. The latter is identified as a statistically significant regional change point in the Western High Atlas and Anti-Atlas Mountains, highlighting a decline in long-term average precipitation.
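
The Mann-Whitney-Pettitt change-point test used in this study can be sketched in a few lines. The series below is synthetic, loosely mimicking a downward shift like the post-1973 decline, not the Souss-Massa data:

```python
import math

def pettitt(series):
    """Pettitt change-point test: U_t = sum over i<=t, j>t of sign(x_j - x_i).
    The most probable change point maximizes |U_t|; the approximate two-sided
    p-value is 2*exp(-6*K^2 / (n^3 + n^2)), where K = max|U_t|."""
    n = len(series)
    sign = lambda v: (v > 0) - (v < 0)
    U = [sum(sign(series[j] - series[i])
             for i in range(t) for j in range(t, n))
         for t in range(1, n)]
    K = max(abs(u) for u in U)
    t_change = max(range(len(U)), key=lambda k: abs(U[k])) + 1
    p = min(1.0, 2.0 * math.exp(-6.0 * K * K / (n ** 3 + n ** 2)))
    return t_change, K, p

# Synthetic "annual precipitation" with a downward shift after year 20:
x = [10.0] * 20 + [4.0] * 20
t, K, p = pettitt(x)
print(t, K, p < 0.05)  # 20 400 True
```

The statistic peaks exactly where the shift occurs, and the approximate p-value flags it as significant.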

  8. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting, with explicit formulas for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
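
The Monte Carlo realization of fission chains mentioned above can be illustrated with a much simplified branching model: no timing, fixed multiplicity, and a single loss channel split between capture and leakage. This is a pedagogical sketch, not the authors' code:

```python
import random

def simulate_chain(p_fission, leak_frac, multiplicity, rng, max_pop=10_000):
    """One fission chain started by a single neutron. Each neutron either
    induces a fission (probability p_fission, producing `multiplicity` new
    neutrons) or is lost; a fraction leak_frac of lost neutrons leaks out and
    would be countable. Returns (fissions, leaked). A subcritical system
    (p_fission * multiplicity < 1) terminates with probability 1."""
    pop, fissions, leaked = 1, 0, 0
    while 0 < pop < max_pop:
        pop -= 1
        if rng.random() < p_fission:
            fissions += 1
            pop += multiplicity
        elif rng.random() < leak_frac:
            leaked += 1
    return fissions, leaked

rng = random.Random(1)
chains = [simulate_chain(0.3, 0.5, 2, rng) for _ in range(20_000)]
mean_fissions = sum(f for f, _ in chains) / len(chains)
# For this branching process, E[neutrons per chain] = 1/(1 - p*nu) = 2.5,
# so E[fissions per chain] = p/(1 - p*nu) = 0.3/0.4 = 0.75.
print(round(mean_fissions, 2))
```

Tallying `(fissions, leaked)` pairs over many chains is how such a simulation statistically realizes the correlated counting distributions described in the record.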

  9. An improved grey model for the prediction of real-time GPS satellite clock bias

    NASA Astrophysics Data System (ADS)

    Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.

    2008-07-01

    In real-time GPS precise point positioning (PPP), reliable real-time prediction of satellite clock bias (SCB) is key. The behavior of a space-borne GPS atomic clock is difficult to describe deterministically because of its high frequency and its sensitivity to environmental effects; this matches the premise of grey model (GM) theory, i.e. the evolution of SCB can be treated as a grey system. Firstly, noting the limits of the quadratic polynomial (QP) model and the traditional GM for SCB prediction, a modified GM(1,1) is put forward in this paper to predict GPS SCB. Then, using GPS SCB data as an example, we analyzed clock bias prediction at different sampling intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with the QP model, and derived a general rule relating SCB type to the GM exponent. Finally, to test the reliability and validity of the modified GM, we analyzed its prediction precision using the IGS clock bias ephemeris product as a reference. The results show that the modified GM is reliable and valid for predicting GPS SCB and can provide high-precision SCB predictions for real-time GPS PPP.
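
A sketch of the classic (unmodified) GM(1,1) grey model that this paper builds on; the paper's modification is not specified in the abstract, so this only illustrates the baseline method on a synthetic exponential series:

```python
import math

def gm11(x0, steps=1):
    """Classic GM(1,1): fit the grey equation x0(k) + a*z1(k) = b by least
    squares on the accumulated series x1 (cumulative sum of x0), then
    forecast `steps` future values of x0 from the time-response function."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values z1(k) = 0.5*(x1(k) + x1(k-1)), k = 1..n-1
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    szz = sum(v * v for v in z); sz = sum(z)
    sy = sum(y); szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det          # development coefficient
    b = (szz * sy - sz * szy) / det        # grey input
    def x1_hat(k):                         # time-response function
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n - 1 + s) - x1_hat(n - 2 + s) for s in range(1, steps + 1)]

# A geometric series is nearly an exact grey-exponential, so GM(1,1)
# extrapolates it closely:
series = [2.0 * 1.1 ** k for k in range(6)]
pred = gm11(series, steps=2)
print([round(v, 3) for v in pred])
```

The small residual error against the true continuation (2·1.1⁶ ≈ 3.543, 2·1.1⁷ ≈ 3.897) comes from the discrete-to-continuous approximation inside GM(1,1), which is one motivation for modified variants like the one the paper proposes.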

  10. Using Multiorder Time-Correlation Functions (TCFs) To Elucidate Biomolecular Reaction Pathways from Microsecond Single-Molecule Fluorescence Experiments.

    PubMed

    Phelps, Carey; Israels, Brett; Marsh, Morgan C; von Hippel, Peter H; Marcus, Andrew H

    2016-12-29

    Recent advances in single-molecule fluorescence imaging have made it possible to perform measurements on microsecond time scales. Such experiments have the potential to reveal detailed information about the conformational changes in biological macromolecules, including the reaction pathways and dynamics of the rearrangements involved in processes, such as sequence-specific DNA "breathing" and the assembly of protein-nucleic acid complexes. Because microsecond-resolved single-molecule trajectories often involve "sparse" data, that is, they contain relatively few data points per unit time, they cannot be easily analyzed using the standard protocols that were developed for single-molecule experiments carried out with tens-of-millisecond time resolution and high "data density." Here, we describe a generalized approach, based on time-correlation functions, to obtain kinetic information from microsecond-resolved single-molecule fluorescence measurements. This approach can be used to identify short-lived intermediates that lie on reaction pathways connecting relatively long-lived reactant and product states. As a concrete illustration of the potential of this methodology for analyzing specific macromolecular systems, we accompany the theoretical presentation with the description of a specific biologically relevant example drawn from studies of reaction mechanisms of the assembly of the single-stranded DNA binding protein of the T4 bacteriophage replication complex onto a model DNA replication fork.
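
A second-order time-correlation function of the kind this approach generalizes can be computed directly from a trajectory. Below is a minimal sketch on a synthetic two-state ("telegraph") signal, not data from the study:

```python
import random

def tcf(signal, max_lag):
    """Second-order time-correlation function C(tau) = <dI(t) dI(t+tau)>
    of the fluctuations dI = I - <I>, one value per lag tau."""
    n = len(signal)
    mean = sum(signal) / n
    d = [v - mean for v in signal]
    return [sum(d[t] * d[t + lag] for t in range(n - lag)) / (n - lag)
            for lag in range(max_lag + 1)]

rng = random.Random(7)
# Telegraph signal: switching probability 0.02 per (microsecond) step, so the
# normalized TCF should decay roughly as exp(-2 * 0.02 * tau).
state, traj = 1.0, []
for _ in range(100_000):
    if rng.random() < 0.02:
        state = -state
    traj.append(state)
c = tcf(traj, 50)
print(round(c[25] / c[0], 2))  # ~ exp(-1) ≈ 0.37 at tau = 25
```

Higher-order (multipoint) TCFs, built the same way from products of three or more lagged fluctuations, are what let the method distinguish short-lived on-pathway intermediates from two-state behavior.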

  11. Integrating Microscopic Analysis into Existing Quality Assurance Processes

    NASA Astrophysics Data System (ADS)

    Frühberger, Peter; Stephan, Thomas; Beyerer, Jürgen

    When technical goods, like mainboards and other electronic components, are produced, quality assurance (QA) is very important. To achieve this goal, different optical microscopes can be used to analyze a variety of specimens, combining the acquired sensor data into comprehensive information. In many industrial processes, cameras are used to examine these technical goods. Such cameras can analyze complete boards at once and offer a high level of accuracy when used for completeness checks, but when small features, e.g. soldered points, need to be examined in detail, these wide-area cameras are limited. Microscopes with large magnification must then be used to analyze the critical areas. Microscopes alone, however, cannot fulfill this task within a limited time schedule, because microscopic analysis of complete motherboards of a certain size is time demanding. Microscopes are limited in their depth of field and depth of focus, which is why additional components like XY moving tables are needed to examine the complete surface. Yet today's industrial production quality standards require 100% control of the soldered components within a given time schedule. This level of quality, while keeping inspection time low, can only be achieved by combining multiple inspection devices in an optimized manner. This paper presents results and methods of combining industrial cameras with microscopy using a classification-based approach, keeping already deployed QA processes in place but extending them to increase the quality level of the produced technical goods while maintaining high throughput.

  12. Diabatization for Time-Dependent Density Functional Theory: Exciton Transfers and Related Conical Intersections.

    PubMed

    Tamura, Hiroyuki

    2016-11-23

    Intermolecular exciton transfers and related conical intersections are analyzed by diabatization for time-dependent density functional theory. The diabatic states are expressed as a linear combination of the adiabatic states so as to emulate the well-defined reference states. The singlet exciton coupling calculated by the diabatization scheme includes contributions from the Coulomb (Förster) and electron exchange (Dexter) couplings. For triplet exciton transfers, the Dexter coupling, charge transfer integral, and diabatic potentials of stacked molecules are calculated for analyzing direct and superexchange pathways. We discuss some topologies of molecular aggregates that induce conical intersections on the vanishing points of the exciton coupling, namely boundary of H- and J-aggregates and T-shape aggregates, as well as canceled exciton coupling to the bright state of H-aggregate, i.e., selective exciton transfer to the dark state. The diabatization scheme automatically accounts for the Berry phase by fixing the signs of reference states while scanning the coordinates.

  13. Statistical attribution analysis of the nonstationarity of the annual runoff series of the Weihe River.

    PubMed

    Xiong, Lihua; Jiang, Cong; Du, Tao

    2014-01-01

    Time-varying moments models based on the Pearson Type III and normal distributions, respectively, are built under the generalized additive models in location, scale and shape (GAMLSS) framework to analyze the nonstationarity of the annual runoff series of the Weihe River, the largest tributary of the Yellow River. The detection of nonstationarities in hydrological time series (annual runoff, precipitation, and temperature) from 1960 to 2009 is carried out using a GAMLSS model, and then the covariate analysis for the annual runoff series is implemented with GAMLSS. Finally, the contribution of each covariate to the nonstationarity of annual runoff is analyzed quantitatively. The results demonstrate that (1) obvious change points exist in all three hydrological series, (2) precipitation, temperature, and irrigated area are all significant covariates of the annual runoff series, and (3) temperature increase plays the main role in the reduction of annual runoff in the study basin, followed by the decrease in precipitation and the increase in irrigated area.
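
GAMLSS itself models several distribution parameters as smooth functions of covariates; a heavily simplified analog, comparing a constant-mean normal model against a time-varying-mean model by AIC, illustrates why a nonstationary model is preferred for a declining series. The data below are synthetic:

```python
import math

def aic_normal(y, yhat, k):
    """AIC for a normal model with k mean parameters (plus sigma):
    n*log(RSS/n) + 2*(k + 1), dropping additive constants."""
    n = len(y)
    rss = sum((a - b) ** 2 for a, b in zip(y, yhat))
    return n * math.log(rss / n) + 2 * (k + 1)

def fit_trend(y):
    """Least-squares line y = b0 + b1*t, returning fitted values."""
    n = len(y); t = list(range(n))
    tm, ym = sum(t) / n, sum(y) / n
    b1 = (sum((a - tm) * (b - ym) for a, b in zip(t, y))
          / sum((a - tm) ** 2 for a in t))
    b0 = ym - b1 * tm
    return [b0 + b1 * a for a in t]

# Synthetic declining "annual runoff" with alternating noise:
y = [100 - 0.8 * t + ((-1) ** t) * 3 for t in range(50)]
const = [sum(y) / len(y)] * len(y)
aic_const = aic_normal(y, const, 1)
aic_trend = aic_normal(y, fit_trend(y), 2)
print(aic_trend < aic_const)  # the time-varying mean wins: True
```

GAMLSS extends this idea beyond the mean to the scale and shape parameters, and to non-normal distributions such as Pearson Type III.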

  14. [Wireless digital radiography detectors in the emergency area: an efficacious solution].

    PubMed

    Garrido Blázquez, M; Agulla Otero, M; Rodríguez Recio, F J; Torres Cabrera, R; Hernando González, I

    2013-01-01

    To evaluate the implementation of a flat-panel digital radiography (DR) system with WiFi technology in an emergency radiology area in which a computed radiography (CR) system was previously used. We analyzed aspects related to image quality, radiation dose, workflow, and ergonomics. We compared the image quality obtained with the CR and WiFi DR systems, both in images obtained using a phantom and in radiologists' evaluations of radiological images of real patients. We also analyzed the time required for image acquisition and the workflow with the two technological systems. Finally, we analyzed data on the radiation dose to patients before and after the implementation of the new equipment. Image quality improved both in the tests carried out with a phantom and in radiological images obtained in patients, increasing from 3 to 4.5 on a 5-point scale. The average time required for image acquisition decreased by 25 seconds per image. The flat panel required less radiation in practically all the techniques carried out using automatic dosimetry, although statistically significant differences were found in only some of the techniques (chest, thoracic spine, and lumbar spine). Implementing the WiFi DR system has brought benefits: image quality has improved and the radiation dose to patients has decreased. The new system also has advantages in terms of functionality, ergonomics, and performance. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.

  15. Effect of Energy Drinks on Discoloration of Silorane and Dimethacrylate-Based Composite Resins.

    PubMed

    Ahmadizenouz, Ghazaleh; Esmaeili, Behnaz; Ahangari, Zohreh; Khafri, Soraya; Rahmani, Aghil

    2016-08-01

    This study aimed to assess the effects of two energy drinks on the color change (ΔE) of two methacrylate-based and a silorane-based composite resin after one week and one month. Thirty cubic samples were fabricated from Filtek P90, Filtek Z250 and Filtek Z350XT composite resins. All the specimens were stored in distilled water at 37°C for 24 hours. Baseline color values (L*a*b*) of each specimen were measured using a spectrophotometer according to the CIEL*a*b* color system. Ten randomly selected specimens from each composite were then immersed in the two energy drinks (Hype, Red Bull) and artificial saliva (control) for one week and one month. Color was re-assessed after each storage period and ΔE values were calculated. The data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. Filtek Z250 composite showed the highest ΔE irrespective of the solutions at both time points. After seven days and one month, the lowest ΔE values were observed in Filtek Z350XT and Filtek P90 composites immersed in artificial saliva, respectively. The ΔE values of Filtek Z250 and Z350XT composites induced by Red Bull and Hype energy drinks were not significantly different. Discoloration of Filtek P90 was higher in Red Bull energy drink at both time points. Prolonged immersion time in all three solutions increased ΔE values of all composites. However, the ΔE values were within the clinically acceptable range (<3.3) at both time points.
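
The ΔE values discussed here are CIE76 color differences, i.e. Euclidean distances in L*a*b* space. A minimal sketch with hypothetical spectrophotometer readings, not the study's measurements:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two L*a*b* triples.
    Values below ~3.3 are commonly taken as clinically acceptable."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

baseline = (75.0, 2.0, 18.0)  # hypothetical composite-resin reading
after = (73.0, 3.0, 20.0)     # hypothetical reading after immersion
de = delta_e(baseline, after)
print(round(de, 2), de < 3.3)  # 3.0 True
```

Later ΔE formulas (CIE94, CIEDE2000) weight the L*, a*, b* axes nonuniformly, but CIE76 matches the CIEL*a*b* system named in the abstract.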

  16. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture and compare the resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
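
The grid-size and speedup figures quoted above can be checked with simple arithmetic:

```python
# Densest grid: 13 blocks of 1033 x 1033 points each.
blocks = 13
points_per_block = 1033 * 1033            # 1,067,089 ~= 1.07 million
total_points = blocks * points_per_block  # 13,872,157 ~= 13.87 million
print(points_per_block, total_points)

# 8 GPUs are ~6.0x faster than 8 CPUs, and ~39.5x faster than one CPU,
# implying the 8-CPU run was about 39.5 / 6.0 ~= 6.6x faster than one CPU,
# i.e. close to linear CPU scaling.
print(round(39.5 / 6.0, 1))  # 6.6
```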

  17. Inferring rate and state friction parameters from a rupture model of the 1995 Hyogo-ken Nanbu (Kobe) Japan earthquake

    USGS Publications Warehouse

    Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.

    2001-01-01

    We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrades the resolution of the frictional parameters and suggests that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≈ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain Gc = 1.5×10^6 J m^-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
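
The Dieterich-Ruina rate-and-state law referred to above has the standard form mu = mu0 + a·ln(v/v0) + b·ln(v0·theta/dc). The parameter values below are illustrative, not those inferred for the Kobe rupture; only dc is drawn from the 0.01-0.05 m range the study reports:

```python
import math

def dieterich_ruina(v, theta, mu0=0.6, a=0.01, b=0.014, v0=1e-6, dc=0.02):
    """Dieterich-Ruina friction coefficient
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc),
    with slip rate v (m/s), state variable theta (s), and critical slip
    distance dc (m). Parameter values here are illustrative only."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

# At steady state theta_ss = dc / v, so mu_ss = mu0 + (a - b) * ln(v/v0);
# with a < b the fault is velocity-weakening (friction drops as v rises):
v = 1e-4
mu_ss = dieterich_ruina(v, theta=0.02 / v)
print(round(mu_ss, 4))  # 0.6 + (0.01 - 0.014)*ln(100) = 0.5816
```

Fitting a law of this form to inverted stress-change histories at many points on the fault is how the study constrains dc.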

  18. Contact angle of unset elastomeric impression materials.

    PubMed

    Menees, Timothy S; Radhakrishnan, Rashmi; Ramp, Lance C; Burgess, John O; Lawson, Nathaniel C

    2015-10-01

    Some elastomeric impression materials are hydrophobic, and it is often necessary to take definitive impressions of teeth coated with some saliva. New hydrophilic materials have been developed. The purpose of this in vitro study was to compare contact angles of water and saliva on 7 unset elastomeric impression materials at 5 time points from the start of mixing. Two traditional polyvinyl siloxane (PVS) (Aquasil, Take 1), 2 modified PVS (Imprint 4, Panasil), a polyether (Impregum), and 2 hybrid (Identium, EXA'lence) materials were compared. Each material was flattened to 2 mm and a 5 μL drop of distilled water or saliva was dropped on the surface at 25 seconds (t0) after the start of mix. Contact angle measurements were made with a digital microscope at initial contact (t0), t1=2 seconds, t2=5 seconds, t3=50% working time, and t4=95% working time. Data were analyzed with a generalized linear mixed model analysis, and individual 1-way ANOVA and Tukey HSD post hoc tests (α=.05). For water, materials grouped into 3 categories at all time-points: the modified PVS and one hybrid material (Identium) produced the lowest contact angles, the polyether material was intermediate, and the traditional PVS materials and the other hybrid (EXA'lence) produced the highest contact angles. For saliva, Identium, Impregum, and Imprint 4 were in the group with the lowest contact angle at most time points. Modified PVS materials and one of the hybrid materials are more hydrophilic than traditional PVS materials when measured with water. Saliva behaves differently than water in contact angle measurement on unset impression material and produces a lower contact angle on polyether based materials. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  19. Photoacoustic infrared spectroscopy for conducting gas tracer tests and measuring water saturations in landfills

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Han, Byunghyun; Mostafid, M. Erfan

    2012-02-15

    Highlights: • Photoacoustic infrared spectroscopy tested for measuring tracer gas in landfills. • Measurement errors for tracer gases were 1-3% in landfill gas. • Background signals from landfill gas result in elevated limits of detection. • Technique is much less expensive and easier to use than GC. Abstract: Gas tracer tests can be used to determine gas flow patterns within landfills, quantify volatile contaminant residence time, and measure water within refuse. While gas chromatography (GC) has been traditionally used to analyze gas tracers in refuse, photoacoustic spectroscopy (PAS) might allow real-time measurements with reduced personnel costs and greater mobility and ease of use. Laboratory and field experiments were conducted to evaluate the efficacy of PAS for conducting gas tracer tests in landfills. Two tracer gases, difluoromethane (DFM) and sulfur hexafluoride (SF6), were measured with a commercial PAS instrument. Relative measurement errors were invariant with tracer concentration but influenced by background gas: errors were 1-3% in landfill gas but 4-5% in air. Two partitioning gas tracer tests were conducted in an aerobic landfill, and limits of detection (LODs) were 3-4 times larger for DFM with PAS versus GC due to temporal changes in background signals. While higher LODs can be compensated by injecting larger tracer mass, changes in background signals increased the uncertainty in measured water saturations by up to 25% over comparable GC methods. PAS has distinct advantages over GC with respect to personnel costs and ease of use, although for field applications GC analyses of select samples are recommended to quantify instrument interferences.

  20. Laser tissue welding in genitourinary reconstructive surgery: assessment of optimal suture materials.

    PubMed

    Poppas, D P; Klioze, S D; Uzzo, R G; Schlossberg, S M

    1995-02-01

    Laser tissue welding in genitourinary reconstructive surgery has been shown in animal models to decrease operative time, improve healing, and decrease postoperative fistula formation when compared with conventional suture controls. Although the absence of suture material is the ultimate goal, this has not been shown to be practical with current technology for larger repairs. Therefore, suture-assisted laser tissue welding will likely be performed. This study sought to determine the optimal suture to be used during laser welding. The integrity of various organic and synthetic sutures exposed to laser irradiation was analyzed. Sutures studied included gut, clear Vicryl, clear polydioxanone suture (PDS), and violet PDS. Sutures were irradiated with a potassium titanyl phosphate (KTP)-532 laser or an 808-nm diode laser with and without the addition of a light-absorbing chromophore (fluorescein or indocyanine green, respectively). A remote temperature-sensing device obtained real-time surface temperatures during lasing. The average temperature, time, and total energy at break point were recorded. Overall, gut suture achieved significantly higher temperatures and withstood higher average energy delivery at break point with both the KTP-532 and the 808-nm diode lasers compared with all other groups (P < 0.05). Both chromophore-treated groups had higher average temperatures at break point combined with lower average energy. The break-point temperature for all groups other than gut occurred at 91 degrees C or less. The optimal temperature range for tissue welding appears to be between 60 degrees and 80 degrees C. Gut suture offers the greatest margin of error for KTP and 808-nm diode laser welding with or without the use of a chromophore.
